It's common to see a _var variable name in a class field. What does the underscore mean? Is there a reference for all these special naming conventions?
The underscore is simply a convention; nothing more. As such, its use varies somewhat from person to person. Here's how I understand it for the two languages in question:
In C++, an underscore usually indicates a private member variable.
In C#, I usually see it used only when defining the underlying private member variable for a public property. Other private member variables would not have an underscore. This usage has largely fallen by the wayside with the advent of automatic properties, though.
Before:
private string _name;
public string Name
{
    get { return this._name; }
    set { this._name = value; }
}
After:
public string Name { get; set; }
It is best practice NOT to use UNDERSCORES before any variable name or parameter name in C++.
Names beginning with an underscore or containing a double underscore are RESERVED for the C++ implementers; the implementation and its standard library depend on them.
If you read the C++ Coding Standards, you will see that on the very first page it says:
"Don't overlegislate naming, but do use a consistent naming convention: There are only two must-dos: a) never use "underhanded names," ones that begin with an underscore or that contain a double underscore;" (p. 2, C++ Coding Standards, Herb Sutter and Andrei Alexandrescu)
More specifically, the ISO working draft states the actual rules:
In addition, some identifiers are reserved for use by C++ implementations and shall not be used otherwise; no diagnostic is required. (a) Each identifier that contains a double underscore __ or begins with an underscore followed by an uppercase letter is reserved to the implementation for any use. (b) Each identifier that begins with an underscore is reserved to the implementation for use as a name in the global namespace.
It is best practice to avoid starting a symbol with an underscore in case you accidentally wander into one of the above limitations.
You can see for yourself why such use of underscores can be disastrous when developing software:
Try compiling a simple helloWorld.cpp program like this:
g++ -E helloWorld.cpp
You will see all that happens in the background. Here is a snippet:
ios_base::iostate __err = ios_base::iostate(ios_base::goodbit);
try
{
    __streambuf_type* __sb = this->rdbuf();
    if (__sb)
    {
        if (__sb->pubsync() == -1)
            __err |= ios_base::badbit;
        else
            __ret = 0;
    }
You can see how many names begin with double underscore!
Also, if you look at virtual member functions, you will see that _vptr is the pointer to the virtual table, which is automatically generated when you use one or more virtual member functions in your class! But that's another story...
If you use underscores you might run into conflicts, and you will have no idea what's causing them until it's too late.
Actually, the _var convention comes from VB, not C# or C++ (m_ and the like are another matter).
It arose to work around VB's case insensitivity when declaring properties.
For example, code like this isn't possible in VB, because it considers user and User to be the same identifier:
Private user As String
Public Property User As String
    Get
        Return user
    End Get
    Set(ByVal Value As String)
        user = Value
    End Set
End Property
So to overcome this, some adopted a convention of adding '_' to the private field, like this:
Private _user As String
Public Property User As String
    Get
        Return _user
    End Get
    Set(ByVal Value As String)
        _user = Value
    End Set
End Property
Since many conventions are for .NET, and to keep some uniformity between the C# and VB.NET conventions, both use the same one.
I found the reference for what I was saying:
http://10rem.net/articles/net-naming-conventions-and-programming-standards---best-practices
Camel Case with Leading Underscore. In VB.NET, always indicate "Protected" or "Private", do not use "Dim". Use of "m_" is discouraged, as is use of a variable name that differs from the property by only case, especially with protected variables as that violates compliance, and will make your life a pain if you program in VB.NET, as you would have to name your members something different from the accessor/mutator properties. Of all the items here, the leading underscore is really the only controversial one. I personally prefer it over straight underscore-less camel case for my private variables so that I don't have to qualify variable names with "this." to distinguish from parameters in constructors or elsewhere where I likely will have a naming collision. With VB.NET's case insensitivity, this is even more important as your accessor properties will usually have the same name as your private member variables except for the underscore.
As far as m_ goes, it is really just about aesthetics. I (and many others) find m_ ugly, as it looks like there is a hole in the variable name. It's almost offensive. I used to use it in VB6 all the time, but that was only because variables could not have a leading underscore. I couldn't be happier to see it go away. Microsoft recommends against the m_ (and the straight _) even though they did both in their code. Also, prefixing with a straight "m" is right out. Of course, since they code mainly in C#, they can have private members that differ only in case from the properties. VB folks have to do something else. Rather than try and come up with language-by-language special cases, I recommend the leading underscore for all languages that will support it. If I want my class to be fully CLS-compliant, I could leave off the prefix on any C# protected member variables. In practice, however, I never worry about this as I keep all potentially protected member variables private, and supply protected accessors and mutators instead.
Why: In a nutshell, this convention is simple (one character), easy to read (your eye is not distracted by other leading characters), and successfully avoids naming collisions with procedure-level variables and class-level properties.
_var has no meaning and only serves the purpose of making it easier to distinguish that the variable is a private member variable.
In C++, using the _var convention is bad form, because there are rules governing the use of a leading underscore in an identifier. _var is reserved as a global identifier, while _Var (underscore + capital letter) is reserved in any scope. This is why in C++ you'll see people use the var_ convention instead.
The first commenter (R Samuel Klatchko) referenced: What are the rules about using an underscore in a C++ identifier? which answers the question about the underscore in C++. In general, you are not supposed to use a leading underscore, as it is reserved for the implementer of your compiler. The code you are seeing with _var is probably either legacy code, or code written by someone that grew up using the old naming system which didn't frown on leading underscores.
As other answers state, it used to be used in C++ to identify class member variables. However, it has no special meaning as far as decorators or syntax goes. So if you want to use it, it will compile.
I'll leave the C# discussion to others.
You can create your own coding guidelines; just write clear documentation for the rest of the team.
Using _field helps IntelliSense filter all class variables when you just type _.
I usually follow the Brad Abrams guidelines, but they recommend not using underscores.
With C#, the Microsoft Framework Design Guidelines suggest not using the underscore character for public members. For private members, underscores are OK to use. In fact, Jeffrey Richter (often cited in the guidelines) uses m_ for instance fields and s_ for private static fields.
Personally, I use just _ to mark my private members. m_ and s_ verge on Hungarian notation, which is not only frowned upon in .NET but can be quite verbose, and I find classes with many members difficult to scan alphabetically (imagine 10 variables all starting with m_).
Old question, new answer (C#).
Another use of underscores in C# is with ASP.NET Core's DI (dependency injection). Private readonly fields of a class that are assigned the injected interface during construction should start with an underscore. I guess it's a debate whether to use an underscore for every private member of a class (although Microsoft itself follows the practice), but this one is well established.
private readonly ILogger<MyDependency> _logger;
public MyDependency(ILogger<MyDependency> logger)
{
    _logger = logger;
}
EDIT:
Microsoft has now been using underscores for all private members of a class for a while.
The Microsoft naming standard for C# says variables and parameters should use lower camel case, e.g. paramName. The standard also calls for fields to follow the same form, but this can lead to unclear code, so many teams call for an underscore prefix to improve clarity, e.g. _fieldName.
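A small sketch of that convention in practice (all names here are illustrative):
class OrderProcessor
{
    private decimal _totalAmount;              // field: underscore prefix for clarity

    public void AddItem(decimal itemPrice)     // parameter: lower camel case
    {
        decimal roundedPrice = decimal.Round(itemPrice, 2); // local: lower camel case
        _totalAmount += roundedPrice;
    }
}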
I use the _var naming for member variables of my classes. There are two main reasons I do this:
1) It helps me keep track of class variables and local function variables when I'm reading my code later.
2) It helps in Intellisense (or other code-completion system) when I'm looking for a class variable. Just knowing the first character is helpful in filtering through the list of available variables and methods.
There is a fully legit reason to use it in C#: if the code must be extensible from VB.NET as well.
(Otherwise, I would not.)
Since VB.NET is case insensitive, there is no simple way to access the protected field member in this code:
public class CSharpClass
{
    protected int field;
    public int Field { get { return field; } }
}
E.g. this will access the property getter, not the field:
Public Class VBClass
    Inherits CSharpClass

    Function Test() As Integer
        Return Field
    End Function
End Class
Heck, I cannot even write field in lowercase - VS 2010 just keeps correcting it.
In order to make it easily accessible to derived classes in VB.NET, one has to come up with another naming convention. Prefixing an underscore is probably the least intrusive and most "historically accepted" of them.
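For example, a hypothetical variation of the earlier snippet that a VB.NET subclass can then use without ambiguity:
public class CSharpClass
{
    protected int _field;                         // no longer differs from the property by case alone
    public int Field { get { return _field; } }
}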
As far as the C and C++ languages are concerned there is no special meaning to an underscore in the name (beginning, middle or end). It's just a valid variable name character. The "conventions" come from coding practices within a coding community.
As already indicated by various examples above, a leading _ may denote private or protected members of a class in C++.
Let me just give some history that may be fun trivia. In UNIX if you have a core C library function and a kernel back-end where you want to expose the kernel function to user space as well the _ is stuck in front of the function stub that calls the kernel function directly without doing anything else. The most famous and familiar example of this is exit() vs _exit() under BSD and SysV type kernels: There, exit() does user-space stuff before calling the kernel's exit service, whereas _exit just maps to the kernel's exit service.
So _ was used for "local" stuff, in this case local meaning machine-local. Typically, _functions() were not portable: you should not expect the same behaviour across various platforms.
Now as for _ in variable names, such as
int _foo;
Well, psychologically, an _ is an odd thing to have to type at the beginning. So if you want to create a variable name that has a lesser chance of clashing with something else, ESPECIALLY when dealing with pre-processor substitutions, you may want to consider using _.
My basic advice would be to always follow the convention of your coding community, so that you can collaborate more effectively.
It simply means that it's a member field of the class.
There's no particular single naming convention, but I've seen that for private members.
Many people like to have private fields prefixed with an underscore. It is just a naming convention.
C#'s 'official' naming conventions prescribe simple lowercase names (no underscore) for private fields.
I'm not aware of standard conventions for C++, although underscores are very widely used.
It's just a convention some programmers use to make it clear when you're manipulating a member of the class or some other kind of variable (parameters, local to the function, etc). Another convention that's also in wide use for member variables is prefixing the name with 'm_'.
Anyway, these are only conventions and you will not find a single source for all of them. They're a matter of style and each programming team, project or company has their own (or even don't have any).
Now, the notation using "this", as in this.foobarbaz, is acceptable for C# class member variables. It replaces the old "m_" or just "__" notation, and it does make the code more readable because there is no doubt about what is being referenced.
From my experience (certainly limited), an underscore will indicate that it is a private member variable. As Gollum said, this will depend on the team, though.
A naming convention like this is useful when you are reading code, particularly code that is not your own. A strong naming convention helps indicate where a particular member is defined, what kind of member it is, etc. Most development teams adopt a simple naming convention and simply prefix member fields with an underscore (_fieldName). In the past, I have used the following naming convention for C# (which is based on Microsoft's conventions for the .NET framework code, as can be seen with Reflector):
Instance Field: m_fieldName
Static Field: s_fieldName
Public/Protected/Internal Member: PascalCasedName()
Private Member: camelCasedName()
This helps people understand the structure, use, accessibility and location of members when reading unfamiliar code very rapidly.
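A hypothetical class following that scheme:
class InventoryItem
{
    private int m_quantity;                  // instance field: m_ prefix
    private static int s_itemCount;          // static field: s_ prefix

    public void Restock(int amount)          // public member: Pascal-cased
    {
        applyDelta(amount);
    }

    private void applyDelta(int amount)      // private member: camel-cased
    {
        m_quantity += amount;
    }
}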
Normally when I have a private field inside a class or a struct, I use camelCasing, so it would be obvious that it's indeed private when you see the name of it, but in some of my colleagues' C# code, I see that they use m_ mostly or sometimes _, like there is some sort of convention.
Don't .NET naming conventions prevent you from using underscores for member names?
And when you mention the MS naming conventions or what not, they tell you theirs is the best way, but don't explain the reasoning behind it.
Also when I am the owner of some code, where I clearly use camelCasing for private members, when they have to make a minor modification to the code, they stick in their conventions instead of following whatever conventions are there.
Is this a controversy?
The .NET framework guidelines allow for a _ or m_ prefix on private field names because they provide no guidance on private fields. If you look at the BCL in Reflector, you'll notice a prefix is the most prevalent pattern.
The Reference page for naming fields is located here. Notice the guidelines only specify usage for public and protected fields. Private fields are simply not covered.
Technically, underscores are a violation of .NET conventions (or at least used to be -- see the comment thread), but Microsoft programmers themselves often use underscores, and many examples in the documentation use them. I think it's very helpful to be able to see at a glance which variables are member variables (fields) and which are local. The underscore really helps with this. It also nicely separates private member variables from local variables in IntelliSense.
Please see this very useful page for .NET naming conventions:
http://10rem.net/articles/net-naming-conventions-and-programming-standards---best-practices
And here's a page with Microsoft's official recommendations:
https://msdn.microsoft.com/en-us/library/ms229045%28v=vs.110%29.aspx
I typically prefix private member variables with an underscore.
It just makes them easier to spot when you're trying to read the code and it's allowed by the Microsoft Guidelines:
public class Something
{
    private string _someString = "";

    public string SomeString
    {
        get
        {
            return _someString;
        }
        set
        {
            // Some validation
            _someString = value;
        }
    }
}
Like others have said, the more important thing is to be consistent. If you're on a team that has a coding standard that does things the m_ way, don't try to be a rebel and do yours another. It will just make things more difficult for everybody else.
Well, Microsoft doesn't take a side on either of the two options.
If on Visual Studio you encapsulate a field using "refactor->Encapsulate Field..." for
private string _myVar;
and
private string myVar;
both of them generate a property like this:
public string MyVar
{
    get { return myVar; }
    set { myVar = value; }
}
So for Microsoft it's the same :-) It's only a question of reaching an agreement with the development team so everyone uses the same approach.
Normally I never use private fields except in very specific situations; I encapsulate private fields with protected properties. Better for inheritance and clearer, IMHO.
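A sketch of that pattern, with a hypothetical class:
public class BaseWidget
{
    private int _size;         // subclasses never touch this directly

    protected int Size         // inheritance-friendly access point
    {
        get { return _size; }
        set { _size = value; }
    }
}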
Even in the BCL you see a lot of inconsistency with naming conventions, some classes have "_", some "m_" and some just the pascal case version of the property.
Underscores are good because you prevent accidental stack overflows (a setter assigning to its own property instead of the backing field), although more recent versions of Visual Studio warn you about this anyway. Underscored fields also appear first in IntelliSense, avoiding the need to riddle your code with this.someProperty or to search through the entire list.
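A sketch of the stack-overflow pitfall referred to above (hypothetical property):
public class Account
{
    private decimal _balance;

    public decimal Balance
    {
        get { return _balance; }
        // Without the underscore convention, a slip like "Balance = value;"
        // here would invoke this setter recursively until the stack overflows.
        set { _balance = value; }
    }
}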
As long as the team agrees on one standard it doesn't make a whole lot of difference, but having used underscores for 5+ years I personally wouldn't want to return back to the alternatives.
If you own the codebase and maintain it, I would insist they use your standards. If they don't, simply refactor it, combined with a polite email explaining why you've done it.
There are two MSDN articles (here and here) about design guidelines that also contain naming conventions. Too bad they are restricted to "publicly visible" things: they don't offer guidelines for naming non-public things, and as far as I know Microsoft doesn't provide official naming guidelines for non-publics.
StyleCop (a Microsoft tool) is against using underscores in names. Two reasons I have heard from developers why they prefer to use the underscore:
it clearly marks non-public members (type _ and IntelliSense will show you all the non-publics).
it prevents conflicts between local variables (which are often also written in camelCase), method parameters and non-public fields.
IMO both are good reasons to use the underscore; however, I don't like how it makes my code look, so I don't use it either. I prefer to use only camelCase when possible, and I add a this. in case of conflicts with local variables or method parameters.
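For example (hypothetical constructor):
public class Customer
{
    private string name;          // plain camelCase, no underscore

    public Customer(string name)
    {
        this.name = name;         // "this." resolves the clash with the parameter
    }
}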
We simply try to keep the coding style consistent within the team and project.
Please see the last paragraph of the MS Field Usage Guidelines.
Do not apply a prefix to field names or static field names. Specifically, do not apply a prefix to a field name to distinguish between static and nonstatic fields. For example, applying a g_ or s_ prefix is incorrect.
No conventions prevent you from using valid identifier names. The important thing is to be consistent. I use "_" for all private variables, although the "right way" (for example, ReSharper) seems to want you to declare them starting with a lowercase letter and differentiate between parameters and members through the use of "this.".
I don't really believe there is any BEST way to case variables and methods. What matters is that you and your team are consistent. The .NET naming conventions are great the way Microsoft specifies them, but some people prefer other conventions...
As a personal aside, I tend to prefix private variables and methods with "_" followed by camel casing, protected variables and methods in camel casing, and public variables and methods in Pascal casing, but that is just me.
Yes, the naming convention enforced by StyleCop (which enforces the MS coding rules) is 'no underscores, camel case' for private instance fields.
It is of note that constant/static readonly fields have the 'Pascal case' naming convention (must begin with uppercase but not be screaming caps).
The other naming conventions are holdovers from C++ style, which was the initial style used to code C# in since that's where the C# team came from.
Important Note: Whether or not you use this coding style is entirely up to the development team. It's far more important that everyone on the team use the same style than that any particular style be used.
OTOH, MS chose this style after much deliberation, so I use it as a tiebreaker. If there's no particular reason to go one way or another with a coding style, I go the way StyleCop goes.
That is the question. So how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least, and not just for aesthetic reasons: I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any C# projects or libraries of note that drop this convention?
Are there any C# projects that mix conventions, as unfortunately Apache Wicket does?
The Java class libraries have existed without this for many years, and I don't feel I have ever struggled to read code because of it. Also, shouldn't the interface be the most primitive description? I mean, IList<T> is the interface for List<T> in C#; is it not better to have List<T> alongside LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T>, where the class names describe the implementation? I think I get more information there than I do from List<T> in C#.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either implementation or extension, I recommend following the standard and putting an I before an interface's name.
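To illustrate the ambiguity with hypothetical types:
interface IShape { }                // the 'I' is the only visual cue

class Shape { }

class Circle : Shape, IShape { }    // ':' covers both 'extends' and 'implements'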
It's bad practice in my opinion too. The reasons why, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter if you call a method with an IParam or a Param.
Sophisticated tools have their own ways of marking interfaces with an icon.
If your eye is searching an IDE for a name, the most significant part is the beginning of the string. Maybe your classes get sorted alphabetically, and now you have a block of similar names, all starting with I... together. They look similar, while it would be an advantage to distinguish them easily. It's ergonomically wrong to use an I-prefix.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo which implements Bar? SSomething as a Singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough, but if your code is ever read by others, consider that they will be expecting interfaces to have an I prefix: that is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: that it is a connector into which you can plug different concrete implementations as dependencies.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether that gets a DatabaseDataStore or a FileSystemDataStore or whatever plugged into it (or a MockDataStore for testing).
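A sketch of that idea, with hypothetical types:
// The consumer depends on the abstraction; configuration decides what gets plugged in.
interface DataStore
{
    void Save(string key, string value);
}

class DatabaseDataStore : DataStore
{
    public void Save(string key, string value) { /* write to the database */ }
}

class MockDataStore : DataStore
{
    public void Save(string key, string value) { /* record the call for test assertions */ }
}

class ReportService
{
    private readonly DataStore _store;
    public ReportService(DataStore store) { _store = store; }
}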
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se, it's best practice. It makes things a lot more readable all in all. Also, think about it. IMyClass is the interface to MyClass. It just makes sense, and stops unnecessary confusion. Also remember the : syntax vs. implements/extends. Lastly, you can bypass all of this by simply checking the tooltips/go to in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I prefix is not that those using it can see it's an interface, but that those implementing or deriving from existing classes and interfaces can more easily see whether it's an interface or a base class.
Another advantage is that it prevents stupid things like (If my Java memory serves me correctly):
List foo = new List(); // Why does it fail?
The third advantage is refactoring. If you move through your objects and read the code you can see where you forgot to code-by-interface. "A method accepting something with a type not prefixed with I? Fix it!".
I used it even in Java and found it quite useful, but it always depends on the guidelines of your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: If my IDE could give me some hint in the text (e.g different colour, underline, italic...) that the type was an interface would I still bother?
Sounds like you are naming the types like that just so you can tell from the name something about parts of the definition other than the name.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of how other people do it, in this case. Since 99% of the common world will be prefacing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or on-board a new developer, you should be able to focus on the code and not have to explain/defend choices that you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion, the biggest reason "I" is often prefixed is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it extremely clear that the class you are looking at is in fact an interface. The package browser in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from that of a class.
An example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underlining).
The other big reason is in the Java world that people prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world where you need interfaces as your collaborators if you plan on using any AOP magic or some other Dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I"; I'm only giving some reasons why people do it, chiefly consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" can be followed by a verb, stating what classes that implement the interface do, forming a nice human-language name, e.g. ISaveXmlData.
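For example, a hypothetical interface following that pattern:
using System.Xml.Linq;

// Reads as "I save XML data" when a class declares: class Exporter : ISaveXmlData
interface ISaveXmlData
{
    void SaveXmlData(XElement data);
}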
The key thing is consistency - as long you stick to having I prefixed to all interfaces or none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it for a naming convention for each interface. I find it more intuitive to quickly determine if a class implements an interface or another class simply by looking for the I prefix in the name of the base class.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Libraries you have IList<>, IQueryable, IDisposable.
If you don't prepend it with an 'I', how would people know it's an interface other than by going to the definition?
Anyways, just my 2 cents
You can choose all the names in your program however you like, but it's a good idea to follow the naming conventions; otherwise you may be the only one able to read the program.
Using interfaces is good not only when you design your own classes and interfaces. In some cases you set different accents in your program by using interfaces. For example, you can write code like
SqlDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
or like
IDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
// Note: IDataReader does not declare HasRows; an empty result simply
// shows up as Read() returning false on the first call.
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
The latter looks almost the same, but if you use IDataReader instead of SqlDataReader you can more easily move the parts that work with dr into a method that works not only with the SqlDataReader class but with OleDbDataReader, OracleDataReader, OdbcDataReader, etc. On the other hand, your program keeps working exactly as fast as before.
Updated (based on questions from comments):
The advantage is, as I wrote before, that you can separate out the parts of your code which work with IDataReader. For example, you can define delegate T ReadRowFromDataReader<T> (IDataReader dr, ...) and use it inside the while (dr.Read ()) block. That way you write code which is more general than code working with SqlDataReader directly: inside the while (dr.Read ()) block you call rowReader (dr, ...), and your different implementations of row-reading code can be placed in methods matching the ReadRowFromDataReader<T> signature and passed as the actual parameter, as sketched below.
This way you can write more database-independent code. At first, usage of a generic delegate probably looks a little complex, but the code will be really easy to read. I want to stress one more time that you really receive the advantages of using interfaces in this case only if you separate some parts of the code into another method. If you don't separate the code, the only advantage you receive is that the code parts are written more independently, so you can copy and paste them more easily into another program.
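A minimal sketch of that separation, using the delegate signature from above (the ReadAll helper is hypothetical):
using System.Collections.Generic;
using System.Data;

public delegate T ReadRowFromDataReader<T>(IDataReader dr);

public static class DataHelper
{
    // Reads every row using a caller-supplied row reader; works with any
    // IDataReader implementation (SqlDataReader, OleDbDataReader, ...).
    public static List<T> ReadAll<T>(IDbCommand cmd, ReadRowFromDataReader<T> rowReader)
    {
        var results = new List<T>();
        using (IDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            while (dr.Read())
                results.Add(rowReader(dr));
        }
        return results;
    }
}

// Usage: List<string> names = DataHelper.ReadAll(cmd, dr => dr.GetString(0));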
Using names that start with 'I' makes it easier to understand that we are now working with something more general than one class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I cannot believe that so many people hate the 'I' prefix. I love the 'I' prefix.
Here is why:
Are abstract and interface different? Yes
Do I care about the difference as a developer? Yes, but not always.
When do I need to care?
Design discussions (when I draw on the board, the prefix 'I' clearly tells everyone it's an interface)
Reading existing code (when I see the prefix 'I', I clearly know it's an interface; there are exceptions for words starting with 'I', but very few)
Do I always need 'I'? No. But I want consistency, so YES.
With just one prefix 'I', it avoids so much communication overhead.
I think the real question in the case of .NET should be: why do we ever need to distinguish between a class and an interface in client code?
And for C# and .NET there is a shameful answer: because someone invented language support for explicit interface implementations. A thing that is, in my opinion, a complete mess, because it allows breaking the Single Responsibility Principle in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing for the class as IList.Count() does; normally you can't be so sure. As far as I'm concerned, explicit interface implementation is a hidden form of method overloading, done in the most wrong way ever. Let's assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language's point of view these are two separate methods that clearly advertise their responsibility: one can work with the more abstract IList, and the other with the specific implementation List. And that is clearly visible here! No one would expect both methods to return the same value, because the more specific method may consult extra properties, etc. In C#'s explicit interface implementation form, however, this is not obvious, because the caller is unaware which form is actually used: the compiler knows, but I as a programmer might not.
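A sketch of the kind of divergence being described, with hypothetical types:
interface IContainer
{
    int Count();
}

class Container : IContainer
{
    public int Count() { return 42; }      // chosen when the static type is Container

    int IContainer.Count() { return 0; }   // chosen when the static type is IContainer
}

// var c = new Container();
// c.Count();                 // 42
// ((IContainer)c).Count();   // 0 -- the caller may never notice which one ran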
Unless, that is, I know whether I am calling a class method or an interface method! I think this is the source of this somewhat silly convention for interfaces: if you use types named without the "I" prefix, especially in method arguments and return types, you may be unaware whether you are calling a class instance method or an interface method.
As a good programmer following SOLID principles you should work with interfaces all the time, as long as it is possible, especially if you are aware of explicit implementations.
This, in my opinion, is a hidden purpose of naming C# interfaces this way: to cover for the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever add a method-overloading feature that is effectively hidden from the calling site without expecting a naming convention to naturally appear in order to manage it?
Okay, this may be a dumb question, but I've not been able to find any information on it.
Are String.Empty and string.Empty the same? I always find myself gravitating towards the upper-case version (String.Empty), because I prefer the color and look of it in my IDE to the lower-case version (string.Empty)...
Is there a "correct" choice between them, or is it entirely down to personal preference? It was my assumption that they're both the same, but to be honest, I never gave it any thought until, for whatever reason, I wondered today: "If they both exist, they must both exist for a reason".
Is there a reason that anyone knows of? If so, what is it? Can anyone enlighten me?
P.S. The "exact duplicates" only answer half of the question - "which is right?", not the "why do they both exist?"
Exact Duplicate: What is the difference between String and string in C#?
Exact Duplicate: String vs string in C#
In C#, lower-case type names are aliases for the System.xxx type names, e.g. string equals System.String and int equals System.Int32.
It's best practice to use these language aliases for the type names instead of their framework equivalents, for the sake of consistency. So you're doing it wrong. ;-)
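A quick check that the alias and the framework name are the same type:
using System;

class AliasDemo
{
    static void Main()
    {
        string a = "hello";
        String b = a;                                         // no conversion needed: same type
        Console.WriteLine(typeof(string) == typeof(String));  // True
    }
}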
As for a reason why they both exist, the .NET types exist because they are defined in a language-independent standard for the .NET libraries called CTS (common type system). Why C# defines these aliases is beyond me (VB does something quite similar). I guess the two reasons are
Habit. Get all these C and Java programmers to use C# by providing the same type names for some fundamental types.
Laziness: You don't have to import the System namespace to use them.
EDIT: Since many people seem to prefer the other notation, let me point out that it is by no means unreasonable. A good case can actually be made for the usage of the CTS type names rather than C#'s keywords, and some superficially good arguments are offered in the other answers. From a purity/style point of view I would probably concur.
However, consider if this is worth breaking a well-established convention that helps to unify code across projects.
It is conceptually similar to something like this:
using int = System.Int32;
string is mapped to the String class AFAIK, so they're the same.
The same is true for, for example int and Int32.
Personally, I prefer to use String as both String and Object are references whereas all the other base types are value types. In my mind, that's the clearest separation.
They are both the same.
Personally I prefer using the lowercase string, the "blue one", using the C# keyword instead of the .NET class name for the same reason I'm using int instead of Int32.
Also, the lowercased one doesn't require inclusion of the System namespace...
In the C# language, when you refer to an array element you can write:
myclass.my_array["element_name"] = new Point(1,1);
I'm thinking about referring to the element named element_name by using a dot in place of brackets:
myclass.my_array.element_name = new Point(1,1);
Do you know any language where a similar syntax to the example above exists?
What do you think about this way of referring to an array element? Is it good, or is it as bad as my English writing skills?
Kind regards
JavaScript does exactly what you describe. In JavaScript, every object is just a map of property names to values. The bracket operator just returns a property by name (or by value in the case of an integer index). You can refer to named properties by just using a dot, though you can't do that with integer indices. This page describes the bracket operator and dot notation in detail.
You could almost certainly do this in any dynamic language, by handling property/variable access as an indexer if the specified property/variable didn't actually exist. I suspect that many dynamic languages already provide this functionality in some areas.
It's possible that in C# 4 you'll be able to make your objects behave like this if you really want to.
However, I would agree with the implicit judgement in Mohit's question - I see no reason to consider this more generally readable than using the more common indexer syntax, and it will confuse people who are used to indexers looking like indexers.
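For reference, a minimal sketch of that idea using DynamicObject from .NET 4 (the PropertyBag type here is hypothetical):
using System.Collections.Generic;
using System.Dynamic;

// Members can be read and written by name, backed by a dictionary.
class PropertyBag : DynamicObject
{
    private readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }
}

// dynamic bag = new PropertyBag();
// bag.element_name = new Point(1, 1);   // equivalent to bag["element_name"] = ...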
One area where I would quite possibly do something like this would be for an XML API, where a property would indicate "take the first element of the given name":
XElement author = doc.Root.Posts.Author;
That's quite neat - for the specific cases where it's what you want. Just don't try to apply it too generally...
REXX has the concept of stems, where you can say x.1, x.2 x.bob and these refer to array elements x[1], x[2] and x['bob'] respectively.
In addition LotusScript (in Notes) allows you to process the Notes databases in this fashion, where db.get("fieldname") is the same as db.fieldname.
I've used the REXX one a bit (as there's no choice) but when I've coded for Notes databases, I've preferred the get/put way of doing things.
Lua tables have a.x as syntactic sugar for a["x"]. Lua tables are associative arrays that could be used to represent arrays, hashes, and records of other languages. The sugar helps making code more readable by illustrating the intention (Record? Array? Hashtable?), though it makes no difference to the language.
What would be the advantage of such a syntax for you?
If you have fixed names, why not create a class with properties?
Maybe you are looking for a class or struct, if you want to use the element name as a field/property.