Constant abuse? - c#

I have run across a bunch of code in a few C# projects that have the following constants:
const int ZERO_RECORDS = 0;
const int FIRST_ROW = 0;
const int DEFAULT_INDEX = 0;
const int STRINGS_ARE_EQUAL = 0;
Has anyone ever seen anything like this? Is there any way to rationalize using constants to represent language constructs? For instance, C#'s first index in an array is at position 0. I would think that if a developer needs to depend on a constant to tell them that the language is 0-based, there is a bigger issue at hand.
The most common usage of these constants is in handling Data Tables or within 'for' loops.
Am I out of place thinking these are a code smell? I feel that these aren't a whole lot better than:
const int ZERO = 0;
const string A = "A";

On the question of whether these are any better than ZERO = 0, compare the following:
if(str1.CompareTo(str2) == STRINGS_ARE_EQUAL) ...
with
if(str1.CompareTo(str2) == ZERO) ...
if(str1.CompareTo(str2) == 0) ...
Which one makes more immediate sense?

Abuse, IMHO. "Zero" is just one of the basics.
Although STRINGS_ARE_EQUAL could be handy, why not use .Equals?
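For illustration, a minimal sketch of the .Equals alternative (str1 and str2 assumed to be non-null strings, as in the examples above):
// Instead of comparing CompareTo's result to a named zero:
if (str1.CompareTo(str2) == STRINGS_ARE_EQUAL) ...
// equality can be stated directly:
if (str1.Equals(str2)) ...
// or null-safely, with an explicit comparison rule:
if (string.Equals(str1, str2, StringComparison.Ordinal)) ...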
Accepted limited use of magic numbers?

That is definitely a code smell.
The intent may have been to 'add readability' to the code; however, things like that actually decrease the readability of the code, in my opinion.

Some people consider any raw number within a program to be a 'magic number'. I have seen coding standards that basically said that you couldn't just write an integer into a program, it had to be a const int.

To the question of whether these are a code smell, and no better than:
const int ZERO = 0;
const int A = 'A';
Probably a bit of smell, but definitely better than ZERO=0 and A='A'. In the first case they're defining logical constants, i.e. some abstract idea (string equality) with a concrete value implementation.
In your example, you're defining literal constants -- the variables represent the values themselves. If this is the case, I would think that an enumeration is preferred since they rarely are singular values.

That is definitely bad coding.
I say constants should be used only where needed, where things could possibly change later. For instance, I have a lot of "configuration" options like SESSION_TIMEOUT defined where the value should stay the same, but maybe it could be tweaked later on down the road. I do not think ZERO can ever be tweaked down the road.
Also, for magic numbers zero should not be included.
I think I'm a bit strange in that belief, though, because I would say something like this is going too far:
//input is FIELD_xxx where xxx is a number
input.Substring(LENGTH_OF_FIELD_NAME); //cut off the FIELD_ prefix to give us the number

You should have a look at some of the things at thedailywtf
One2Pt20462262185th
and
Enterprise SQL

I think sometimes people blindly follow coding standards which say "Don't use hardcoded values; define them as constants so that it's easier to manage the code when it needs to be updated" - which is fair enough for stuff like:
const int MAX_NUMBER_OF_ELEMENTS_I_WILL_ALLOW = 100;
But does not make sense for:
if(str1.CompareTo(str2) == STRINGS_ARE_EQUAL)
Because every time I see this code I need to search for what STRINGS_ARE_EQUAL is defined as and then check the docs to see whether that is correct.
Instead if I see:
if(str1.CompareTo(str2) == 0)
I skip step 1 (searching for what STRINGS_ARE... is defined as) and can check the specs for what the value 0 means.
You would correctly feel like replacing this with Equals(), and using CompareTo() in cases where you are interested in more than just one outcome, e.g.:
switch (bla.CompareTo(bla1))
{
case IS_EQUAL:
case IS_SMALLER:
case IS_BIGGER:
default:
break;
}
using if/else statements if appropriate (no idea what CompareTo() returns ...)
I would still check if you defined the values correctly according to specs.
This is of course different if the specs defines something like ComparisonClass::StringsAreEqual value or something like that (I've just made that one up) then you would not use 0 but the appropriate variable.
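In fact, CompareTo() is documented to return a negative value, zero, or a positive value - not necessarily -1 or 1 - so branching on the sign is the safer shape. A minimal sketch, reusing bla and bla1 from above:
int result = bla.CompareTo(bla1);
if (result < 0)
{
// bla sorts before bla1
}
else if (result > 0)
{
// bla sorts after bla1
}
else
{
// bla and bla1 are equal
}
This is also why a switch on exact constants like IS_SMALLER = -1 could silently miss other negative return values.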
So it depends. When you specifically need to access the first element in an array, arr[0] is better than arr[FIRST_ELEMENT], because I will still go and check what you have defined FIRST_ELEMENT as. I will not trust you, and it might be something different from 0 - for example, your 0 element is a dud and the real first element is stored at 1 - who knows.

I'd go for code smell. If these kinds of constants are necessary, put them in an enum:
enum StringEquality
{
Equal,
NotEqual
}
(However I suspect STRINGS_ARE_EQUAL is what gets returned by string.Compare, so hacking it to return an enum might be even more verbose.)
Edit: Also SHOUTING_CASE isn't a particularly .NET-style naming convention.
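If you did want the enum, a wrapper might look like this sketch (CompareStrings is a made-up helper, not a framework method):
static StringEquality CompareStrings(string a, string b)
{
return string.Compare(a, b) == 0 ? StringEquality.Equal : StringEquality.NotEqual;
}
// usage:
if (CompareStrings(str1, str2) == StringEquality.Equal) ...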

I don't know if I would call them smells, but they do seem redundant. Though DEFAULT_INDEX could actually be useful.
The point is to avoid magic numbers, and zeros aren't really magical.

Is this code something in your office or something you downloaded?
If it's in the office, I think it's a problem with management if people are randomly placing constants around. Globally, there shouldn't be any constants unless everyone has a clear idea or agreement of what those constants are used for.
In C# ideally you'd want to create a class that holds constants that are used globally by every other class. For example,
class MathConstants
{
public const int ZERO=0;
}
Then in later classes something like:
....
if(something==MathConstants.ZERO)
...
At least that's how I see it. This way everyone can understand what those constants are without even reading anything else. It would reduce confusion.

There are generally four reasons I can think of for using a constant:
As a substitute for a value that could reasonably change in the future (e.g., IdColumnNumber = 1).
As a label for a value that may not be easy to understand or meaningful on its own (e.g., FirstAsciiLetter = 65).
As a shorter and less error-prone way of typing a lengthy or hard-to-type value (e.g., LongSongTitle = "Supercalifragilisticexpialidocious").
As a memory aid for a value that is hard to remember (e.g., PI = 3.14159265).
For your particular examples, here's how I'd judge each example:
const int ZERO_RECORDS = 0;
// almost definitely a code smell
const int FIRST_ROW = 0;
// first row could be 1 or 0, so this potentially fits reason #2,
// however, doesn't make much sense for standard .NET collections
// because they are always zero-based
const int DEFAULT_INDEX = 0;
// this fits reason #2, possibly #1
const int STRINGS_ARE_EQUAL = 0;
// this very nicely fits reason #2, possibly #4
// (at least for anyone not intimately familiar with string.CompareTo())
So, I would say that, no, these are not worse than Zero = 0 or A = "A".

If the zero indicates something other than zero (in this case STRINGS_ARE_EQUAL) then that IS Magical. Creating a constant for it is both acceptable and makes the code more readable.
Creating a constant called ZERO is pointless and a waste of finger energy!

Smells a bit, but I could see cases where this would make sense, especially if you have programmers switching from language to language all the time.
For instance, MATLAB is one-indexed, so I could imagine someone getting fed up with making off-by-one mistakes whenever they switch languages, and defining DEFAULT_INDEX in both C++ and MATLAB programs to abstract the difference. Not necessarily elegant, but if that's what it takes...

Right you are to question this smell, young code warrior. However, these named constants derive from coding practices much older than the dawn of Visual Studio. They probably are redundant, but you could do worse than to understand the origin of the convention. Think NASA computers, way back when...

You might see something like this in a cross-platform situation where you would use the file with the set of constants appropriate to the platform. But probably not with these actual examples. This looks like a COBOL coder trying to make his C# look more like the English language (no offence intended to COBOL coders).

It's one thing to use constants to represent abstract values, but quite another to represent constructs in your own language.
const int FIRST_ROW = 0 doesn't make sense.
const int MINIMUM_WIDGET_COUNT = 0 makes more sense.
The presumption that you should follow a coding standard makes sense. (That is, coding standards are presumptively correct within an organization.) Slavishly following it when the presumption isn't met doesn't make sense.
So I agree with the earlier posters that some of the smelly constants probably resulted from following a coding standard ("no magic numbers") to the letter without exception. That's the problem here.


Local variables or direct statements?

I am currently studying C# and I really want to get a good coding style from the beginning, so I would like to hear opinions from you professionals on this matter.
Should you always (or mostly) use local variables for conditions/calculations (example 2), or is it just as good or better to use statements directly (example 1)?
Example 1.
if (double.TryParse(stringToParse, out dblValue)) ...
Example 2.
bool parseSuccess = double.TryParse(stringToParse, out dblValue);
if (parseSuccess) ...
It would be interesting to hear your thoughts and reasoning at this example.
You should use the more verbose style if putting it all in one line would make it too long or complicated.
You should also use a separate variable if the variable's name would make it easier to understand the code:
bool mustWait = someCommand.ConflictsWith(otherCommand);
if (mustWait) {
...
}
If a simple bool doesn't express the state well, you might also consider using an enum for additional readability.
I see a lot of example 1 in production code. As long as the expression is simple, and it's easy to understand the logic of what's happening, I don't think you'll find many people who think it is bad style.
Though you will probably find a lot of people with different preferences. :)
Here's the rule I use: keep it on one line if you can quickly glance over it and know exactly what it's saying. If it's too complicated to read as quickly as you could read any other text, give it a local variable. In any case, though, you don't want a really long if statement header. So if it's too long, split it up.
I suggest you use a local variable like here:
bool parseSuccess = double.TryParse(stringToParse, out dblValue);
if (parseSuccess) ...
For two reasons:
1. You can reuse the variable later without parsing the double another time.
2. It makes the code more readable.
Consider this:
if(double.TryParse(string1, out Value1) && double.TryParse(string2, out Value2) && double.TryParse(string3, out Value3) && double.TryParse(string4, out Value4))
{
//some stuff
}
It's too long and it makes the code hard to read.
So sometimes local variables make the code a lot more readable.
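For comparison, a sketch of the same check with named locals (string1..string4 and Value1..Value4 assumed as above):
bool parsed1 = double.TryParse(string1, out Value1);
bool parsed2 = double.TryParse(string2, out Value2);
bool parsed3 = double.TryParse(string3, out Value3);
bool parsed4 = double.TryParse(string4, out Value4);
if (parsed1 && parsed2 && parsed3 && parsed4)
{
//some stuff
}
Note that this version always attempts all four parses, whereas the && chain short-circuits after the first failure - usually harmless for TryParse, but it is a behavioural difference.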
The clarity of the source code is an important parameter, especially in application maintenance, but so is performance.
Insignificant as it may seem, sometimes by using simple syntax "tricks" of programming languages we get very good results.
If I think I'll use the result later in the code somehow, I use a variable; otherwise I give priority to the direct statement.
There's no right option. Both are perfectly acceptable.
Most people choose the first option if you don't have a lot of conditions to concatenate, because it results in fewer lines of code.
As you said you are studying C#, my vote will be this style for you:
bool parseSuccess = double.TryParse(stringToParse, out dblValue);
if (parseSuccess) ...
If you are studying you will have a lot to learn, and the above style clearly tells you that TryParse returns a bool, so you won't have to worry about or look up the return type of TryParse.

List of const int instead of enum

I started working on a large C# code base and found the use of a static class with several const int fields. This class is acting exactly like an enum would.
I would like to convert the class to an actual enum, but the powers that be said no. The main reason I would like to convert it is so that I could have the enum as the data type instead of int. This would help a lot with readability.
Is there any reason to not use enums and to use const ints instead?
This is currently how the code is:
public int FieldA { get; set; }
public int FieldB { get; set; }
public static class Ids
{
public const int ItemA = 1;
public const int ItemB = 2;
public const int ItemC = 3;
public const int ItemD = 4;
public const int ItemE = 5;
public const int ItemF = 6;
}
However, I think it should be the following instead:
public Ids FieldA { get; set; }
public Ids FieldB { get; set; }
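where Ids would be converted to an actual enum (same names and values - a sketch of what I have in mind):
public enum Ids
{
ItemA = 1,
ItemB = 2,
ItemC = 3,
ItemD = 4,
ItemE = 5,
ItemF = 6
}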
I think many of the answers here ignore the implications of the semantics of enums.
You should consider using an enum when the entire set of all valid values (Ids) is known in advance, and is small enough to be declared in program code.
You should consider using an int when the set of known values is a subset of all the possible values - and the code only needs to be aware of this subset.
With regards to refactoring - when time and business constraints allow, it's a good idea to clean code up when the new design/implementation has clear benefit over the previous implementation and where the risk is well understood. In situations where the benefit is low or the risk is high (or both) it may be better to take the position of "do no harm" rather than "continuously improve". Only you are in a position to judge which case applies to your situation.
By the way, a case where neither enums nor constant ints are necessarily a good idea is when the IDs represent the identifiers of records in an external store (like a database). It's often risky to hardcode such IDs in the program logic, as these values may actually be different in different environments (e.g. Test, Dev, Production, etc.). In such cases, loading the values at runtime may be a more appropriate solution.
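To illustrate that last point, a sketch of loading such IDs once at runtime instead of hardcoding them (LoadItemIds and the query are made up for the example):
// e.g. populated at startup from: SELECT Name, Id FROM ItemTypes
private static readonly Dictionary<string, int> ItemIds = LoadItemIds();
// usage: look the value up by a stable name rather than baking in the number
int itemAId = ItemIds["ItemA"];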
Your suggested solution looks elegant, but won't work as it stands, as you can't use instances of a static type. It's a bit trickier than that to emulate an enum.
There are a few possible reasons for choosing enum or const-int for the implementation, though I can't think of many strong ones for the actual example you've posted - on the face of it, it seems an ideal candidate for an enum.
A few ideas that spring to mind are:
Enums
They provide type-safety. You can't pass any old number where an enum value is required.
Values can be autogenerated
You can use reflection to easily convert between the 'values' and 'names'
You can easily enumerate the values in an enum in a loop, and then if you add new enum members the loop will automatically take them into account.
You can insert new enum values without worrying about clashes occurring if you accidentally repeat a value.
const-ints
If you don't understand how to use enums (e.g. not knowing how to change the underlying data type of an enum, or how to set explicit values for enum values, or how to assign the same value to multiple constants) you might mistakenly believe you're achieving something you can't use an enum for, by using a const.
If you're used to other languages you may just naturally approach the problem with consts, not realising that a better solution exists.
You can derive from classes to extend them, but annoyingly you can't derive a new enum from an existing one (which would be a really useful feature). Potentially you could therefore use a class (but not the one in your example!) to achieve an "extendable enum".
You can pass ints around easily. Using an enum may require you to be constantly casting (e.g.) data you receive from a database to and from the enumerated type. What you lose in type-safety you gain in convenience. At least until you pass the wrong number somewhere... :-)
If you use readonly rather than const, the values are stored in actual memory locations that are read when needed. This allows you to publish constants to another assembly that are read and used at runtime, rather than built into the other assembly, which means that you don't have to recompile the dependent assembly when you change any of the constants in your own assembly. This is an important consideration if you want to be able to patch a large application by just releasing updates for one or two assemblies.
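A sketch of that difference (the names are made up):
// In the publishing assembly:
public static class Limits
{
public const int MaxItems = 10; // copied into consuming assemblies at compile time
public static readonly int MaxRetries = 3; // read from this assembly at runtime
}
// If Limits ships an update changing both values, a consumer compiled against the
// old version still sees MaxItems == 10 until it is recompiled, but picks up the
// new MaxRetries immediately.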
I guess it is a way of making it clearer that the enum values must stay unchanged. With an enum another programmer will just drop in a new value without thinking, but a list of consts makes you stop and think "why is it like this? How do I add a new value safely?". But I'd achieve this by putting explicit values on the enums and adding a clear comment, rather than resorting to consts.
Why should you leave the implementation alone?
The code may well have been written by an idiot who has no good reason for what he did. But changing his code and showing him he's an idiot isn't a smart or helpful move.
There may be a good reason it's like that, and you will break something if you change it (e.g. it may need to be a class due to being accessed through reflection, being exposed through external interfaces, or to stop people easily serializing the values because they'll be broken by the obfuscation system you're using). No end of unnecessary bugs are introduced into systems by people who don't fully understand how something works, especially if they don't know how to test their changes to ensure they haven't broken anything.
The class may be autogenerated by an external tool, so it is the tool you need to fix, not the source code.
There may be a plan to do something more with that class in future (?!)
Even if it's safe to change, you will have to re-test everything that is affected by the change. If the code works as it stands, is the gain worth the pain? When working on legacy systems we will often see existing code of poor quality or just done a way we don't personally like, and we have to accept that it is not cost effective to "fix" it, no matter how much it niggles. Of course, you may also find yourself biting back an "I told you so!" when the const-based implementation fails due to lacking type-safety. But aside from type-safety, the implementation is ultimately no less efficient or effective than an enum.
If it ain't broke, don't fix it.
I don't know the design of the system you're working on, but I suspect that the fields are integers that just happen to have a number of predefined values. That's to say they could, in some future state, contain more than those predefined values. While an enum allows for that scenario (via casting), it implies that only the values the enumeration contains are valid.
Overall, the change is a semantic one but it is unnecessary. Unnecessary changes like this are often a source of bugs, additional test overhead and other headaches with only mild benefits. I say add a comment expressing that this could be an enum and leave it as it is.
Yes, it does help with readability, and no I cannot think of any reason against it.
Using const int is a very common "old school" programming practice carried over from C++.
The reason I can see is that if you want to stay loosely coupled with another system that uses the same constants, you avoid being tightly coupled by sharing the same enum type.
Like in RPC calls or something...

Why is String.Concat not optimized to StringBuilder.Append?

I found concatenations of constant string expressions are optimized by the compiler into one string.
Now with string concatenation of strings only known at run-time, why does the compiler not optimize string concatenation in loops and concatenations of say more than 10 strings to use StringBuilder.Append instead? I mean, it's possible, right? Instantiate a StringBuilder and take each concatenation and turn it into an Append() call.
Is there any reason why this should or could not be optimized? What am I missing?
The definite answer will have to come from the compiler design team. But let me take a stab here...
If your question is, why the compiler doesn't turn this:
string s = "";
for( int i = 0; i < 100; i ++ )
s = string.Concat( s, i.ToString() );
into this:
StringBuilder sb = new StringBuilder();
for( int i = 0; i < 100; i++ )
sb.Append( i.ToString() );
string s = sb.ToString();
The most likely answer is that this is not an optimization. This is a rewrite of the code that introduces new constructs based on knowledge and intent that the developer has - not the compiler.
This type of change would require the compiler to have more knowledge of the BCL than is appropriate. What if tomorrow, some more optimal string assembly service becomes available? Should the compiler use that?
What if your loop conditions were more complicated, should the compiler attempt to perform some static analysis to decide whether the result of such a rewrite would still be functionally equivalent? In many ways, this would be like solving the halting problem.
Finally, I'm not sure that in all cases this would result in faster performing code. There is a cost to instantiating a StringBuilder and resizing its internal buffer as text is appended. In fact, the cost of appending is strongly tied to the size of the string being concatenated, how many there are, what memory pressure looks like. These are things that the compiler cannot predict in advance.
It's your job as a developer to write well-performing code. The compiler can only help by making certain safe, invariant-preserving optimizations. Not rewriting your code for you.
LBuskin's answer is excellent; I have just a couple of things to add.
First, JScript.NET does do this optimization. JScript is frequently used by less-experienced programmers for tasks that involve construction of large strings in loops, like building up JSON objects, HTML data, and so on.
Since those programmers might not be aware of the n-squared cost of naive string allocation, might not be aware of the existence of string builders, and frequently write code using this pattern, we felt that it was reasonable to put this optimization into JScript.NET.
C# programmers tend to be more aware of the underlying costs of the code they write and more aware of the existence of off-the-shelf parts like StringBuilder, so they need this optimization less. And more fundamentally, the design philosophy of C# is that it is a "do what I said" language with a minimum of "magic"; JScript is a "do what I mean" language that does its best to figure out how to best serve you, even if that means sometimes guessing wrong. Both philosophies are valid and useful.
Sometimes it does "go the other way". Compare this choice to the choice we make for switches on strings. Switches on strings are actually compiled as a creation of a dictionary containing the strings, rather than as a series of string comparisons. That optimization could be bad; it might be faster to simply do the string comparisons. But here we make a guess that you "meant" the switch to be a table lookup rather than a series of "if" statements -- if you'd meant the series of if statements, you could easily write that yourself.
For a single concatenation of multiple strings (e.g. a + b + c + d + e + f + g + h + i + j) you really want to be using String.Concat IMO. It has the overhead of building an array for each call, but it has the benefit that the method can work out the exact length of the resulting string before it needs to allocate any memory. StringBuilder.Append(a).Append(b)... only gives a single value at a time, so the builder doesn't know how much memory to allocate.
As for doing it in loops - at that point you've added a new local variable, and you've got to add code to write back to the string variable at exactly the right time (calling StringBuilder.ToString()). What happens when you're running in the debugger? Wouldn't it be pretty confusing not to see the value building up, only becoming visible at the end of the loop? Oh, and of course you've got to perform appropriate validation that the value isn't used at any point before the end of the loop...
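For what it's worth, the compiler already does something like this for a single expression: a chain of + on strings is compiled into one Concat call, so the result length can be computed before any memory is allocated. Roughly:
string s = a + b + c + d;
// is compiled as:
// string s = string.Concat(a, b, c, d);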
Two reasons:
You can't programmatically identify places where it would be strictly higher performing.
The "optimization" will slow things down if performed incorrectly.
You can suggest people use the correct calls for their application, but at some point it's the developer's responsibility to get it right.
Edit: Regarding the cutoff, we have another couple of problems:
The only way to know for sure that the cutoff is reached is complicated flow analysis. The number of places where this would be able to find sections that could be converted is extremely small.
Flow analysis is expensive. If you do it at runtime, the whole program runs slower for the rare chance that one piece of poorly written code will be faster. If you do it at compile time, it's not an error according to the language syntax, but you can issue a warning - and that's exactly what FxCop does (a slow but available flow-analysis tool). Just think if FxCop always had to run with the compiler: people would spend hours just waiting to build their code. And if it was at runtime, well, welcome to JVM startup times...
Because it's the compiler's job to generate semantically-correct code. Changing invocations of String.Concat to invocations of StringBuilder.Append would be changing the semantics of the code.
I believe it would be a little too complex for the compiler writers. And when you reference the intermediate strings inside the loop beyond the concatenation itself (for example, passing them to other methods), this optimization would not be possible.
Probably because it's complicated to match such a pattern in the code, and in case the compiler can't do the match for some reason, the performance of the code is suddenly terrible. Optimising code like that would encourage writing code like that, which would even further increase the negative impact in the cases where the compiler can no longer do the optimisation.
For concatenating a known set of strings, StringBuilder is not faster than String.Concat.
A String is an immutable type, hence repeatedly concatenating strings is slower than using StringBuilder.Append.
Edit: To clarify my point a bit more: when you ask why String.Concat is not optimized to StringBuilder.Append, remember that StringBuilder has completely different semantics from the immutable String type. Why should you expect the compiler to optimize one into the other when they are clearly two different things? Furthermore, a StringBuilder is a mutable type that can change its length dynamically; why should a compiler optimize an immutable type into a mutable one? That is the design and semantics ingrained into the ECMA spec for the .NET Framework, regardless of the language.
It's a bit like asking the compiler (and perhaps expecting too much) to take a char and optimize it into an int because the int works on 32 bits instead of 16 and would be deemed faster!

Why aren't unsigned variables used more often? [duplicate]

It seems that unsigned integers would be useful for method parameters and class members that should never be negative, but I don't see many people writing code that way. I tried it myself and found the need to cast from int to uint somewhat annoying...
Anyhow, what are your thoughts on this?
Duplicate
Why is Array Length an Int and not an UInt?
The idea that unsigned will prevent you from problems with methods/members that should never deal with negative values is somewhat flawed:
now you have to check for big values ('overflow') in case of error,
whereas you could have checked for <= 0 with signed;
and use only one signed int in your methods and you are back to square "signed" :)
Use unsigned when dealing with bits. But don't use bits today anyway, unless you have so many of them that they fill some megabytes, or at least your small embedded memory.
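A small sketch of the 'bits' case, where unsigned fits naturally (the flag names are made up):
[Flags]
enum Permissions : uint
{
None = 0,
Read = 1u << 0,
Write = 1u << 1,
Execute = 1u << 2,
All = 0xFFFFFFFFu // needs the full unsigned range
}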
Using the standard ones probably avoids the casting to unsigned versions. In your code, you can probably maintain the distinction OK, but lots of other inputs and 3rd-party libraries won't, and thus the casting will just drive people mad!
I can't remember exactly how C# does its implicit conversions, but in C++, widening conversions are done implicitly. Unsigned is considered wider than signed, and so this leads to unexpected problems:
// Case 1: both values in range - works as expected
int s = 5;
unsigned int u = 25;
// s > u is false
// Case 2: a negative signed value
int s = -1;
unsigned int u = 25;
// s > u is **TRUE**! Error error!
In the second case, s is converted to unsigned for the comparison, so its value becomes something like 4294967295. This has caused me problems before; I often have methods return -1 to say "no match" or something like that, and with the implicit conversion it just fails to do what I think it should.
After a while programmers learnt to almost always use signed variables, except in exceptional cases. Compilers these days also produce warnings for this which is very helpful.
One reason is that public methods or properties referring to unsigned types aren't CLS compliant.
You'll almost always see this attribute applied to .Net assemblies as various wizards include it by default:
[assembly: CLSCompliant(true)]
So basically, if your assembly includes the attribute above and you try to use unsigned types in your public interface with the outside world, you'll get a compiler warning about CLS compliance.
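For example, something like this sketch (the type and members are invented) triggers that warning:
[assembly: CLSCompliant(true)]
public class Inventory
{
public uint Count { get; set; } // warning: not CLS-compliant
internal uint RawCount; // fine - only the public surface is checked
}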
For simplicity. Modern software involves enough casts and conversions. There's a benefit to stick to as few, commonly available data types as possible to reduce complexity and ambiguity about proper interfaces.
There is no real need. Declaring something as unsigned to say numbers should be positive is a poor man's attempt at validation.
In fact it would be better to just have a single number class that represented all numbers.
To validate numbers you should use some other technique, because generally the constraint isn't just positive numbers; it's a set range. It's usually best to use the most unconstrained type for representing numbers, and then if you want to change the rules for allowable values, you change JUST the validation rules, NOT the type.
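To make that concrete, a sketch of keeping the rule in validation rather than in the type (the names and range are made up):
private int _retryCount;
public int RetryCount
{
get { return _retryCount; }
set
{
if (value < 0 || value > 10) // the rule lives here, not in the type
throw new ArgumentOutOfRangeException("value");
_retryCount = value;
}
}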
Unsigned data types are carried over from the old days when memory was at a premium, so now we don't really need them for that purpose. Combine that with the casting and they are a little cumbersome.
It's not advisable to use unsigned integers because if you assign negative values to them, all hell breaks loose. However, if you insist on doing it the right way, try using Spec#: declare it as an integer (where you would have used uint) and attach an invariant to it saying it can never be negative.
You're right in that it probably would be better to use uint for things which should never be negative. In practice, though, there's a few reasons against it:
int is the 'standard' or 'default', it has a lot of inertia behind it.
You have to use annoying/ugly explicit casts everywhere to get back to int, which you're probably going to need to do a lot due to the first point after all.
When you do cast back to int, what happens if your uint value is going to overflow it?
A lot of the time, people use ints for things that are never negative, and then use -1 or -99 or something like that for error codes or uninitialised values. It's a little lazy perhaps, but again this is something that uint is less flexible with.
The biggest reason is that people are usually too lazy, or not thinking closely enough, to know where they're appropriate. Something like a size_t can never be negative, so unsigned is correct.
Casting from signed to unsigned can be fraught with peril though, because of peculiarities in how sign bits are handled by the underlying architecture.
You shouldn't need to manually cast it, I don't think.
Anyway, it's because for most applications, it doesn't matter - the range for either is big enough for most uses.
It is because they are not CLS compliant, which means your code might not run as you expect on other .NET framework implementations or even on other .NET languages (unless supported).
Also it is not interoperable with other systems if you try to pass them, like a web service to be consumed by Java, or a call to the Win32 API.
See that SO post on the reason why as well

What is the correct naming notation for classes, functions, variables etc in c#?

I'm a web developer with no formal computing background behind me. I've been writing code for some years now, but every time I need to create a new class / function / variable, I spend about two minutes just deciding on a name and then how to type it.
For instance, if I write a function to sum up a bunch of numbers. Should I call it
Sum()
GetSum()
getSum()
get_sum()
AddNumbersReturnTotal()
I know there is a right way to do this, and a link to a good definitive source is all I ask :D
Closed as a duplicate of c# Coding standard / Best practices
You're looking for StyleCop.
Classes should be in camel notation with the first letter capitalized (i.e., Pascal case):
public class MyClass
Functions and methods in C# should act in a similar fashion, except for private methods:
public void MyMethod()
private void myPrivateMethod()
Variables I tend to do a little differently:
Member Variables
private int _count;
Local variables
int count;
I agree on the calculate vs get distinction: get() should be used for values that are already calculated or otherwise trivial to retrieve.
Additionally, I would suggest in many cases adding a noun to the name, so that it's obvious exactly what sum you are calculating. Unless, of course, the noun you would add is the class name or type itself.
All of the above.
I believe the official C# guidelines would say call it CalculateSum(), as GetSum() would be used if the sum were an instance variable. But it depends on the coding style used and how any existing code in the project is written.
Luckily enough I don't believe there is a standardized way this is done. I pick the one that I like, which consequently also seems to be the standard all other source code I've seen uses, and run with it.
Sum() if it's public and does the work itself.
GetSum() if it's public and it retrieves the information from somewhere else.
sum() / getSum() as above, but for internal/private methods.
(Hmm... That's a bit vague, since you shift the meaning of "Sum" there slightly. So, let me try this again:
XXX if xxx is a process (summing values).
GetXXX if xxx is a thing (the sum of the values).)
Method names are verbs. Class, field and property names are nouns. In this case, Sum could pass as either a verb or a noun...
AddNumbersReturnTotal fits the above definition, but it's a little long. Out of kindness to the guy who gets to maintain my code (usually me!) I try to avoid including redundant words in identifiers, and I try to avoid words that are easy to make typos in.
