Does System.Decimal use more memory than 'decimal'? - C#

I heard someone say that in C#, the capitalized Decimal uses more memory than the lowercase decimal, because Decimal is resolved to the lowercase decimal and that resolution requires memory.
Is that true?

No.
decimal is simply an alias for System.Decimal. They're exactly the same and the alias is resolved at compile-time.
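A quick way to convince yourself; a minimal sketch (the class name is made up for illustration):

```csharp
using System;

class AliasCheck
{
    static void Main()
    {
        // Both names denote the very same runtime type.
        Console.WriteLine(typeof(decimal) == typeof(System.Decimal)); // True
        Console.WriteLine(typeof(decimal).FullName);                  // System.Decimal
    }
}
```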

No, that is not true.
The decimal keyword is an alias for the type System.Decimal. They are the exact same type, so there is no memory difference and no performance difference. If you use reflection to look at the compiled code, it's not even possible to tell if the alias or the system type was used in the source code.
There are two differences in where you can use the alias and the system type, though:
The decimal alias is always the system type and can not be changed in any way. The use of the Decimal identifier relies on importing the System namespace. The unambiguous name for the system type is global::System.Decimal.
Some language constructs accept only the alias, not the system type. I can't think of an example for decimal, but when specifying the underlying type of an enum you can only use a language alias like int, not the corresponding system type like System.Int32, as the sketch below shows.
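A minimal sketch of that enum restriction (the enum names are made up for illustration):

```csharp
// The underlying type of an enum must be spelled with a keyword alias:
enum Color : int { Red, Green, Blue }   // compiles

// enum Broken : System.Int32 { Red }   // rejected by the compiler: only the
                                        // aliases byte, sbyte, short, ushort,
                                        // int, uint, long, or ulong are
                                        // accepted in an enum base clause
```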

No. That's just silly.
In C#, decimal is just a synonym for Decimal. The compiler treats decimal declarations as Decimal, and the compiled code is the same as if Decimal had been used.

Related

Why does the Type class have a method called IsPrimitive() if the C# spec refers to them as simple types?

Looking at the C# 6.0 draft specification, I saw nothing about primitive types; I only saw mentions of simple types. That said, the Type class has an IsPrimitive method.
Should IsPrimitive really be IsSimple?
The C# "simple types" are (alphabetically) bool, byte, char, decimal, double, float, int, long, sbyte, short, uint, ulong andushort . These are a set of struct types that C# has chosen to give special status, with special provisions other types don't get (as detailed in the standard), such as the ability to use them as constants.
Type.IsPrimitive is a different beast; it returns true for a limited set of value types (which C# formally calls "struct types", though C# developers very commonly call them "value types" anyway) that the runtime considers special in some way. These types are Boolean, Byte, Char, Double, Int16, Int32, Int64, IntPtr, SByte, Single, UInt16, UInt32, UInt64 and UIntPtr (all living in System). What these types have in common is that they are directly supported by the runtime as built-in types, so they have operations implemented directly by the JIT compiler rather than as compiled IL. (There is one more value type that meets these criteria but is not on this list, for some reason: TypedReference. It is rarely used in managed languages, and detailing its purpose and use is something for another answer.)
The most striking difference between these lists is that C#'s simple type decimal is not a primitive type. This has some consequences: C# allows decimal constants, but the runtime does not -- they are really compiled as static readonly fields with some attribute magic, as detailed by Jon Skeet here. The designers of C# considered decimal important enough to label it a simple type, but it's not a built-in type, so the compiler has to make up for the difference.
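A small sketch (the class name is made up for illustration) showing both halves of that: decimal fails the runtime's primitive test, and a const decimal surfaces through reflection as a field carrying DecimalConstantAttribute:

```csharp
using System;
using System.Reflection;

class DecimalConstDemo
{
    public const decimal Tax = 0.07m; // legal C# constant, but not an IL constant
    public const int Answer = 42;     // a true IL constant

    static void Main()
    {
        // decimal is a C# simple type but not a runtime primitive:
        Console.WriteLine(typeof(decimal).IsPrimitive); // False
        Console.WriteLine(typeof(int).IsPrimitive);     // True

        // The decimal "constant" is really a static field whose value is
        // encoded in a DecimalConstantAttribute:
        FieldInfo field = typeof(DecimalConstDemo).GetField(nameof(Tax));
        foreach (Attribute a in field.GetCustomAttributes())
            Console.WriteLine(a.GetType().Name); // DecimalConstantAttribute
    }
}
```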
The other important difference is that IntPtr and UIntPtr are built-in types, but C# does not consider them "simple", presumably since you're not really supposed to make much use of them in managed code outside interop scenarios, and also because they have restrictions that would not be shared by other simple types (you cannot declare IntPtr constants, not even on the IL level, because the actual size differs by platform).
So the short answer is: no, Type.IsPrimitive should not be named Type.IsSimple, although "primitive type" does not really have a single definition that I can see, beyond the raw listing of the types. "Built-in value type" does have a definition, which is almost but not entirely the same as what Type.IsPrimitive calls "primitive".

To what extent does DynamicExpression.Parse recognise C# symbols?

Given the string "5.2m*5.7m", a return type of Decimal, and the call
`System.Linq.Dynamic.DynamicExpression.Parse(returnType, expression);`
I get a syntax error pointing at the position of the 'm' character. After a bit of testing, the same applies to 'd'.
To give a bit of context, the reason for using the m suffix is to avoid another error: multiplying double*decimal, since a floating-point literal is interpreted as double by default.
My question is: why does this happen? And what would be the best way to solve the double*decimal problem: cast the value I know to be decimal with (decimal)5.7, or use Convert.ToDecimal(5.7)? How much does Parse() really know? (I didn't find documentation on MSDN or the like.)
My question is: Why does this happen?
This happens because DynamicExpression uses a custom-built expression parser. It is made to resemble C# but it is not C#. Not everything that is valid in C# will work, and some things that are valid in C# work differently.
And what would be the best way to solve the double*decimal problem: cast the value I know to be decimal with (decimal)5.7, or use Convert.ToDecimal(5.7)?
Cast, but not using this syntax. The syntax to use is type(expr), not (type)expr; see below.
How much does Parse() really know? (I didn't find documentation on MSDN or the like.)
A copy of the original documentation appears to be available at http://ak-dynamic-linq.azurewebsites.net/GettingStarted. I have not verified that the whole document is unmodified, but I have compared the below to the original documentation.
To quote:
The expression language permits explicit conversions using the syntax type(expr) or type"string", where type is a type name optionally followed by ? and expr is an expression or string is a string literal. This syntax may be used to perform the following conversions:
Between two types provided Type.IsAssignableFrom is true in one or both directions.
Between two types provided one or both are interface types.
Between the nullable and non-nullable forms of any value type.
Between string and any type that has a static TryParse method.
Between any two types belonging to the set consisting of SByte, Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Decimal, Single, Double, Char, any enum type, as well as the nullable forms of those types.
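So, assuming the classic System.Linq.Dynamic sample library, the decimal multiplication can be written with the parser's own conversion syntax. A minimal sketch (the expression string and class name are made up for illustration):

```csharp
using System;
using System.Linq.Dynamic;        // classic Dynamic LINQ sample library
using System.Linq.Expressions;

class DynamicDecimalDemo
{
    static void Main()
    {
        // No 'm' suffix exists in the parser's grammar; use Decimal(expr)
        // to convert the double literals before multiplying:
        Expression body = DynamicExpression.Parse(
            typeof(decimal), "Decimal(5.2) * Decimal(5.7)");

        var result = (decimal)Expression.Lambda(body).Compile().DynamicInvoke();
        Console.WriteLine(result); // 29.64
    }
}
```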

When and why to use aliased name or its class name in c#?

Why and when should we use int, short, long, double, string instead of Int32, Int16, Int64, Double, String respectively? I have read many articles about this but still haven't found a proper answer.
This is purely a matter of preference/tradition.
C# language specification states:
Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.
It doesn't matter at all in implementation. I personally prefer using the aliases, and from what I've seen of other code that's the more common choice. There are some who suggest using the BCL names (Int32 etc) everywhere though - including Jeffrey Richter, author of CLR via C# (where he gives that advice).
However, when you're naming methods and types, you should use the BCL name rather than the C#-specific name. That way your code is equally idiomatic to people using your code from other languages. This is the convention adopted within the BCL itself. For example, we have Convert.ToInt32 and Convert.ToSingle instead of Convert.ToInt and Convert.ToFloat.
While it doesn't matter in theory if those are internal or private members, I'd suggest getting in the habit of giving your members the names you'd want to expose publicly - it means you can be consistent there regardless of access, and you're less likely to accidentally let one slip through.
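A short sketch of that convention (the class and method names are made up for illustration):

```csharp
using System;

public static class ParseUtils
{
    // BCL names in the public surface, aliases inside the implementation:
    public static int ToInt32(string s) => int.Parse(s);
    public static float ToSingle(string s) => float.Parse(s);
}
```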
It truly does not matter; just be consistent. I personally use the original, lowercase names.

Why is decimal not a primitive type?

Why is decimal not a primitive type?
Console.WriteLine(typeof(decimal).IsPrimitive);
outputs false.
It is a base type; it's part of the language specification, but not a primitive. What primitive type(s) represent a decimal in the framework? An int, for example, has a field m_value of type int. A double has a field m_value of type double. That's not the case for decimal. It seems to be represented by a bunch of ints, but I'm not sure.
Why does it look like a primitive type, behaves like a primitive type (except in a couple of cases) but is not a primitive type?
Although not a direct answer, the documentation for IsPrimitive lists what the primitive types are:
http://msdn.microsoft.com/en-us/library/system.type.isprimitive.aspx
A similar question was asked here:
http://bytes.com/topic/c-sharp/answers/233001-typeof-decimal-isprimitive-false-bug-feature
Answer quoted from Jon Skeet:
The CLR doesn't need to have any intrinsic knowledge about the decimal type - it treats it just as another value type which happens to have overloaded operators. There are no IL instructions to operate directly on decimals, for instance.
To me, it seems as though decimal is a type that must exist for a language/runtime wanting to be CLS/CLI-compliant (and is hence termed "primitive" because it is a base type with keyword support), but the actual implementation does not require it to be truly "primitive" (as in the CLR doesn't think it is a primitive data type).
Decimal is a 128-bit data type, which cannot be represented natively by computer hardware. For example, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses.
Wikipedia says that
Depending on the language and its implementation, primitive data types may or may not have a one-to-one correspondence with objects in the computer's memory. However, one usually expects operations on basic primitive data types to be the fastest language constructs there are.
In the case of decimal, it is just a composite data type that uses integers internally, so its performance is slower than that of data types that correspond directly to computer memory (int, double, etc.).
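A minimal sketch of that "bunch of ints": decimal.GetBits exposes the four 32-bit integers that back a decimal (a 96-bit mantissa in the low three, plus sign and scale flags in the fourth):

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // lo, mid, hi of the 96-bit mantissa, then the sign/scale flags:
        int[] bits = decimal.GetBits(5.7m);         // 57 scaled down by 10^1
        Console.WriteLine(string.Join(", ", bits)); // 57, 0, 0, 65536
    }
}
```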
Consider the example below:
int i = 5;
float f = 1.3f;
decimal d = 10;
If you attach a debugger and inspect the generated native instructions, you will see that int and float, being primitive types, each take a single instruction to perform the assignment, whereas decimal and string, being non-primitive types, take more than one native instruction.

Best practice for using members of primitive types

How should I write it: long.MaxValue or Int64.MaxValue?
Is there a standard from Microsoft?
long.MaxValue
because you are coding in C#.
Int64 is the .NET type name.
Default StyleCop settings will tell you to use long.MaxValue.
This is subjective, but I find Int64.MaxValue easier to read, because the capital letter makes it stand out clearly as a static member of a type called Int64, rather than an instance member of a local variable called long (not that there could ever be a local variable called long of course). But I don't think anyone reading your code is going to be confused whichever way you do it!
I would match it to the type of the variable it is being assigned to. If you are using long in the variable declaration, use long.MaxValue.
Jeffrey Richter writes:
I prefer to use the FCL type names and completely avoid the primitive type names.
...
In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI treats long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it.
I prefer Int64.MaxValue.
Also think about this case:
float val = Single.Parse(..);
Single val = Single.Parse(..);
I do believe that the 2nd (FCL type name) is clearer than the 1st (built-in type name).
