Given the string "5.2m*5.7m"
and a return type of Decimal,
calling
`System.Linq.Dynamic.DynamicExpression.Parse(returnType, expression);`
throws a syntax error at the position of the 'm' character.
After a bit of testing, the same applies to the 'd' suffix.
To give a bit of context: the reason for using the m suffix is to avoid another error, the multiplication of double*decimal, since the parser interprets a floating-point literal as double by default.
My questions are: Why does this happen? What would be the best way of solving the double*decimal problem: cast the value I know to be decimal by means of (decimal)5.7, or use Convert.ToDecimal(5.7)? And how much does Parse() really know? (I didn't find documentation on MSDN or the like.)
My question is: Why does this happen?
This happens because DynamicExpression uses a custom-built expression parser. It is made to resemble C# but it is not C#. Not everything that is valid in C# will work, and some things that are valid in C# work differently.
What would be the best way of solving the double*decimal problem: cast by means of (decimal)5.7 the value I know to be decimal, or use Convert.ToDecimal(5.7)?
Cast, but not using that syntax. The syntax to use is `type(expr)`, not `(type)expr`; see below.
How much does Parse() really know? (I didn't find documentation on MSDN or the like.)
A copy of the original documentation appears to be available at http://ak-dynamic-linq.azurewebsites.net/GettingStarted. I have not verified that the whole document is unmodified, but I have compared the below to the original documentation.
To quote:
The expression language permits explicit conversions using the syntax type(expr) or type"string", where type is a type name optionally followed by ? and expr is an expression or string is a string literal. This syntax may be used to perform the following conversions:
- Between two types provided Type.IsAssignableFrom is true in one or both directions.
- Between two types provided one or both are interface types.
- Between the nullable and non-nullable forms of any value type.
- Between string and any type that have static TryParse method.
- Between any two types belonging to the set consisting of SByte, Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Decimal, Single, Double, Char, any enum type, as well as the nullable forms of those types.
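Putting that together, here is a minimal sketch of the workaround, assuming the classic System.Linq.Dynamic package (the literal 5.2 parses as double, and Double-to-Decimal is in the supported conversion set):

using System;
using System.Linq.Dynamic;

class Demo
{
    static void Main()
    {
        // "5.2m*5.7m" is rejected, but the parser's own conversion syntax
        // type(expr) can produce decimals from the double literals.
        var expr = DynamicExpression.Parse(
            typeof(decimal), "Decimal(5.2) * Decimal(5.7)");
        Console.WriteLine(expr);  // prints the parsed expression tree
    }
}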
Related
Looking at the C# 6.0 draft specification, I saw nothing about primitive types; I only saw material about simple types. That said, the Type class has an IsPrimitive property.
Should IsPrimitive really be IsSimple?
The C# "simple types" are (alphabetically) bool, byte, char, decimal, double, float, int, long, sbyte, short, uint, ulong andushort . These are a set of struct types that C# has chosen to give special status, with special provisions other types don't get (as detailed in the standard), such as the ability to use them as constants.
Type.IsPrimitive is a different beast; it returns true for a limited set of value types (which C# formally calls "struct types", though C# developers very commonly call them "value types" anyway) that the runtime considers special in some way. These types are Boolean, Byte, Char, Double, Int16, Int32, Int64, IntPtr, SByte, Single, UInt16, UInt32, UInt64 and UIntPtr (all living in System). These types all have in common that they are directly supported by the runtime as built-in types, so they have operations that are implemented directly by the JIT compiler rather than as compiled IL. (There is one more value type that meets these criteria but is not on this list, for some reason: TypedReference. It is rarely used in managed languages, and detailing its purpose and use is something for another answer.)
The most striking difference between these lists is that C#'s simple type decimal is not a primitive type. This has some consequences: C# allows decimal constants, but the runtime does not -- they are really compiled as static readonly fields with some attribute magic, as Jon Skeet has detailed. The designers of C# considered decimal important enough to label it a simple type, but it's not a built-in type, so the compiler has to make up for the difference.
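You can observe this with reflection; a small sketch (the class name C is made up):

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class C
{
    public const decimal D = 1.5m;  // looks like a constant in C#...
}

class Program
{
    static void Main()
    {
        FieldInfo f = typeof(C).GetField("D");
        // ...but in IL it is a static readonly field carrying
        // [DecimalConstant], not a true metadata constant like an int const.
        Console.WriteLine(f.IsLiteral);   // False
        Console.WriteLine(f.IsInitOnly);  // True
        Console.WriteLine(f.IsDefined(typeof(DecimalConstantAttribute), false));  // True
    }
}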
The other important difference is that IntPtr and UIntPtr are built-in types, but C# does not consider them "simple", presumably since you're not really supposed to make much use of them in managed code outside interop scenarios, and also because they have restrictions that would not be shared by other simple types (you cannot declare IntPtr constants, not even on the IL level, because the actual size differs by platform).
So the short answer is: no, Type.IsPrimitive should not be named Type.IsSimple, although "primitive type" does not really have a single definition that I can see, beyond the raw listing of the types. "Built-in value type" does have a definition, which is almost but not entirely the same as what Type.IsPrimitive calls "primitive".
Why is decimal not a primitive type?
Console.WriteLine(typeof(decimal).IsPrimitive);
outputs false.
It is a base type, it's part of the specification of the language, but it is not a primitive. What primitive type(s) represent a decimal in the framework? An int, for example, has a field m_value of type int. A double has a field m_value of type double. That's not the case for decimal. It seems to be represented by a bunch of ints, but I'm not sure.
Why does it look like a primitive type and behave like one (except in a couple of cases), yet it is not a primitive type?
Although not a direct answer, the documentation for IsPrimitive lists what the primitive types are:
http://msdn.microsoft.com/en-us/library/system.type.isprimitive.aspx
A similar question was asked here:
http://bytes.com/topic/c-sharp/answers/233001-typeof-decimal-isprimitive-false-bug-feature
Answer quoted from Jon Skeet:
The CLR doesn't need to have any intrinsic knowledge about the decimal type - it treats it just as another value type which happens to have overloaded operators. There are no IL instructions to operate directly on decimals, for instance.
To me, it seems as though decimal is a type that must exist for a language/runtime wanting to be CLS/CLI-compliant (and is hence termed "primitive" because it is a base type with keyword support), but the actual implementation does not require it to be truly "primitive" (as in the CLR doesn't think it is a primitive data type).
Decimal is a 128-bit data type, which cannot be represented natively in computer hardware. For example, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses.
Wikipedia says that:
Depending on the language and its implementation, primitive data types may or may not have a one-to-one correspondence with objects in the computer's memory. However, one usually expects operations on basic primitive data types to be the fastest language constructs there are.
In the case of decimal, it is just a composite data type that uses integers internally, so its performance is slower than that of data types that have a direct correspondence to computer memory (ints, doubles, etc.).
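You can see those internal integers directly with decimal.GetBits; a small illustration:

using System;

class Program
{
    static void Main()
    {
        // A decimal is stored as four 32-bit integers: three for the
        // 96-bit mantissa, and one holding the sign bit and the scale.
        int[] bits = decimal.GetBits(5.7m);
        Console.WriteLine(string.Join(", ", bits));  // 57, 0, 0, 65536 (i.e. 57 / 10^1)
    }
}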
Consider the example below:
int i = 5;
float f = 1.3f;
decimal d = 10;
If you attach a debugger and inspect the generated native instructions, you will see that the assignments to int and float, both primitive types, each take a single native instruction, whereas assignments to non-primitive types such as decimal and string take more than one native instruction.
I have been trying to understand the use of "primitives" in Java and C# and the difference between them (if any). I have asked a series of questions on SO, and some of the answers seem to confuse the issue rather than clarify it. Some answers (and some MS documentation) appear to provide contradictory statements. From SO:
What are first-class objects in Java and C#?
Are primitive types different in Java and C#?
and from MS: http://msdn.microsoft.com/en-us/library/ms228360%28VS.80,lightweight%29.aspx
- "structs are very similar to classes"
- "the Int32 class wraps the int data type"
- "On the other hand, all primitive data types in C# are objects in the System namespace. For each data type, a short name, or alias, is provided. For instance, int is the short name for System.Int32".
My confusion lies largely with C# (I have programmed java for some while).
EDIT: The following paragraph has been confirmed to be correct by @Jon Skeet.
Java has two types (primitive and class). The words "value type" could be a synonym for primitive (although not widely used) and "reference type" for class. Java "wraps" primitives (int) in classes (Integer) and these classes have the complete power of any other class (can be null, used in collections, etc.)
EDIT: @Jon has given a very clear statement on C#, so I will delete my suggested truths and refer to his answer.
Further question: Is there a consensus on what the actual use of these terms should be? If there is a consensus, I'd be very grateful to have it spelled out explicitly. Otherwise I assume the terminology is muddled and therefore of limited use.
SUMMARY: Thanks for the very clear answers. There is a consensus (see the accepted answer from @Jon) among those who really understand this, and the MS docs are consistent (although they refer to Java in places and I misread those as applying to C#).
The first bullet point is correct.
The second is not: the primitive types in .NET are Boolean, Byte, SByte, Int16, UInt16, Int32, UInt32, Int64, UInt64, IntPtr, UIntPtr, Char, Double, and Single. A struct cannot usually be set to null, but there are also nullable value types (Nullable<T>). These are still value types, but there's syntactic sugar in C# to equate "null" with "the null value for the type", which for a nullable value type is an instance where HasValue returns false.
int and System.Int32 are exact synonyms in C#. The former is just an alias for the latter. They compile to exactly the same code.
In C#, classes and interfaces are always reference types. Structs and enums are value types - but there is a boxed equivalent of every struct (other than Nullable<T> which is handled differently by the CLR in terms of boxing). The boxed types don't have separate names, and can't be referred to explicitly in C# (although they can be in C++/CLI). There is no separate wrapper class equivalent like java.lang.Integer in .NET; it would be problematic to introduce such classes as you can create your own value types in .NET, unlike in Java.
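A small sketch of the nullable and boxing behaviour described above:

using System;

class Program
{
    static void Main()
    {
        int? n = null;                 // Nullable<int>: a value type whose HasValue is false
        Console.WriteLine(n.HasValue); // False

        // Boxing is where Nullable<T> gets its special CLR treatment:
        // boxing an empty nullable yields a genuine null reference,
        // not a boxed Nullable<int>.
        object boxed = n;
        Console.WriteLine(boxed == null);  // True
    }
}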
For more information on reference types and value types, see my article about it.
I haven't seen MS docs to be contradictory about this (MSDN is sometimes wrong, but on this specific issue, I've always seen it correct). The MSDN link you posted says:
For each primitive data type in Java, the core class library provides a wrapper class that represents it as a Java object. For example, the Int32 class wraps the int data type, and the Double class wraps the double data type.

On the other hand, all primitive data types in C# are objects in the System namespace. For each data type, a short name, or alias, is provided. For instance, int is the short name for System.Int32 and double is the short form of System.Double.
Basically, it's saying the right thing. In Java, the Integer class is a wrapper for the int primitive type. In C#, int is an alias for the System.Int32 structure. The first paragraph is about Java and doesn't apply to C#.
In .NET, the terminology is as follows:
A primitive type is a type whose IsPrimitive property is true. The primitive types are:
The primitive types are Boolean, Byte, SByte, Int16, UInt16, Int32, UInt32, Int64, UInt64, IntPtr, UIntPtr, Char, Double, and Single.
All primitive types are value types but not vice versa.
A value type is a type that has value semantics, as opposed to reference semantics: the whole value is copied when it is passed by value, not a reference to it. Local variables of value types are typically stored on the stack (though that is an implementation detail). structs and enums are value types.
As mentioned above, all primitive types are value types. They are structs in the System namespace. In C#, int, double, etc., keywords are basically aliases for those structs.
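A small sketch of the value semantics described above (Point is a made-up struct):

using System;

struct Point
{
    public int X;
}

class Program
{
    static void Main()
    {
        Point a = new Point { X = 1 };
        Point b = a;            // the whole value is copied, not a reference
        b.X = 2;
        Console.WriteLine(a.X); // 1: modifying the copy leaves the original alone
    }
}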
I heard someone say that in C#, capital-D Decimal uses more memory than lowercase decimal, because Decimal is resolved to the lowercase decimal and that resolution requires memory.
Is that true?
No.
decimal is simply an alias for System.Decimal. They're exactly the same and the alias is resolved at compile-time.
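You can convince yourself they are the same type at runtime:

using System;

class Program
{
    static void Main()
    {
        // The alias and the type name refer to one and the same type.
        Console.WriteLine(typeof(decimal) == typeof(System.Decimal));  // True
    }
}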
No, that is not true.
The decimal keyword is an alias for the type System.Decimal. They are the exact same type, so there is no memory difference and no performance difference. If you use reflection to look at the compiled code, it's not even possible to tell whether the alias or the system type was used in the source code.
There are two differences in where you can use the alias and the system type, though:
- The decimal alias always means the system type and cannot be changed in any way. Using the Decimal identifier relies on importing the System namespace; the unambiguous name for the system type is global::System.Decimal.
- Some language constructs only accept the alias, not the type. I can't think of an example for decimal, but when specifying the underlying type of an enum you can only use a language alias like int, not the corresponding system type like System.Int32.
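For example, the enum underlying-type restriction looks like this:

enum Color : int { Red, Green }        // compiles

// enum Shade : System.Int32 { Dark }  // error CS1008: Type byte, sbyte, short,
                                       // ushort, int, uint, long, or ulong expected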
No. That's just silly.
In C#, decimal is just a synonym for Decimal. The compiler treats decimal declarations as Decimal, and the compiled code is exactly as if Decimal had been used.
I've been trying to use decimal values as parameters for a field attribute, but I get a compiler error.
I found this blog post saying it isn't possible in .NET to use them; does anybody know why they chose this, or how I can use decimal parameters?
This is a CLR restriction. Only primitive constants or arrays of primitives can be used as attribute parameters. The reason is that an attribute must be encoded entirely in metadata, unlike a method body, which is compiled to IL. Being restricted to metadata severely limits the scope of values that can be used. In the current version of the CLR, metadata values are limited to primitives, null, types and arrays of primitives (I may have missed a minor one).
Decimals, while a basic type, are not a primitive type and hence cannot be represented in metadata, which prevents them from being used as attribute parameters.
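A minimal illustration (BadAttribute is a made-up name):

using System;

class BadAttribute : Attribute
{
    public BadAttribute(decimal d) { }  // the declaration itself compiles
}

// Applying it is what fails:
// [Bad(10.23m)]   // error CS0181: attribute constructor parameter 'd' has
// class Target {} // type 'decimal', which is not a valid attribute parameter type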
I had the same problem. I considered using strings. This is not type-safe, but it is readable, and I think we will manage to write valid numbers in strings :-).
using System;
using System.Globalization;

class BlahAttribute : Attribute
{
    private readonly decimal value;

    // The constructor must be public so the attribute can be applied.
    public BlahAttribute(string number)
    {
        // Parse with the invariant culture so "10.23" means the same
        // thing on every machine, regardless of locale.
        value = decimal.Parse(number, CultureInfo.InvariantCulture);
    }
}

[Blah("10.23")]
class Foo {}
It's not a beauty, but after considering all the options, it's good enough.
When I ran into this situation, I ended up exposing the properties on the attribute as a double, but inside the attribute treating them as decimal. Far from perfect, but for simple cases it just might be what you need.
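A minimal sketch of that approach (PriceAttribute is a made-up name):

using System;

class PriceAttribute : Attribute
{
    private readonly decimal value;

    // double is a valid attribute parameter type, so accept a double
    // and convert immediately; fine as long as the values round-trip cleanly.
    public PriceAttribute(double number)
    {
        value = (decimal)number;
    }
}

[Price(10.23)]
class Product {}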
For runtime tricks with attributes, I use the TypeConverter class.
You can use the following constructor. When you have a decimal literal in C# code, the C# compiler emits a call to this constructor.
Decimal(Int32, Int32, Int32, Boolean, Byte)
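That lets you pass a decimal to an attribute as its raw parts; a sketch (ExactAttribute is a made-up name):

using System;

class ExactAttribute : Attribute
{
    private readonly decimal value;

    // lo, mid, hi, isNegative, scale: the same five components the C# compiler
    // passes to this Decimal constructor when it emits a decimal literal.
    public ExactAttribute(int lo, int mid, int hi, bool isNegative, byte scale)
    {
        value = new decimal(lo, mid, hi, isNegative, scale);
    }
}

[Exact(1023, 0, 0, false, 2)]  // 1023 / 10^2 = 10.23
class Invoice {}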
Edit: I know this is not convenient.