// Main method (top-level statements)
Belle jhbh = new();
IOpera kujb = (IOpera)jhbh;

interface IOpera {
}
interface IFire : IOpera {
}
class Belle : IFire {
}
Can you please tell me how this casting works?
To me the following two variants would do the same thing:
1. IOpera kujb = (IOpera)jhbh;
2. IOpera kujb = jhbh;
I mean, at the end of the day it is kujb that decides which members are accessible and which are not, right?
And since, in both cases, kujb points to the same object, what is the point of casting?
I think I am getting something horribly wrong.
class Belle and interface IOpera are two different types. The point of the interface is that it guarantees that whatever implements it will provide the members declared within it.
what is the point of casting then?
Precisely that these are two different types: the cast gives you a reference of the interface type to the same object:
IOpera kujb = (IOpera)jhbh;
As MSDN says about this:
These kinds of operations are called type conversions. In C#, you can
perform the following kinds of conversions:
Implicit conversions: No special syntax is required because the
conversion always succeeds and no data will be lost. Examples include
conversions from smaller to larger integral types, and conversions
from derived classes to base classes.
Explicit conversions (casts): Explicit conversions require a cast
expression. Casting is required when information might be lost in the
conversion, or when the conversion might not succeed for other
reasons. Typical examples include numeric conversion to a type that
has less precision or a smaller range, and conversion of a base-class
instance to a derived class.
User-defined conversions: User-defined conversions are performed by
special methods that you can define to enable explicit and implicit
conversions between custom types that do not have a base class–derived
class relationship. For more information, see User-defined conversion
operators.
Conversions with helper classes: To convert between non-compatible
types, such as integers and System.DateTime objects, or hexadecimal
strings and byte arrays, you can use the System.BitConverter class,
the System.Convert class, and the Parse methods of the built-in
numeric types, such as Int32.Parse. For more information, see How to
convert a byte array to an int, How to convert a string to a number, and How to convert between hexadecimal strings and numeric types.
UPDATE:
Upcasting can be done without an explicit cast, so this is an example of an implicit conversion:
Engineer engineer = new Engineer();
IEmployee employee = engineer;
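Going the other way requires an explicit cast, because not every IEmployee is an Engineer. A sketch reusing the same example types:
IEmployee employee = new Engineer(); // implicit upcast, always succeeds
Engineer engineer = (Engineer)employee; // explicit downcast; throws InvalidCastException if the object is not an Engineer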
enum Season { spring, summer, fall, winter }
int a = (int)Season.spring;
Is this an unboxing or just a normal cast?
If it is unboxing, could you explain why?
I thought it was just a normal cast, because enum and int are both value types.
As documented in ECMA-334 §11.3.3, C# defines a conversion to and from an enum type and its underlying type. However, this does not specify whether or not it is an unboxing conversion. It can be deduced, though, from the fact that both the enum and the integer are value types that no unboxing is involved.
ECMA-335, which defines the CLR, makes it more clear that an enum is not just convertible to an integer, it actually is the integer, there is no conversion done at all:
14.3 Enums
For binding purposes (e.g., for locating a method definition from the method reference used to call it) enums shall be distinct from their underlying type. For all other purposes, including verification and execution of code, an unboxed enum freely interconverts with its underlying type. Enums can be boxed (§13) to a corresponding boxed instance type, but this type is not the same as the boxed type of the underlying type, so boxing does not lose the original type of the enum.
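A short demonstration of that last point, reusing the Season enum from the question:
Season s = Season.spring;
int a = (int)s; // no boxing: the value is simply reinterpreted
object boxed = s; // boxing preserves the enum type
Console.WriteLine(boxed.GetType()); // prints Season, not System.Int32
Season back = (Season)boxed; // unboxes as the enum type
int b = (int)boxed; // also legal: the CLR lets a boxed enum unbox to its underlying type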
Given the string "5.2m*5.7m", with the return type being Decimal, calling
`System.Linq.Dynamic.DynamicExpression.Parse(returnType, expression);`
gives a syntax error pointing at the position of the 'm' character.
After a bit of testing, the same applies to 'd'.
To give a bit of context, the reason for using m is to avoid another error, the multiplication of double*decimal, since the parser interprets a floating-point literal as double by default.
My question is: Why does this happen? And what would be the best way of solving the double*decimal problem: cast the value I know to be decimal by means of (decimal)5.7, or use Convert.ToDecimal(5.7)? How much does Parse() really know? (I didn't find documentation on MSDN or the like.)
My question is: Why does this happen?
This happens because DynamicExpression uses a custom-built expression parser. It is made to resemble C# but it is not C#. Not everything that is valid in C# will work, and some things that are valid in C# work differently.
And what would be the best way of solving the double*decimal problem: cast by means of (decimal)5.7 the value I know to be decimal, or use Convert.ToDecimal(5.7)?
Cast, but not using this syntax. The syntax to use is type(expr), not (type)expr, see below.
How much does Parse() really know? (I didn't find documentation on MSDN or the like.)
A copy of the original documentation appears to be available at http://ak-dynamic-linq.azurewebsites.net/GettingStarted. I have not verified that the whole document is unmodified, but I have compared the below to the original documentation.
To quote:
The expression language permits explicit conversions using the syntax type(expr) or type"string", where type is a type name optionally followed by ? and expr is an expression or string is a string literal. This syntax may be used to perform the following conversions:
Between two types provided Type.IsAssignableFrom is true in one or both directions.
Between two types provided one or both are interface types.
Between the nullable and non-nullable forms of any value type.
Between string and any type that have static TryParse method.
Between any two types belonging to the set consisting of SByte, Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Decimal, Single, Double, Char, any enum type, as well as the nullable forms of those types.
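So instead of C#'s literal suffix, spell the conversion with the parser's own type(expr) syntax. A minimal sketch, assuming the classic System.Linq.Dynamic package (Decimal is one of the expression language's predefined types):
using System;
using System.Linq.Expressions;
using System.Linq.Dynamic;

class Demo
{
    static void Main()
    {
        // "5.2m" is C# syntax, not Dynamic LINQ; use Decimal(expr) instead.
        Expression body = DynamicExpression.Parse(
            typeof(decimal), "Decimal(5.2) * Decimal(5.7)");
        var lambda = Expression.Lambda<Func<decimal>>(body);
        Console.WriteLine(lambda.Compile()()); // 29.64
    }
}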
Why is decimal not a primitive type?
Console.WriteLine(typeof(decimal).IsPrimitive);
outputs false.
It is a base type, part of the specification of the language, but not a primitive. Which primitive type(s) represent a decimal in the framework? An int, for example, has a field m_value of type int, and a double has a field m_value of type double. That's not the case for decimal. It seems to be represented by a bunch of ints, but I'm not sure.
Why does it look like a primitive type and behave like a primitive type (except in a couple of cases), but is not one?
Although not a direct answer, the documentation for IsPrimitive lists what the primitive types are:
http://msdn.microsoft.com/en-us/library/system.type.isprimitive.aspx
A similar question was asked here:
http://bytes.com/topic/c-sharp/answers/233001-typeof-decimal-isprimitive-false-bug-feature
Answer quoted from Jon Skeet:
The CLR doesn't need to have any intrinsic knowledge about the decimal
type - it treats it just as another value type which happens to have
overloaded operators. There are no IL instructions to operate directly
on decimals, for instance.
To me, it seems as though decimal is a type that must exist for a language/runtime wanting to be CLS/CLI-compliant (and is hence termed "primitive" because it is a base type with keyword support), but the actual implementation does not require it to be truly "primitive" (as in the CLR doesn't think it is a primitive data type).
Decimal is a 128-bit data type, which cannot be represented natively by computer hardware. For example, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses.
Wikipedia says that
Depending on the language and its implementation, primitive data types
may or may not have a one-to-one correspondence with objects in the
computer's memory. However, one usually expects operations on basic
primitive data types to be the fastest language constructs there are.
In the case of decimal, it is just a composite data type which uses integers internally, so its performance is slower than that of data types that have a direct correspondence to computer memory (int, double, etc.).
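You can observe that composite layout directly with decimal.GetBits, which returns the four 32-bit integers backing a decimal (a 96-bit mantissa plus one word for sign and scale):
int[] bits = decimal.GetBits(1.5m);
Console.WriteLine(string.Join(", ", bits));
// 15, 0, 0, 65536 -> mantissa 15, scale 1, i.e. 15 * 10^-1 = 1.5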
Consider the example below:
int i = 5;
float f = 1.3f;
decimal d = 10;
If you attach a debugger and inspect the native instructions generated for these assignments (the original post showed the disassembly as a screenshot), you can see that int and float, being primitive types, take a single instruction to perform the assignment, whereas decimal, being a non-primitive type (like string), takes more than one native instruction.
In Jesse Liberty's Learning C# book, he says "Objects of one type can be converted into objects of another type. This is called casting."
If you investigate the IL generated from the code below, you can clearly see that the cast assignment isn't doing the same thing as the converted assignment. In the former, you can see the boxing/unboxing occurring; in the latter you can see a call to a convert method.
I know in the end it may be just a silly semantic difference--but is casting just another word for converting? I don't mean to be snarky, but I'm not interested in anyone's gut feeling on this--opinions don't count here! Can anyone point to a definitive reference that confirms or denies whether casting and converting are the same thing?
object x;
int y;
x = 4;
y = ( int )x;
y = Convert.ToInt32( x );
Thank you
rp
Note added after Matt's comment about explicit/implicit:
I don't think implicit/explicit is the difference. In the code I posted, the change is explicit in both cases. An implicit conversion is what occurs when you assign a short to an int.
Note to Sklivvz:
I wanted confirmation that my suspicion of the looseness of Jesse Liberty's (otherwise usually lucid and clear) language was correct. I understand that casting is rooted in the object hierarchy--i.e., you can't cast from an integer to a string, but you could cast from a custom exception derived from System.Exception to a System.Exception.
It's interesting, though, that when you do try to cast from an int to a string the compiler tells you that it couldn't "convert" the value. Maybe Jesse is more correct than I thought!
Absolutely not!
Convert tries to get you an Int32 via "any means possible". Cast does nothing of the sort. With a cast you are telling the compiler to treat the object as an Int32, without conversion.
You should always use a cast when you know (by design) that the object is an Int32 or another type that has a casting operator to Int32 (like float, for example).
Convert should be used with String, or with other classes.
Try this
static void Main(string[] args)
{
    long l = long.MaxValue;
    Console.WriteLine(l);

    byte b = (byte)l;
    Console.WriteLine(b);

    b = Convert.ToByte(l);
    Console.WriteLine(b);
}
Result:
9223372036854775807
255
Unhandled Exception:
System.OverflowException: Value is greater than Byte.MaxValue or less than Byte.MinValue
  at System.Convert.ToByte (Int64 value) [0x00000]
  at Test.Main (System.String[] args) [0x00019] in /home/marco/develop/test/Exceptions.cs:15
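Note that the plain cast truncates only because the default context is unchecked; if you opt into overflow checking, the cast throws just like Convert.ToByte does:
long l = long.MaxValue;
byte b = unchecked((byte)l); // 255: silently keeps the low 8 bits
// byte c = checked((byte)l); // would throw System.OverflowException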
The simple answer is: it depends.
For value types, casting will involve genuinely converting it to a different type. For instance:
float f = 1.5f;
int i = (int) f; // Conversion
When the casting expression unboxes, the result (assuming it works) is usually just a copy of what was in the box, with the same type. There are exceptions, however - you can unbox from a boxed int to an enum (with an underlying type of int) and vice versa; likewise you can unbox from a boxed int to a Nullable<int>.
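For instance, both of these unboxing quirks are legal (a sketch, using DayOfWeek as an arbitrary int-backed enum):
object box = 42; // a boxed int
DayOfWeek day = (DayOfWeek)box; // unboxes a boxed int as an int-backed enum
int? maybe = (int?)box; // unboxes a boxed int as a Nullable<int>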
When the casting expression is from one reference type to another and no user-defined conversion is involved, there's no conversion as far as the object itself is concerned - only the type of the reference "changes" - and that's really only the way that the value is regarded, rather than the reference itself (which will be the same bits as before). For example:
object o = "hello";
string x = (string) o; // No data is "converted"; x and o refer to the same object
When user-defined conversions get involved, this usually entails returning a different object/value. For example, you could define a conversion to string for your own type - and
this would certainly not be the same data as your own object. (It might be an existing string referred to from your object already, of course.) In my experience user-defined conversions usually exist between value types rather than reference types, so this is rarely an issue.
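A minimal sketch of such a user-defined conversion (the Celsius type is made up for illustration):
struct Celsius
{
    public double Degrees;
    public Celsius(double degrees) { Degrees = degrees; }

    // User-defined explicit conversion: returns a brand-new string,
    // not a re-typed reference to the original value.
    public static explicit operator string(Celsius c)
        => c.Degrees + " degrees C";
}

// Usage:
// var c = new Celsius(21.5);
// string s = (string)c; // "21.5 degrees C"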
All of these count as conversions in terms of the specification - but they don't all count as converting an object into an object of a different type. I suspect this is a case of Jesse Liberty being loose with terminology - I've noticed that in Programming C# 3.0, which I've just been reading.
Does that cover everything?
The best explanation that I've seen is quoted below, followed by a link to the source:
"... The truth is a bit more complex than that. .NET provides
three methods of getting from point A to point B, as it were.
First, there is the implicit cast. This is the cast that doesn't
require you to do anything more than an assignment:
int i = 5;
double d = i;
These are also called "widening conversions" and .NET allows you to
perform them without any cast operator because you could never lose any
information doing it: the possible range of valid values of a double
encompasses the range of valid values for an int and then some, so
you're never going to do this assignment and then discover to your
horror that the runtime dropped a few digits off your int value. For
reference types, the rule behind an implicit cast is that the cast
could never throw an InvalidCastException: it is clear to the compiler
that the cast is always valid.
You can make new implicit cast operators for your own types (which
means that you can make implicit casts that break all of the rules, if
you're stupid about it). The basic rule of thumb is that an implicit
cast can never include the possibility of losing information in the
transition.
Note that the underlying representation did change in this
conversion: a double is represented completely differently from an int.
The second kind of conversion is an explicit cast. An explicit cast is
required wherever there is the possibility of losing information, or
there is a possibility that the cast might not be valid and thus throw
an InvalidCastException:
double d = 1.5;
int i = (int)d;
Here you are obviously going to lose information: i will be 1 after the
cast, so the 0.5 gets lost. This is also known as a "narrowing"
conversion, and the compiler requires that you include an explicit cast
(int) to indicate that yes, you know that information may be lost, but
you don't care.
Similarly, with reference types the compiler requires explicit casts in
situations in which the cast may not be valid at run time, as a signal
that yes, you know there's a risk, but you know what you're doing.
The third kind of conversion is one that involves such a radical change
in representation that the designers didn't provide even an explicit
cast: they make you call a method in order to do the conversion:
string s = "15";
int i = Convert.ToInt32(s);
Note that there is nothing that absolutely requires a method call here.
Implicit and explicit casts are method calls too (that's how you make
your own). The designers could quite easily have created an explicit
cast operator that converted a string to an int. The requirement that
you call a method is a stylistic choice rather than a fundamental
requirement of the language.
The stylistic reasoning goes something like this: String-to-int is a
complicated conversion with lots of opportunity for things going
horribly wrong:
string s = "The quick brown fox";
int i = Convert.ToInt32(s);
As such, the method call gives you documentation to read, and a broad
hint that this is something more than just a quick cast.
When designing your own types (particularly your own value types), you
may decide to create cast operators and conversion functions. The lines
dividing "implicit cast", "explicit cast", and "conversion function"
territory are a bit blurry, so different people may make different
decisions as to what should be what. Just try to keep in mind
information loss, and potential for exceptions and invalid data, and
that should help you decide."
Bruce Wood, November 16th 2005
http://bytes.com/forum/post1068532-4.html
Casting involves References
List<int> myList = new List<int>();
//up-cast
IEnumerable<int> myEnumerable = (IEnumerable<int>) myList;
//down-cast
List<int> myOtherList = (List<int>) myEnumerable;
Notice that operations against myList, such as adding an element, are reflected in myEnumerable and myOtherList. This is because they are all references (of varying types) to the same instance.
Up-casting is safe. Down-casting can generate run-time errors if the programmer has made a mistake in the type. Safe down-casting is beyond the scope of this answer.
Converting involves Instances
List<int> myList = new List<int>();
int[] myArray = myList.ToArray();
myList is used to produce myArray. This is a non-destructive conversion (myList works perfectly fine after this operation). Also notice that operations against myList, such as adding an element, are not reflected in myArray. This is because they are completely separate instances.
There are operations using the cast syntax in C# that are actually conversions, for example:
decimal w = 1.1m;
int x = (int)w; // x is 1; the fractional part is discarded
Semantics aside, a quick test shows they are NOT equivalent!
They do the task differently (or perhaps, they do different tasks).
x=-2.5 (int)x=-2 Convert.ToInt32(x)=-2
x=-1.5 (int)x=-1 Convert.ToInt32(x)=-2
x=-0.5 (int)x= 0 Convert.ToInt32(x)= 0
x= 0.5 (int)x= 0 Convert.ToInt32(x)= 0
x= 1.5 (int)x= 1 Convert.ToInt32(x)= 2
x= 2.5 (int)x= 2 Convert.ToInt32(x)= 2
Notice the x = -1.5 and x = 1.5 cases: the cast truncates toward zero, while Convert.ToInt32 rounds to the nearest integer, with ties going to the nearest even number (banker's rounding).
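A quick way to reproduce the table:
foreach (double x in new[] { -2.5, -1.5, -0.5, 0.5, 1.5, 2.5 })
    Console.WriteLine($"x={x,4}  (int)x={(int)x,2}  Convert.ToInt32(x)={Convert.ToInt32(x),2}");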
A cast is telling the compiler/interpreter that the object in fact is of that type (or has a base type/interface of that type). It's a pretty fast thing to do compared to a convert, where it's no longer the compiler/interpreter doing the job but a function actually parsing a string and doing math to convert it to a number.
Casting always means changing the data type of an object. This can be done, for instance, by converting a float value into an integer value, or by reinterpreting the bits. It is usually a language-supported (read: compiler-supported) operation.
The term "converting" is sometimes used for casting, but it is usually done by some library or your own code and does not necessarily produce the same result as casting. For example, if you have an imperial weight value and convert it to metric weight, it may stay the same data type (say, float) but become a different number. Another typical example is converting from degrees to radians.
In a language- / framework-agnostic manner of speaking, converting from one type or class to another is known as casting. This is true for .NET as well, as your first four lines show:
object x;
int y;
x = 4;
y = ( int )x;
C and C-like languages (such as C#) use the (newtype)somevar syntax for casting. In VB.NET, for example, there are explicit built-in functions for this. The last line would be written as:
y = CInt(x)
Or, for more complex types:
y = CType(x, newtype)
Where 'C' obviously is short for 'cast'.
.NET also has the Convert class, however. This isn't a built-in language feature (unlike the above two), but rather part of the framework. This becomes clearer when you use a language that isn't necessarily tied to .NET: such languages very likely have their own means of casting, but it's .NET that adds Convert().
As Matt says, the difference in behavior is that Convert() is more explicit. Rather than merely telling the compiler to treat x as an integer, you are specifically telling it to alter x in such a way that it is suitable for the integer class, and then assign the result to y.
In your particular case, the casting does what is called 'unboxing', whereas Convert() will actually get the integer value. The result will appear the same, but there are subtle differences better explained by Keith.
According to Table 1-7 titled "Methods for Explicit Conversion" on page 55 in Chapter 1, Lesson 4 of MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation, there is certainly a difference between them.
System.Convert is language-independent and converts "Between types that implement the System.IConvertible interface."
(type) cast operator is a C#-specific language feature that converts "Between types that define conversion operators."
Furthermore, when implementing custom conversions, advice differs between them.
Per the section titled How to Implement Conversion in Custom Types on pp. 56-57 in the lesson cited above, conversion operators (casting) are meant for simplifying conversions between numeric types, whereas Convert() enables culture-specific conversions.
Which technique you choose depends on the type of conversion you want to perform:
Define conversion operators to simplify narrowing and widening
conversions between numeric types.
Implement System.IConvertible to enable conversion through
System.Convert. Use this technique to enable culture-specific conversions.
...
It should be clearer now that, since the cast conversion operator is implemented separately from the IConvertible interface, Convert() is not necessarily merely another name for casting. (But I can envision where one implementation may refer to the other to ensure consistency.)
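A sketch contrasting the two mechanisms on a made-up type (Meters is hypothetical; only the members relevant here are given meaningful bodies):
using System;

struct Meters : IConvertible
{
    public double Value;
    public Meters(double value) { Value = value; }

    // Used by the cast syntax: (double)m -- a C# conversion operator.
    public static explicit operator double(Meters m) => m.Value;

    // Used by System.Convert: Convert.ToDouble(m) routes through IConvertible.
    public TypeCode GetTypeCode() => TypeCode.Object;
    public double ToDouble(IFormatProvider p) => Value;
    public string ToString(IFormatProvider p) => Value.ToString(p);
    public object ToType(Type t, IFormatProvider p) => Convert.ChangeType(Value, t, p);

    // The remaining IConvertible members are not meaningful for this type.
    public bool ToBoolean(IFormatProvider p) => throw new InvalidCastException();
    public byte ToByte(IFormatProvider p) => throw new InvalidCastException();
    public char ToChar(IFormatProvider p) => throw new InvalidCastException();
    public DateTime ToDateTime(IFormatProvider p) => throw new InvalidCastException();
    public decimal ToDecimal(IFormatProvider p) => (decimal)Value;
    public short ToInt16(IFormatProvider p) => throw new InvalidCastException();
    public int ToInt32(IFormatProvider p) => throw new InvalidCastException();
    public long ToInt64(IFormatProvider p) => throw new InvalidCastException();
    public sbyte ToSByte(IFormatProvider p) => throw new InvalidCastException();
    public float ToSingle(IFormatProvider p) => (float)Value;
    public ushort ToUInt16(IFormatProvider p) => throw new InvalidCastException();
    public uint ToUInt32(IFormatProvider p) => throw new InvalidCastException();
    public ulong ToUInt64(IFormatProvider p) => throw new InvalidCastException();
}

class Program
{
    static void Main()
    {
        var m = new Meters(5.0);
        Console.WriteLine((double)m);           // 5: via the conversion operator
        Console.WriteLine(Convert.ToDouble(m)); // 5: via IConvertible
    }
}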
Casting is essentially just telling the runtime to "pretend" the object is the new type. It doesn't actually convert or change the object in any way.
Convert, however, will perform operations to turn one type into another.
As an example:
char caster = '5';
Console.WriteLine((int)caster);
The output of those statements will be 53, because all the runtime did is look at the bit pattern and treat it as an int. What you end up getting is the character code of '5' (which matches its ASCII value), rather than the number 5.
Convert.ToInt32(caster) on the char also yields 53, because for a char it likewise returns the character code. To get 5 you have to convert from text, e.g. Convert.ToInt32(caster.ToString()), which actually parses the digit and produces its numeric value.
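A quick check of the actual behavior:
char caster = '5';
Console.WriteLine((int)caster); // 53
Console.WriteLine(Convert.ToInt32(caster)); // 53: char -> character code
Console.WriteLine(Convert.ToInt32(caster.ToString())); // 5: parses the text
Console.WriteLine(char.GetNumericValue(caster)); // 5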
The difference there is whether the conversion is implicit or explicit. The first one up there is a cast, the second one is a more explicit call to a function that converts. They probably go about doing the same thing in different ways.