Char to int implicit cast behind the scenes - C#

The following is valid in C#, as char can be implicitly converted to int:
int i = 'a';
I am just curious about what the .NET Framework does behind the scenes. I looked into the source code of the char and int types but was unable to find where this is written.
Can anybody explain what happens behind the scenes?

In C# we have something called a single-rooted unified type system. That means every existing type is a subtype of one root type: Object in C#. So char and int are just short names (aliases) for System.Char and System.Int32. The conversion from an untyped Object to a value type such as long or char is called unboxing (and the reverse direction, boxing).
This is a real difference compared to Java, where there are the primitive types int, char, etc. on one side and Object on the other; Java wrapper classes like Integer wrap the primitive types. https://en.wikipedia.org/wiki/Comparison_of_C_Sharp_and_Java
Because in C# everything is an Object (I am Object, for we are many!), all the value types implement IConvertible, as the others said.
And there is a mostly internally used enum, TypeCode, which is the runtime reflection information assigned to value-type objects like int. The Convert.To* methods use this type information to do their job.
http://referencesource.microsoft.com/#mscorlib/system/convert.cs
The mscorlib sources (System namespace) can be found on GitHub at https://github.com/dotnet/coreclr/tree/master/src/mscorlib/src/System
Further, because value types are full objects (structs), lots of type/meta information is available at runtime.
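A minimal sketch of the (un)boxing mentioned above:
int n = 42;
object boxed = n;          // boxing: the value is copied into an object on the heap
int unboxed = (int)boxed;  // unboxing: explicit cast back to the exact value type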
C# is a safe(r) language compared to C++, for example. It does not allow unsafe implicit casts.
The cast from char to int is a widening conversion: int is bigger/has more space than char, so it's safe to cast.
But that's also why it is not quite the same as (int)'a', as Joanvo wrote.
That is an explicit cast, and you are forcing .NET to do it, no matter whether it's safe or not.
For int i = 'a'; an implicit conversion is fine because of the widening.
But if you try char c = 42; Visual Studio and the compiler will not allow it, unless you force it with an explicit cast while being aware that maybe not all the information from the int will fit into the char.
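A minimal sketch of both directions (exact compiler messages vary by version):
char letter = 'a';
int widened = letter;           // implicit: every char fits into an int, so this compiles
// char narrowed = widened;     // does not compile: no implicit int -> char conversion
char narrowed = (char)widened;  // explicit cast: you accept possible truncation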
This is also the reason why you cannot use anything other than a boolean (expression) inside an if.
In C/C++ you can write if(1) { } or if(0) { }. That is not allowed in C#, where you have to write something like if(x == 1) { }. In C/C++ this is often used to check pointers for null.
You can overload the implicit and explicit operators for your own types, which is sometimes useful. At least it is useful to know that it is possible.
https://msdn.microsoft.com/en-us/library/z5z9kes2.aspx
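A minimal sketch of such a user-defined conversion, using a hypothetical Meters struct (the type and its members are made up for illustration):
struct Meters
{
    public double Value;
    public Meters(double value) { Value = value; }

    // Implicit: nothing can be lost, so no cast is required.
    public static implicit operator double(Meters m) => m.Value;

    // Explicit: the caller must opt in with a cast.
    public static explicit operator Meters(double d) => new Meters(d);
}

// Usage:
// Meters m = (Meters)3.5;  // explicit cast required
// double d = m;            // implicit, no cast needed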

As @shahwat and @joanvo said, it's the character's code (the ASCII/Unicode value, 97 for 'a') that will be stored in i.
See this,
char value = 'a';
Console.WriteLine(value);
Console.WriteLine((int)value);
Console.WriteLine(value == 'y');
Console.WriteLine(value.GetType());
Console.WriteLine(typeof(char));
Console.WriteLine((int)char.MinValue);
Console.WriteLine((int)char.MaxValue);
Output:
a
97 (Integer value of char)
False
System.Char
System.Char
0 (MinValue as an integer)
65535 (MaxValue as an integer)
Edit:
microsoft
On this Microsoft reference source you can find the implementation of every method.
Just click on a specific method and you will be navigated to the definition of that method.
You can also add a VS extension suited to your requirement.
For example:
Download VS extensions
This way you get extra functionality in VS and can see the implementation by just right-clicking on a method and selecting "Go To Definition".
For your scenario:
int i = 'a';
This is not really a method (in some sense).
To view its implementation you have to use
Roslyn
I am not sure whether this would be possible with ReSharper, but I have provided one way to get this done. Hope this is of some help to you.
Here is a step-by-step guide for your scenario:
1. Set up the environment (install VS and Roslyn) here:
Roslyn
2. Enable .NET Framework source debugging (if it's not enabled by default):
Enable Source Debug
3. Put a breakpoint on your statement int i = 'a';
4. Press F11 to step through all the implementation behind this statement.
That's all you have to do.
Important note:
I would like to add one more thing: if the above method does not work for you, then it is probably something handled at the MSIL level. In that case you need to analyse what Roslyn generates from this code.
If that is the case, you can also check it with any reflector/decompiler tool that will show you the generated MSIL.
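For what it's worth, with a constant like 'a' there may be nothing to step into at runtime: the compiler folds the char literal into an int constant at compile time. A rough sketch of the IL you would see for this statement (exact opcodes can differ by compiler and build settings):
// C# source:
int i = 'a';
// Approximate IL for the statement above:
//     ldc.i4.s 97   // the literal 'a' has already become the constant 97
//     stloc.0       // store it in the local variable i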

The source code for the .NET primitive types is at http://referencesource.microsoft.com/, as indicated by @Sadaquat.
The specific implementation you're talking about is probably the implementation of IConvertible.ToInt32() on the Char struct.
That simply returns Convert.ToInt32(char value), which itself simply returns the value, since a char is an integer value behind the scenes.
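A tiny illustrative check of that behavior:
char c = 'a';
Console.WriteLine(Convert.ToInt32(c)); // 97: the char's code unit value, no parsing involved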

Related

Convert Char To Boolean

In C++, 0 is assumed to be false while all other values are true. I was under the impression that in C# this concept was the same.
I'm trying to convert a char to a bool.
char c = (char)0;
Convert.ToBoolean(c).Dump();
It seems like no matter what char I try to convert, I always get an error:
Invalid cast from 'Char' to 'Boolean'
I understand what I can do to fix this if I write my own custom function, but what I am trying to understand is:
What is the purpose of this method? What Char value converts to Bool?
You stated:
I was under the impression that in C# this concept was the same.
You were mistaken. It isn't. The two languages behave differently in that way, and you simply cannot convert a Char to a Boolean.
The documentation makes it clear that the method always fails:
Calling this method always throws InvalidCastException.
and...
Return Value
Type: System.Boolean
This conversion is not supported. No value is returned.
As evidenced by the source for Char.ToBoolean():
[__DynamicallyInvokable]
bool IConvertible.ToBoolean(IFormatProvider provider)
{
    object[] values = new object[] { "Char", "Boolean" };
    throw new InvalidCastException(Environment.GetResourceString("InvalidCast_FromTo", values));
}
As the Char struct implements IConvertible, it is required to provide this method. But since the conversion is not possible, an exception is always thrown.
Just to show how it can be done with the unsafe keyword (similar to C++). Don't use this:
char c = (char)0;
unsafe
{
    bool b = *((bool*)&c);
}
What is the purpose of this method...?
From the .NET 2.0 documentation:
This method is reserved for future use.
Perhaps they were considering implementing it along the lines of C++ but eventually decided not to.
EDIT - IConvertible is not the reason
There seems to be some confusion about the char.ToBoolean() method.
((IConvertible)someChar).ToBoolean(...);
is a separate issue from Convert.ToBoolean. (For why the explicit conversion is needed first, see here.) The former had to be implemented when char was made IConvertible. The latter could simply have not been created, so the question of why that method (Convert.ToBoolean(char)) exists seems to be answered only by backwards compatibility plus the original intention.
(If I'm the one misunderstanding this - please let me know, of course.)
This should work, though only for chars that are decimal digits ('0' through '9'); for anything else the inner Convert.ToInt32 throws a FormatException:
Convert.ToBoolean(Convert.ToInt32(charVar.ToString()))
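A minimal alternative sketch, under the (assumed) C-style convention that the NUL character is false and everything else is true:
char c = 'a';
bool b = c != '\0'; // NUL is false, any other char is true, mirroring the C/C++ convention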

Strange (possibly wrong?) C# compiler behavior with method overloading and enums

Today I discovered a very strange behavior with C# function overloading. The problem occurs when I have a method with two overloads, one accepting Object and the other accepting an Enum of any type. When I pass 0 as the parameter, the Enum version of the method is called. When I use any other integer value, the Object version is called. I know this can be easily fixed by using an explicit cast, but I want to know why the compiler behaves that way. Is this a bug or just some strange language rule I don't know about?
The code below explains the problem (checked with runtime 2.0.50727)
Thanks for any help on this,
Grzegorz Kyc
class Program
{
    enum Bar
    {
        Value1,
        Value2,
        Value3
    }

    static void Main(string[] args)
    {
        Foo(0);
        Foo(1);
        Console.ReadLine();
    }

    static void Foo(object a)
    {
        Console.WriteLine("object");
    }

    static void Foo(Bar a)
    {
        Console.WriteLine("enum");
    }
}
It may be that you're not aware that there's an implicit conversion from a constant¹ 0 to any enum:
Bar x = 0; // Implicit conversion
Now, the conversion from 0 to Bar is more specific than the conversion from 0 to object, which is why the Foo(Bar) overload is used.
Does that clear everything up?
¹ There's actually a bug in the Microsoft C# compiler which lets it be any zero constant, not just an integer:
const decimal DecimalZero = 0.0m;
...
Bar x = DecimalZero;
It's unlikely that this will ever be fixed, as it could break existing working code. I believe Eric Lippert has a couple of blog posts which go into much more detail.
The C# specification section 6.1.3 (C# 4 spec) has this to say about it:
An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type and to any nullable-type whose underlying type is an enum-type. In the latter case the conversion is evaluated by converting to the underlying enum-type and wrapping the result (§4.1.10).
That actually suggests that the bug isn't just in allowing the wrong type, but allowing any constant 0 value to be converted rather than only the literal value 0.
EDIT: It looks like the "constant" part was partially introduced in the C# 3 compiler. Previously it was only some constant values; now it looks like it's all of them.
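A quick sketch of the difference, assuming a C# 3 or later Microsoft compiler (the spec itself only requires the literal case):
const int Zero = 0;       // a constant zero, but not the literal 0
Bar fromLiteral = 0;      // always allowed: the literal 0 converts to any enum type
Bar fromConstant = Zero;  // accepted by the Microsoft compiler, beyond what the spec requires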
I know I have read somewhere else that the .NET system always treats zero as a valid enumeration value, even if it actually isn't. I will try to find some reference for this...
OK, well I found this, which quotes the following and attributes it to Eric Gunnerson:
Enums in C# do dual purpose. They are used for the usual enum use, and they're also used for bit fields. When I'm dealing with bit fields, you often want to AND a value with the bit field and check if it's true.
Our initial rules meant that you had to write:
if ((myVar & MyEnumName.ColorRed) != (MyEnumName) 0)
which we thought was difficult to read. One alternative was to define a zero entry:
if ((myVar & MyEnumName.ColorRed) != MyEnumName.NoBitsSet)
which was also ugly.
We therefore decided to relax our rules a bit, and permit an implicit conversion from the literal zero to any enum type, which allows you to write:
if ((myVar & MyEnumName.ColorRed) != 0)
which is why PlayingCard(0, 0) works.
So it appears that the whole reason behind this was simply to allow comparing to zero when checking flags, without having to cast the zero.
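For context, a minimal flags sketch showing the comparison the team wanted to keep readable (the enum name and values are made up for illustration):
[Flags]
enum MyFlags { None = 0, Red = 1, Blue = 2 }

MyFlags myVar = MyFlags.Red | MyFlags.Blue;
if ((myVar & MyFlags.Red) != 0)   // the literal 0 converts implicitly to MyFlags
{
    Console.WriteLine("Red bit is set");
}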

Why does this work?

Why does this work? I'm not complaining, just want to know.
void Test()
{
    int a = 1;
    int b = 2;
    What<int>(a, b);
    // Why does this next line work?
    What(a, b);
}

void What<T>(T a, T b)
{
}
It works because a and b are integers, so the compiler can infer the generic type argument for What.
In C# 3, the compiler can also infer the type argument even when the types don't match, as long as a widening conversion makes sense. For instance, if c were a long, then What(a, c) would be interpreted as What<long>.
Note that if, say, c were a string, it wouldn't work.
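A short sketch of both cases just described (names chosen to match the explanation):
int a = 1;
long c = 2L;
What(a, c);       // inferred as What<long>: int widens to long
// string s = "x";
// What(a, s);    // does not compile: int and string share no inferable T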
The C# compiler supports type inference for generics; the same idea is also commonly seen when you use the var keyword.
Here int is inferred from the context (a and b), so <int> is not needed. It keeps code cleaner and easier to read at times.
Sometimes your code may be clearer to read if you let the compiler infer the type; sometimes it may be clearer if you explicitly specify the type. It is a judgement call in your given situation.
It's using type inference for generic methods. Note that this has changed between C# 2 and 3. For example, this wouldn't have worked in C# 2:
What("hello", new object());
... whereas it would in C# 3 (or 4). In C# 2, type inference was performed on a per-argument basis, and the results had to match exactly. In C# 3, each argument contributes information which is then put together to infer the type arguments.
C# 3 also supports multi-phase type inference, where the compiler can work out one type argument and then see if it has any more information on the rest (e.g. due to lambda expressions with implicit parameter types). Basically it keeps going until it can't get any more information, or it finishes, or it sees contradictory information. The type inference in C# isn't as powerful as the Hindley-Milner algorithm, but it works better in other ways (in particular, it always makes forward progress).
See section 7.4.2 of the C# 3 spec for more information.
The compiler infers the generic type parameter from the types of the actual parameters that you passed.
This feature makes LINQ calls much simpler. (You don't need to write numbers.Select<int, string>(i => i.ToString()), because the compiler infers the int from numbers and the string from ToString.)
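A minimal sketch of that LINQ case (the numbers list is made up for illustration):
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3 };
// Both type arguments of Select are inferred; no need for Select<int, string>.
IEnumerable<string> strings = numbers.Select(i => i.ToString());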
The compiler can infer type T to be int since both parameters passed into What() are of type int. You'll notice a lot of the LINQ extensions are defined generically (on IEnumerable<T>) but are typically used in the manner you show.
If the subject of how this works in C# 3.0 is interesting to you, here's a little video of me explaining it from back in 2006, when we were first designing the version of the feature for C# 3.0.
http://blogs.msdn.com/ericlippert/archive/2006/11/17/a-face-made-for-email-part-three.aspx
See also the "type inference" section of my blog:
http://blogs.msdn.com/ericlippert/archive/tags/Type+Inference/default.aspx
The compiler is smart enough to figure out that the generic type is int.

What is the difference between Convert.ToInt32 and (int)?

The following code throws a compile-time error like
Cannot convert type 'string' to 'int'
string name = Session["name1"].ToString();
int i = (int)name;
whereas the code below compiles and executes successfully:
string name = Session["name1"].ToString();
int i = Convert.ToInt32(name);
I would like to know:
Why does the first snippet generate a compile-time error?
What's the difference between the two code snippets?
(int)foo is simply a cast to the Int32 (int in C#) type. This is built into the CLR and requires that foo be a numeric variable (e.g. float, long, etc.) In this sense, it is very similar to a cast in C.
Convert.ToInt32 is designed to be a general conversion function. It does a good deal more than casting: it can convert from any primitive type to an int (most notably, parsing a string). You can see the full list of overloads for this method here on MSDN.
And as Stefan Steiger mentions in a comment:
Also, note that on a numerical level, (int)foo truncates foo toward zero (like Math.Truncate(foo)), while Convert.ToInt32(foo) uses banker's rounding (rounds x.5 to the nearest EVEN integer, like Math.Round(foo)). The result is thus not just implementation-wise, but also numerically, not the same.
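A quick sketch of that numeric difference (values chosen to expose the midpoint behavior):
Console.WriteLine((int)2.5);              // 2  (truncates toward zero)
Console.WriteLine((int)3.5);              // 3  (truncates toward zero)
Console.WriteLine(Convert.ToInt32(2.5));  // 2  (banker's rounding: to nearest even)
Console.WriteLine(Convert.ToInt32(3.5));  // 4  (banker's rounding: to nearest even)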
(This line relates to a question that was merged.) You should never use (int)someString; that will never work, and the compiler won't let you.
However, int int.Parse(string) and bool int.TryParse(string, out int) (and their various overloads) are fair game.
Personally, I mainly only use Convert when I'm dealing with reflection, so for me the choice is between Parse and TryParse. The first is for when I expect the value to be a valid integer and want it to throw an exception otherwise. The second is for when I want to check whether it is a valid integer; I can then decide what to do when it is or isn't.
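A minimal sketch of both options (the input strings are made up for illustration):
int value = int.Parse("123");            // throws FormatException if the string is not a valid integer

if (int.TryParse("abc", out int maybe))  // returns false instead of throwing
{
    Console.WriteLine(maybe);
}
else
{
    Console.WriteLine("not a valid integer");
}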
To quote from this Eric Lippert article:
Cast means two contradictory things: "check to see if this object really is of this type, throw if it is not" and "this object is not of the given type; find me an equivalent value that belongs to the given type".
So what you were trying to do in 1) is assert that, yes, a String is an Int. But that assertion fails, since a String is not an int.
The reason 2) succeeds is that Convert.ToInt32() parses the string and returns an int. It can still fail; for example:
Convert.ToInt32("Hello");
would result in a FormatException.
To sum up, converting from a String to an Int is a framework concern, not something implicit in the .NET type system.
A string cannot be cast to an int through explicit casting. It must be converted using int.Parse.
Convert.ToInt32 basically wraps this method:
public static int ToInt32(string value)
{
    if (value == null)
    {
        return 0;
    }
    return int.Parse(value, CultureInfo.CurrentCulture);
}
You're talking about a C# casting operation vs. the .NET conversion utilities.
C# language-level casting uses parentheses, e.g. (int), and conversion support for it is limited, relying on implicit compatibility between the types or on instructions explicitly defined by the developer via conversion operators.
Many conversion methods exist in the .NET Framework, e.g. System.Convert, to allow conversion between the same or disparate data types.
(Casting) syntax works on numeric data types and on "compatible" data types. Compatible means data types for which there is a relationship established through inheritance (i.e. base/derived classes) or through implementation (i.e. interfaces).
Casting can also work between disparate data types that have conversion operators defined.
The System.Convert class on the other hand is one of many available mechanisms to convert things in the general sense; it contains logic to convert between disparate, known, data types that can be logically changed from one form into another.
Conversion even covers some of the same ground as casting by allowing conversion between similar data types.
Remember that the C# language has its own way of doing some things.
And the underlying .NET Framework has its own way of doing things, apart from any programming language.
(Sometimes they overlap in their intentions.)
Think of casting as a C# language-level feature that is more limited in nature, and conversion via the System.Convert class as one of many available mechanisms in the .NET framework to convert values between different kinds.
There is no default cast from string to int in .NET. You can use int.Parse() or int.TryParse() to do this. Or, as you have done, you can use Convert.ToInt32().
However, in your example, why do a ToString() and then convert it back to an int at all? You could simply store the int in Session and retrieve it as follows:
int i = (int)Session["name1"];
Just a brief extra: in different circumstances (e.g. if you're converting a double, etc. to an Int32) you might also want to worry about rounding when choosing between these two. Convert.ToInt32 will use banker's rounding (MSDN); (int) will just truncate to an integer.
1) C# is a type-safe language and doesn't allow you to assign a string to a number.
2) The second case parses the string into a new variable.
In your case, if the Session is an ASP.NET session, then you don't have to store a string there and convert it back when retrieving:
int iVal = 5;
Session["Name1"] = iVal;
int iVal1 = (int)Session["Name1"];
This has already been discussed, but I want to share a dotnetfiddle.
If you are dealing with arithmetic operations and using float, decimal, double and so on, you are better off using Convert.ToInt32():
using System;

public class Program
{
    public static void Main()
    {
        double cost = 49.501;
        Console.WriteLine(Convert.ToInt32(cost));
        Console.WriteLine((int)cost);
    }
}
Output
50
49
https://dotnetfiddle.net/m3ddDQ
Convert.ToInt32 ultimately does:
return int.Parse(value, CultureInfo.CurrentCulture);
but (int) is a type cast, so (int)"2" will not work, since you cannot cast a string to an int. You can, however, parse it, as Convert.ToInt32 does.
The difference is that the first snippet is a cast and the second is a convert. Although I think perhaps the compiler error is providing more confusion here because of the wording. Perhaps it would be better if it said "Cannot cast type 'string' to 'int'".
This is old, but another difference is that (int) doesn't round the number when you have a double, e.g. 5.7: the output using (int) will be 5, whereas if you use Convert.ToInt32() the number will be rounded to 6.
