Why is Convert.ToDouble(char) not supported? - c#

From the MSDN page:
public static double ToDouble(char value)
Parameters
value: Type: System.Char. The Unicode character to convert.
Return Value
Type: System.Double. This conversion is not supported. No value is returned.
If it is not supported, why is it implemented in the first place?

It is not the only one. Convert.ToBoolean(char), ToDateTime, ToDecimal and ToSingle are also not supported; they all throw InvalidCastException, just like ToDouble does.
This is just .NET design trying to keep you out of trouble. Converting a char to an integral type is reasonable, you can look at the Unicode mapping tables and count the codepoints. But what would a conversion to Boolean mean? What Unicode code point is True? ToDateTime requires no explanation. How could a character ever be a fractional value? There are no half or quarter codepoints.
You can make it work: convert to Int32 first and then convert to Double. But by all means, check your code and make sure that it is a sensible thing to do. The .NET designers thought it wasn't. They were right.
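For illustration, here is a minimal sketch of that two-step route (the variable names are just for the demo); the direct Convert.ToDouble(char) call throws, while the char -> Int32 -> Double detour works:
char c = 'a';
try
{
    double bad = Convert.ToDouble(c);   // the overload exists but always throws
}
catch (InvalidCastException ex)
{
    Console.WriteLine(ex.Message);      // reports that the cast from Char to Double is not valid
}
double d = Convert.ToDouble(Convert.ToInt32(c));  // convert the code point first, then widen
Console.WriteLine(d);                             // 97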

As per MSDN, this overload has been reserved since .NET 2.0 and is kept through 4.5 to stay compatible with earlier versions of the framework.
It could be implemented if a future platform supported this kind of conversion. Currently a char is stored as an integer, so no direct way to cast a char to a double is provided, since it would involve a chain of inner conversions.
Due to the internal storage format, the same limitation applies to Convert.ToDouble(DateTime).

Each character has a corresponding integer. For example:
Convert.ToInt16('a') -> returns 97.
I guess the main reason why Convert doesn't support converting a char to the other types is that the second nature of a character is its integer value.
Maybe a more clear example is the following code:
char a = 'a';
int aVal = (int)a;
which is essentially what Convert.ToInt32 does (although Convert can additionally throw an OverflowException when the value does not fit the target type)

A char can be implicitly converted to ushort, int, uint, long, ulong, float, double, or decimal.
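For example, a short sketch of a few of those implicit conversions (no casts required):
char c = 'a';
ushort u = c;    // 97
int i = c;       // 97
double d = c;    // 97.0
decimal m = c;   // 97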

Related

Why does using operators on short (Struct.Int16) result in Struct.Int32 at runtime, and sometimes not require any casting at compile time? [duplicate]

Any reason to prefer single precision to double precision data type?

I am used to choosing the smallest data type needed to fully represent my values while preserving semantics. I don't use long when int is guaranteed to suffice. Same for int vs short.
But for real numbers, in C# there is the commonly used double -- and no corresponding single or float. I can still use System.Single, but I wonder why C# didn't bother to make it into a language keyword like they did with double.
In contrast, there are language keywords short, int, long, ushort, uint, and ulong.
So, is this a signal to developers that single-precision is antiquated, deprecated, or should otherwise not be used in favor of double or decimal?
(Needless to say, single-precision has the downside of less precision. That's a well-known tradeoff for smaller size, so let's not focus on that.)
edit: My apologies, I mistakenly thought that float isn't a keyword in C#. But it is, which renders this question moot.
The float alias represents the .NET System.Single data type so I would say it's safe to use.
There is the following correspondence between C# keywords and .NET type names:
double - System.Double
float - System.Single
So there's one keyword in C# for each of the two types in question.
I don't know how you got the impression that float was not a C# keyword. It certainly is.
Actually there is a "float" keyword for single-precision floating point.
Also don't be so sure about short or byte being a better fit than int. int is usually the best choice for integer numbers, read more about it here: Why should I use int instead of a byte or short in C#
By default, any literal like 2.0 is interpreted as a double unless otherwise specified. This could contribute to the consistently higher use of double compared to other floating-point representations. Just a side note.
As far as the absence of single goes, the float keyword translates to the System.Single type.
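A small sketch of both points, the double default for real literals and the float/System.Single alias (the variable names are illustrative):
double d = 2.0;      // an unsuffixed real literal is a double
float f = 2.0f;      // the f/F suffix makes it a float (System.Single)
// float g = 2.0;    // would not compile: there is no implicit double-to-float conversion
Console.WriteLine(f.GetType());   // System.Single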
In C#, System.Single is aliased to float.

Why is ushort + ushort equal to int?

Previously today I was trying to add two ushorts and I noticed that I had to cast the result back to ushort. I thought it might've become a uint (to prevent a possible unintended overflow?), but to my surprise it was an int (System.Int32).
Is there some clever reason for this or is it maybe because int is seen as the 'basic' integer type?
Example:
ushort a = 1;
ushort b = 2;
ushort c = a + b; // <- "Cannot implicitly convert type 'int' to 'ushort'. An explicit conversion exists (are you missing a cast?)"
uint d = a + b; // <- "Cannot implicitly convert type 'int' to 'uint'. An explicit conversion exists (are you missing a cast?)"
int e = a + b; // <- Works!
Edit: Like GregS' answer says, the C# spec says that both operands (in this example 'a' and 'b') should be converted to int. I'm interested in the underlying reason for why this is part of the spec: why doesn't the C# spec allow for operations directly on ushort values?
The simple and correct answer is "because the C# Language Specification says so".
Clearly you are not happy with that answer and want to know "why does it say so". You are looking for "credible and/or official sources"; that's going to be a bit difficult. These design decisions were made a long time ago; 13 years is a lot of dog lives in software engineering. They were made by the "old timers", as Eric Lippert calls them; they've moved on to bigger and better things and don't post answers here to provide an official source.
It can be inferred, however, at the risk of being merely credible. Any managed compiler, like C#'s, has the constraint that it needs to generate code for the .NET virtual machine. The rules for that are carefully (and quite readably) described in the CLI spec. It is the Ecma-335 spec; you can download it for free from here.
Turn to Partition III, chapters 3.1 and 3.2. They describe the two IL instructions available to perform an addition, add and add.ovf. Click the link to Table 2, "Binary Numeric Operations"; it describes what operands are permissible for those IL instructions. Note that there are just a few types listed there. byte and short, as well as all unsigned types, are missing. Only int, long, IntPtr and floating point (float and double) are allowed. There are additional constraints, marked by an x; you can't add an int to a long, for example. These constraints are not entirely artificial; they are based on things you can do reasonably efficiently on available hardware.
Any managed compiler has to deal with this in order to generate valid IL. That isn't difficult, simply convert the ushort to a larger value type that's in the table, a conversion that's always valid. The C# compiler picks int, the next larger type that appears in the table. Or in general, convert any of the operands to the next largest value type so they both have the same type and meet the constraints in the table.
Now there's a new problem, however, a problem that drives C# programmers pretty nutty. The result of the addition is of the promoted type. In your case that will be int. So adding two ushort values of, say, 0x9000 and 0x9000 has a perfectly valid int result: 0x12000. Problem is: that's a value that doesn't fit back into a ushort. The value overflowed. But it didn't overflow in the IL calculation; it only overflows when the compiler tries to cram it back into a ushort. 0x12000 is truncated to 0x2000. A bewilderingly different value that only makes some sense when you count with 2 or 16 fingers, not with 10.
Notable is that the add.ovf instruction doesn't deal with this problem. It is the instruction to use to automatically generate an overflow exception. But it doesn't help here: the actual calculation on the converted ints didn't overflow.
This is where the real design decision comes into play. The old-timers apparently decided that simply truncating the int result to ushort was a bug factory. It certainly is. They decided that you have to acknowledge that you know that the addition can overflow and that it is okay if it happens. They made it your problem, mostly because they didn't know how to make it theirs and still generate efficient code. You have to cast. Yes, that's maddening, I'm sure you didn't want that problem either.
Quite notable is that the VB.NET designers took a different solution to the problem. They actually made it their problem and didn't pass the buck. You can add two UShorts and assign the result to a UShort without a cast. The difference is that the VB.NET compiler generates extra IL to check for the overflow condition. That's not cheap code; it makes every short addition about three times as slow. But it is a good part of the reason why Microsoft maintains two languages that have otherwise very similar capabilities.
Long story short: you are paying a price because you use a type that's not a very good match with modern CPU architectures. Which in itself is a Really Good Reason to use uint instead of ushort. Getting traction out of ushort is difficult; you'll need a lot of them before the memory savings outweigh the cost of manipulating them. And it's not just the limited CLI spec: an x86 core takes an extra CPU cycle to load a 16-bit value because of the operand prefix byte in the machine code. Not actually sure if that is still the case today; it used to be, back when I still paid attention to counting cycles. A dog year ago.
Do note that you can feel better about these ugly and dangerous casts by letting the C# compiler generate the same code that the VB.NET compiler generates. So you get an OverflowException when the cast turned out to be unwise. Use Project > Properties > Build tab > Advanced button > tick the "Check for arithmetic overflow/underflow" checkbox. Just for the Debug build. Why this checkbox isn't turned on automatically by the project template is another very mystifying question btw, a decision that was made too long ago.
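To make that concrete, here is a small sketch (my own variable names) contrasting the silently truncating cast with a checked cast, which raises the same OverflowException the compiler switch would give you project-wide:
ushort a = 0x9000;
ushort b = 0x9000;
int sum = a + b;                            // 0x12000, the promoted int result
ushort truncated = (ushort)(a + b);         // 0x2000 when overflow checking is off (the default)
ushort guarded = checked((ushort)(a + b));  // throws OverflowException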
ushort x = 5, y = 12;
The following assignment statement will produce a compilation error, because the arithmetic expression on the right-hand side of the assignment operator evaluates to int by default.
ushort z = x + y; // Error: conversion from int to ushort
http://msdn.microsoft.com/en-us/library/cbf1574z(v=vs.71).aspx
EDIT:
For arithmetic operations on ushort, the operands are converted to a type that can hold all possible values, so that overflow can be avoided; the candidate types, in order, are int, uint, long and ulong.
Please see the C# Language Specification. In that document, go to section 4.1.5 Integral types (around page 80 in the Word document). Here you will find:
For the binary +, –, *, /, %, &, ^, |, ==, !=, >, <, >=, and <=
operators, the operands are converted to type T, where T is the first
of int, uint, long, and ulong that can fully represent all possible
values of both operands. The operation is then performed using the
precision of type T, and the type of the result is T (or bool for the
relational operators). It is not permitted for one operand to be of
type long and the other to be of type ulong with the binary operators.
Eric Lippert has stated in a question:
Arithmetic is never done in shorts in C#. Arithmetic can be done in
ints, uints, longs and ulongs, but arithmetic is never done in shorts.
Shorts promote to int and the arithmetic is done in ints, because like
I said before, the vast majority of arithmetic calculations fit into
an int. The vast majority do not fit into a short. Short arithmetic is
possibly slower on modern hardware which is optimized for ints, and
short arithmetic does not take up any less space; it's going to be
done in ints or longs on the chip.
From the C# language spec:
7.3.6.2 Binary numeric promotions
Binary numeric promotion occurs for the operands of the predefined +, –, *, /, %, &, |, ^, ==, !=, >, <, >=, and <= binary operators. Binary numeric promotion implicitly converts both operands to a common type which, in case of the non-relational operators, also becomes the result type of the operation. Binary numeric promotion consists of applying the following rules, in the order they appear here:
· If either operand is of type decimal, the other operand is converted to type decimal, or a binding-time error occurs if the other operand is of type float or double.
· Otherwise, if either operand is of type double, the other operand is converted to type double.
· Otherwise, if either operand is of type float, the other operand is converted to type float.
· Otherwise, if either operand is of type ulong, the other operand is converted to type ulong, or a binding-time error occurs if the other operand is of type sbyte, short, int, or long.
· Otherwise, if either operand is of type long, the other operand is converted to type long.
· Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long.
· Otherwise, if either operand is of type uint, the other operand is converted to type uint.
· Otherwise, both operands are converted to type int.
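A quick sketch of those promotion rules in action; var is used so the compiler picks the promoted type, and the inferred types are noted in the comments:
byte b = 10;
short s = 20;
int i = -1;
uint u = 5;
var r1 = b + s;       // int: both operands promote to int
var r2 = u + i;       // long: uint mixed with a signed int promotes both to long
var r3 = u + 5u;      // uint: both operands are already uint
// var r4 = 1L + 1UL; // compile-time error: long and ulong cannot be mixed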
There is no specific intent behind it. It is just an effect of applying the rules of overload resolution, which state that the first overload whose parameters the arguments can be implicitly converted to is the one that will be used.
This is stated in the C# Specification, section 7.3.6 as follows:
Numeric promotion is not a distinct mechanism, but rather an effect of applying overload resolution to the predefined operators.
It goes on illustrating with an example:
As an example of numeric promotion, consider the predefined implementations of the binary * operator:
int operator *(int x, int y);
uint operator *(uint x, uint y);
long operator *(long x, long y);
ulong operator *(ulong x, ulong y);
float operator *(float x, float y);
double operator *(double x, double y);
decimal operator *(decimal x, decimal y);
When overload resolution rules (§7.5.3) are applied to this set of operators, the effect is to select the first of the operators for which implicit conversions exist from the operand types. For example, for the operation b * s, where b is a byte and s is a short, overload resolution selects operator *(int, int) as the best operator.
Your question is, in fact, a bit tricky. The reason why this specification is part of the language is... because they took that decision when they created the language. I know this sounds like a disappointing answer, but that's just how it is.
However, the real answer would probably involve many contextual decisions made back in 1999-2000. I am sure the team who made C# had pretty robust debates about all those language details.
...
C# is intended to be a simple, modern, general-purpose, object-oriented programming language.
Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
Support for internationalization is very important.
...
The quote above is from the Wikipedia article on C#.
All of those design goals might have influenced their decision. For instance, in the year 2000, most systems were already natively 32-bit, so they might have decided to discourage variables smaller than that, since such values get converted to 32 bits anyway when performing arithmetic operations, which is generally slower.
At that point, you might ask me: if there is an implicit conversion on those types, why did they include them at all? Well, one of their design goals, as quoted above, is portability.
Thus, if you need to write a C# wrapper around an old C or C++ program, you might need those types to store some values. In that case, those types are pretty handy.
That's a decision Java did not make. For instance, if you write a Java program that interacts with a C++ program from which you receive ushort values, well, Java only has short (which is signed), so you can't easily assign one to the other and expect correct values.
The next available type in Java that could receive such a value is int (32 bits, of course). You have just doubled your memory use, which might not be a big deal unless you have to instantiate an array of 100,000 elements.
In fact, we must remember that those decisions were made by looking at both the past and the future, in order to provide a smooth transition from one to the other.
But now I feel that I am diverging from the initial question.
So your question is a good one, and hopefully I was able to bring some answers to you, even though I know that's probably not what you wanted to hear.
If you'd like, you can read more about the C# spec; there is some documentation linked below that might interest you.
Integral types
The checked and unchecked operators
Implicit Numeric Conversions Table
By the way, I believe you should probably reward habib-osu for it, since he provided a fairly good answer to the initial question with a proper link. :)
Regards

Instead of error, why don't both operands get promoted to float or double?

1) If one operand is of type ulong while the other operand is of type sbyte/short/int/long, a compile-time error occurs. I fail to see the logic in this. Thus, why would it be a bad idea for both operands to instead be promoted to type double or float?
long L = 100;
ulong UL = 1000;
double d = L + UL; // error: operator '+' cannot be applied to operands of type 'ulong' and 'long'
b) The compiler implicitly converts an int literal into the byte type and assigns the resulting value to b:
byte b = 1;
But if we try to assign a literal of type ulong to type long (or to types int, byte, etc.), then the compiler reports an error:
long L = 1000UL;
I would think the compiler would be able to figure out whether the result of a constant expression could fit into a variable of type long?!
thank you
To answer the question marked (1) -- adding signed and unsigned longs is probably a mistake. If the intention of the developer is to overflow into inexact arithmetic in this scenario then that's something they should do explicitly, by casting both arguments to double. Doing so implicitly is hiding mistakes more often than it is doing the right thing.
To answer the question marked (b) -- of course the compiler could figure that out. Obviously it can because it does so for integer literals. But again, this is almost certainly an error. If your intention was to make that a signed long then why did you mark it as unsigned? This looks like a mistake. C# has been carefully designed so that it looks for weird patterns like this and calls your attention to them, rather than making a guess that you meant to say this weird thing and blazing on ahead as if everything were normal. The compiler is trying to encourage you to write sensible code; sensible code does not mix signed and unsigned types.
Why should it?
Generally, the two types are incompatible because long is signed. You are only describing a special case.
For byte b = 1;, the literal 1 has no type of its own as such and can be coerced into byte.
For long L = 1000UL;, the literal "1000UL" does have an explicit type (ulong) and is incompatible; see my general case above.
Example from "ulong" on MSDN:
When an integer literal has no suffix,
its type is the first of these types
in which its value can be represented:
int, uint, long, ulong.
and then
There is no implicit conversion from
ulong to any integral type
On "long" in MSDN (my bold)
When an integer literal has no suffix,
its type is the first of these types
in which its value can be represented:
int, uint, long, ulong.
It's quite common and logical and utterly predictable.
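A small sketch of how the literal typing rules quoted above play out (the commented-out line is the one that fails):
byte b = 1;              // ok: 1 is an int constant that fits in a byte
long L1 = 1000;          // ok: the int literal converts implicitly to long
// long L2 = 1000UL;     // error: the UL suffix makes the literal a ulong, and there is no implicit ulong-to-long conversion
long L3 = (long)1000UL;  // fine with an explicit cast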
long l = 100;
ulong ul = 1000;
double d = l + ul; // error
Why would it be a bad idea for both operands to instead be promoted to type double or float?
Which one? Floats? Or doubles? Or maybe decimals? Or longs? There's no way for the compiler to know what you are thinking. Also, type information generally flows out of expressions, not into them, so the compiler can't use the target of the assignment to choose either.
The fix is to simply specify which type you want by casting one or both of the arguments to that type.
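For example, a minimal sketch of that fix, reusing the variables from the question; casting either operand to double makes the other convert implicitly:
long L = 100;
ulong UL = 1000;
// double d = L + UL;               // error: no '+' overload takes (long, ulong)
double d1 = (double)L + UL;         // ok: UL converts implicitly to double
double d2 = (double)L + (double)UL; // ok: or cast both explicitly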
The compiler doesn't consider what you do with the result when it determines the result type of an expression. The rules for how types are promoted in an expression only consider the values in the expression itself, not what you do with the value later on.
In the case where you assign the result to a variable, it could be possible to use that information, but consider a statement like this:
Console.Write(L + UL);
The Write method has overloads that take several different data types, which would make it rather complicated to decide how to use that information.
For example, there is an overload that takes a string, so one possible way to promote the types (and a good candidate as it doesn't lose any precision) would be to first convert both values to strings and then concatenate them, which is probably not the result that you were after.
Simple answer is that's just the way the language spec is written:
http://msdn.microsoft.com/en-us/library/y5b434w4(v=VS.80).aspx
You can argue over whether the rules of implicit conversions are logical in each case, but at the end of the day these are just the rules the design committee decided on.
Any implicit conversion has a downside in that it's doing something the programmer may not expect. The general principle with C# seems to be to raise an error in these cases rather than try to guess what the programmer meant.
Suppose one variable was equal to 9223372036854775807 and the other was equal to -9223372036854775806. What should the result of the addition be? Converting the two values to double would round them to 9223372036854775808 and -9223372036854775808, respectively; performing the addition would then yield 0.0 (exactly). By contrast, if the arithmetic were done on the signed values themselves, the result would be 1 (also exact). It would be possible to convert both operands to type Decimal and do the math exactly. Converting to Double after the fact would require an explicit cast, however.
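A quick sketch of the rounding described above; both long values round to +/-2^63 as doubles, so the double sum collapses to zero, while decimal (and plain long arithmetic) keeps the exact answer:
long a = 9223372036854775807;                 // long.MaxValue
long b = -9223372036854775806;
double viaDouble = (double)a + (double)b;     // 0.0: both operands round to +/-2^63
decimal viaDecimal = (decimal)a + (decimal)b; // 1: decimal represents both values exactly
long exact = a + b;                           // 1: no overflow in long arithmetic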

Why does (string)int32 always throw: Cannot convert type 'int' to 'string'

Why does (string)int32 always throw: Cannot convert type 'int' to 'string'
public class Foo
{
    private int FooID;

    public Foo()
    {
        FooID = 4;
        string s = (string)FooID;      // throws compile error
        string sss = FooID.ToString(); // no compile error
    }
}
Because there is no type conversion defined from Int32 to string. That's what the ToString method is for.
If you did this:
string s = (string)70;
What would you expect to be in s?
A. "70" the number written the way humans would read it.
B. "+70" the number written with a positive indicator in front.
C. "F" the character represented by ASCII code 70.
D. "\x00\x00\x00F" the four bytes of an int each separately converted to their ASCII representation.
E. "\x0000F" the int split into two sets of two bytes each representing a Unicode character.
F. "1000110" the binary representation for 70.
G. "$70" the integer converted to a currency
H. Something else.
The compiler can't tell so it makes you do it the long way.
There are two "long ways". The first is to use one of the Convert.ToString() overloads, something like this:
string s = Convert.ToString(-70, 10);
This means that it will convert the number to a string using base 10 notation. If the number is negative it displays a "-" at the start; otherwise it just shows the number. However, if you convert to binary, octal or hexadecimal, negative numbers are displayed in two's complement, so Convert.ToString(-70, 16) becomes "ffffffba".
The other "long way" is to use ToString with a string formatter like this:
string s2 = 70.ToString("D");
The D is a formatter code and tells the ToString method how to convert to a string. Some of the interesting codes are listed below:
"D" Decimal format which is digits 0-9 with a "-" at the start if required. E.g. -70 becomes "-70".
"D8" I've shown 8 but could be any number. The same as decimal, but it pads with zeros to the required length. E.g. -70 becomes "-00000070".
"N" Thousand separators are inserted and ".00" is added at the end. E.g. -1000 becomes "-1,000.00".
"C" A currency symbol is added at the start after the "-" then it is the same as "N". E.g. Using en-Gb culture -1000 becomes "-£1,000.00".
"X" Hexadecimal format. E.g. -70 becomes "46".
Note: These formats are dependent upon the current culture settings so if you are using en-Us you will get a "$" instead of a "£" when using format code "C".
For more information on format codes see MSDN - Standard Numeric Format Strings.
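A short sketch exercising a few of those format codes (the "N" and "C" outputs are culture-dependent; the comments assume an en-GB culture, as in the examples above):
int n = -70;
Console.WriteLine(n.ToString("D"));        // -70
Console.WriteLine(n.ToString("D8"));       // -00000070
Console.WriteLine((-1000).ToString("N"));  // -1,000.00
Console.WriteLine((-1000).ToString("C"));  // -£1,000.00 under en-GB
Console.WriteLine(70.ToString("X"));       // 46
Console.WriteLine(n.ToString("X"));        // FFFFFFBA (two's complement)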
When performing a cast operation like (string)30, you're basically telling the computer that "this object is similar enough to the object I'm casting to that I won't lose much (or any) data/functionality".
When casting a number to a string, there is a "loss" in data. A string can not perform mathematical operations, and it can't do number related things.
That is why you can do this:
int test1 = 34;
double test2 = (double)test1;
But not this:
int test1 = 34;
string test2 = (string)test1;
This example isn't perfect since (IIRC) the conversion here is implicit; however, the idea is that when converting to a double, you don't lose any information. That data type can still basically act the same whether it's a double or an int. The same can't be said of an int to a string.
Casting is (usually) only allowed when you won't lose much functionality after the conversion is done.
The .ToString() method is different from casting because it's just a method that returns a string data type.
Just another note: In general, if you want to do an explicit conversion like this (and I agree with many other answers here as to why it needs to be an explicit conversion) don't overlook the Convert type. It is designed for these sorts of primitive/simple type conversions.
I know that when starting in C# (and coming from C++) I found myself running into type casts that seemed like they should have worked. C++ is just a bit more liberal when it comes to this sort of thing, so in C# the designers wanted you to know when your type conversion were ambiguous or ill-advised. The Convert type then fills in the gaps by allowing you to explicitly convert and understand the side-effects.
Int doesn't have an explicit conversion operator to string. However, it does have Int32.ToString(). You could write your own extension method for this, if you really want to.
Because C# does not know how to convert an int to a string; the library method (ToString) does.
Int32 can't be cast to string because the C# compiler doesn't know how to convert from one type to the other.
Why is that, you will ask? The reason is simple: Int32 is a numerical type and string is, oh well, a type for characters. You may be able to convert from one numerical type to another numerical type (e.g. float to int, but be warned of a possible loss of accuracy).
If all possible casts were coded into the compiler, the compiler would become too slow and would certainly miss many of the possible casts for user-defined types, which is a major NO NO.
So the workaround is that any object in C# inherits a .ToString() function that knows how to handle every type, because .ToString() is specifically coded for the specific type and, as you guessed by now, returns a string.
After all, the Int32 type is some bits in memory (32 bits to be exact) and the string type is another pile of bits (how many depends on how much has been allocated); the compiler/runtime doesn't know anything by itself just from looking at that data. The .ToString() function accesses the specific metadata needed to convert from one to the other.
I just found the answer. ToString works fine on an int variable. You just have to make sure you add the parentheses, i.e.:
int intMyInt=32;
string strMyString = intMyInt.ToString();
VB.NET is perfectly capable of casting from an int to a string... in fact, so is Java. Why should C# have a problem with it? Sometimes it is necessary to use a number value in calculations and then display the result in string format. How else would you manage to display the result of a calculation in a TextBox?
The responses given here make no sense. There must be a way to cast an int as a string.
