Clamping and rounding a value during implicit conversion - C#

I've developed a custom integral type. Here is its definition in C#:
public struct PitchClass
{
    private readonly int value;

    private PitchClass(int value)
    {
        this.value = CanonicalModulus.Calculate(value, 12);
    }

    public static implicit operator PitchClass(int value)
    {
        return new PitchClass(value);
    }

    public static implicit operator PitchClass(double value)
    {
        return new PitchClass((int)Math.Round(value));
    }

    public static implicit operator int(PitchClass pitchClass)
    {
        return pitchClass.value;
    }
}
PitchClass is essentially an int whose values are constrained to the range [0, 11].
As you can see from the C# code, both int and double values can be implicitly converted to PitchClass using a canonical modulus operation:
PitchClass pitchClass = -3;
Console.WriteLine(pitchClass); // 9
The double value is also rounded during implicit conversion:
PitchClass pitchClass = -3.4d;
Console.WriteLine(pitchClass); // 9
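CanonicalModulus isn't shown above; a helper along these lines would produce the results in these examples (a sketch, not necessarily the original implementation):
// Hypothetical sketch of the CanonicalModulus helper used by PitchClass:
// unlike the % operator, the result is always in [0, divisor).
internal static class CanonicalModulus
{
    public static int Calculate(int value, int divisor)
    {
        int remainder = value % divisor; // may be negative for negative input
        return remainder < 0 ? remainder + divisor : remainder;
    }
}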
I couldn't find other examples of custom data types that do this much work to the value being converted.
Why? Is it bad practice? If so, is there another way to avoid having to validate arguments for every PitchClass parameter in every method?
Thanks

It is not bad practice to create your own basic type and make it convertible to other basic data types, nor is it bad practice to define implicit and explicit conversions.
Look at the implementation of Int32 in the .NET Framework. This structure implements many interfaces to make it convertible to other structure types, to format it nicely, and to do a few other things.
If you intend to use this structure heavily, implementing IConvertible, IComparable and IEquatable (and overriding GetHashCode() and Equals()) is a good idea, because almost all the native data types do so.
The Complex type is given as an example of a custom data type in the documentation for the IConvertible interface, and it defines many different conversions to and from other types.
Also, the built-in explicit conversion from double to int does much the same thing you are doing: it is a narrowing conversion that may incur data loss.
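A sketch of what implementing some of those interfaces could look like for PitchClass (only the added members are shown; none of this is code from the question):
using System;

public struct PitchClass : IEquatable<PitchClass>, IComparable<PitchClass>
{
    private readonly int value;

    // (constructor and conversion operators from the question omitted)

    public bool Equals(PitchClass other)
    {
        return value == other.value;
    }

    public override bool Equals(object obj)
    {
        return obj is PitchClass && Equals((PitchClass)obj);
    }

    public override int GetHashCode()
    {
        return value;
    }

    public int CompareTo(PitchClass other)
    {
        return value.CompareTo(other.value);
    }

    public override string ToString()
    {
        return value.ToString(); // formats as the underlying value, e.g. "9"
    }
}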

Related

Can implicit conversion be used to satisfy supertype argument?

I have an interface PropertyReference, an implementation Literal, and an implicit conversion operator from int to Literal. However, whenever I try to use an int where a PropertyReference is expected, the compiler complains that it cannot convert from int to PropertyReference.
The compiler does not let me add a conversion operator to PropertyReference because interfaces cannot contain conversion, equality or inequality operators. I have a PropertyReferenceExtension static class and cannot put the conversion operator there because static classes cannot contain user-defined operators.
Is there a way to perform an implicit conversion to a subtype to match a supertype, or must the conversion be explicit? PropertyReference and Literal are in the same namespace, and the class where the conversion is attempted is already using FooBarNamespace.
The statement that fails to compile:
Assert.IsTrue(BigDouble.Equals(Min.Of(1, 2, 3), 1));
Signature of Min.Of:
public static Min Of(params PropertyReference[] children)
Signature of the implicit conversion:
public static implicit operator Literal(int value) => new Literal(new BigDouble(value));
Based on Processing of user-defined implicit conversions (https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/conversions#processing-of-user-defined-implicit-conversions), an implicit conversion will only be found by the compiler if it is declared in the source type, the destination type, or one of their base types.
Since you are trying to convert from an int to a PropertyReference, only those types can declare the implicit conversion. But PropertyReference is an interface, and interfaces are explicitly not allowed to declare conversion operators, though that does seem like a rather arbitrary restriction.
If you were to convert PropertyReference to an abstract base class, you could define an implicit operator that converts an int to a PropertyReference by creating a Literal.
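A sketch of that approach (Literal and BigDouble are assumed to look roughly as the question describes):
// Sketch: PropertyReference as an abstract base class so it can carry the conversion itself.
public abstract class PropertyReference
{
    // The compiler looks for user-defined conversions on the destination type,
    // so declaring it here makes "PropertyReference p = 5;" compile.
    public static implicit operator PropertyReference(int value)
    {
        return new Literal(new BigDouble(value));
    }
}

public class Literal : PropertyReference
{
    public Literal(BigDouble value)
    {
        Value = value;
    }

    public BigDouble Value { get; private set; }
}
With that in place, Min.Of(1, 2, 3) compiles, because in the expanded params form each int argument is implicitly converted to PropertyReference.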
Add an overload like the one below?
public static Min Of(params PropertyReference[] children)
{
    // existing implementation
}

public static Min Of(params Literal[] children)
{
    return Of((PropertyReference[])children);
}
With this overload, Min.Of(1, 2, 3) compiles: each int argument is converted to Literal by your implicit operator, and the Literal[] is then passed on as a PropertyReference[].

Why do we need to type cast an enum in C#

I have an enum like:
public enum Test : int
{
    A = 1,
    B = 2
}
So here I know my enum is an int type, but if I want to do something like the following:
int a = Test.A;
this doesn't work.
If I have a class like:
public class MyTest
{
    public static int A = 1;
}
I can say:
int a = MyTest.A;
Here I don't need to cast A to int explicitly.
"So here I know my enum is an int type"
No, it's not. It has an underlying type of int, but it's a separate type. Heck, that's half the point of having enums in the first place - that you can keep the types separate.
When you want to convert between an enum value and its numeric equivalent, you cast - it's not that painful, and it keeps your code cleaner in terms of type safety. Basically it's one of those things where the rarity of it being the right thing to do makes it appropriate to make it explicit.
EDIT: One oddity that you should be aware of is that there is an implicit conversion from the constant value 0 to the enum type:
Test foo = 0;
In fact, in the MS implementation, it can be any kind of constant 0:
Test surprise = 0.0;
That's a bug, but one which it's too late to fix :)
I believe the reason for this implicit conversion was to make it simpler to check whether any bits are set in a flags enum, and to allow other comparisons against "the 0 value". Personally I'm not a fan of that decision, but it's worth at least being aware of it.
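For illustration, this is the kind of comparison that implicit conversion enables (the Options enum here is made up for the example):
// Sketch: comparing a [Flags] enum against "the 0 value" without a cast.
[System.Flags]
public enum Options { None = 0, A = 1, B = 2 }

class FlagsDemo
{
    static void Main()
    {
        Options opts = Options.A | Options.B;

        // The constant 0 converts implicitly to Options, so no cast is needed here:
        bool anySet = (opts & Options.A) != 0;

        System.Console.WriteLine(anySet); // True
    }
}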
"The underlying type specifies how much storage is allocated for each enumerator. However, an explicit cast is needed to convert from enum type to an integral type".
With your updated example:
public class MyTest
{
    public static int A = 1;
}
And usage:
int a = MyTest.A;
That's not how enums look. Enums look more like (comments are places where we differ from a real enum):
public struct MyTest /* Of course, this isn't correct, because we'll inherit from System.ValueType. An enum should inherit from System.Enum */
{
    private int _value; /* Should be marked to be treated specially */

    private MyTest(int value) /* Doesn't need to exist, since there's some CLR fiddling */
    {
        _value = value;
    }

    public static explicit operator int(MyTest value) /* CLR provides conversions automatically */
    {
        return value._value;
    }

    public static explicit operator MyTest(int value) /* CLR provides conversions automatically */
    {
        return new MyTest(value);
    }

    public static readonly MyTest A = new MyTest(1); /* Should be const, not readonly, but we can't do a const of a custom type in C#. Also, is magically implicitly converted without calling a constructor */
    public static readonly MyTest B = new MyTest(2); /* Ditto */
}
Yes, you can easily get to the "underlying" int value, but the values of A and B are still strongly typed as being of type MyTest. This makes sure you don't accidentally use them in places where they're not appropriate.
The enum values are not of type int; int is only the underlying type of the enum. The values are technically ints at runtime, but logically (from the perspective of the C# language) they are not. int (System.Int32) is the default underlying type of all enums if you don't explicitly specify another one.
Your enum is of type Test. It is not int just because your enum has integer values.
You can cast your enum to get the int value:
int a = (int) Test.A;

Explicit & Implicit Operator with Numeric Types & unexpected results

I have never done any extensive work with overloading operators, especially the implicit and explicit conversions.
However, I have several numeric parameters that are used frequently, so I am creating a struct as a wrapper around a numeric type to strongly type these parameters. Here's an example implementation:
public struct Parameter
{
    private Byte _value;

    public Byte Value { get { return _value; } }

    public Parameter(Byte value)
    {
        _value = value;
    }

    // other methods (GetHashCode, Equals, ToString, etc.)

    public static implicit operator Byte(Parameter value)
    {
        return value._value;
    }

    public static implicit operator Parameter(Byte value)
    {
        return new Parameter(value);
    }

    public static explicit operator Int16(Parameter value)
    {
        return value._value;
    }

    public static explicit operator Parameter(Int16 value)
    {
        return new Parameter((Byte)value);
    }
}
As I was experimenting with my test implementation to get the hang of the explicit and implicit operators, I tried to explicitly cast an Int64 to my Parameter type, and to my surprise it did not throw an exception; even more surprisingly, it just truncated the number and moved on. I tried excluding the custom explicit operator and it still behaved the same.
public void TestCast()
{
    try
    {
        var i = 12000000146;
        var p = (Parameter)i;
        var d = (Double)p;

        Console.WriteLine(i); // Writes 12000000146
        Console.WriteLine(p); // Writes 146
        Console.WriteLine(d); // Writes 146
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message); // Code not reached
    }
}
So I repeated my experiment with a plain Byte in place of my struct and it had the same exact behavior, so obviously this is expected, but I thought an explicit cast that results in a loss of data would throw an exception.
When the compiler is analyzing an explicit user-defined conversion it is allowed to put an explicit built-in conversion on "either side" (or both) of the conversion. So, for example, if you have a user-defined conversion from int to Fred, and you have:
int? x = whatever;
Fred f = (Fred)x;
then the compiler reasons "there is an explicit conversion from int to Fred, so I can make an explicit conversion from int? to int, and then convert the int to Fred."
In your example, there is a built-in explicit conversion from long to short, and there is a user-defined explicit conversion from short to Parameter, so converting long to Parameter is legal.
The same is true of implicit conversions; the compiler may insert built-in implicit conversions on either side of a user-defined implicit conversion.
The compiler never chains two user defined conversions.
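A small illustration of that rule, using throwaway types A, B and C:
// Sketch: the compiler applies at most one user-defined conversion per conversion.
struct A { public static implicit operator B(A a) { return new B(); } }
struct B { public static implicit operator C(B b) { return new C(); } }
struct C { }

class ChainDemo
{
    static void Main()
    {
        A a = new A();
        B b = a;        // fine: a single user-defined conversion
        // C c1 = a;    // does not compile: the compiler will not chain A -> B -> C
        C c2 = (C)(B)a; // fine: each cast involves only one user-defined conversion
    }
}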
Building your own explicit conversions correctly is a difficult task in C#, and I encourage you to stop attempting to do so until you have a thorough and deep understanding of the entire chapter of the specification that covers conversions.
For some interesting aspects of chained conversions, see my articles on the subject:
Chained user-defined explicit conversions in C#
Chained user-defined explicit conversions in C#, Part Two
Chained user-defined explicit conversions in C#, Part Three
This goal:
so I am creating a struct as a wrapper around a numeric type to strongly type these parameters
And this code:
public static implicit operator Byte(Parameter value)
{
    return value._value;
}

public static implicit operator Parameter(Byte value)
{
    return new Parameter(value);
}
Are in total contradiction: by adding two-way implicit operators you annul any type safety the wrapper might bring.
So drop the implicit conversions. You can change them to explicit ones.
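For example, an explicit, range-checked conversion along these lines (inside the Parameter struct; the operator and its message are illustrative, not part of the question's code):
// Sketch: an explicit, checked conversion instead of the implicit ones.
public static explicit operator Parameter(Int64 value)
{
    if (value < Byte.MinValue || value > Byte.MaxValue)
        throw new OverflowException("Value " + value + " does not fit in a Parameter.");

    return new Parameter((Byte)value);
}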

Overloading the "Set to equal to" operator

I was reading about Business Primitives on CodeBetter.com and was toying around with the idea.
Taking his example of Money, how would one implement this in a way that it can be used similarly to regular value types?
What I mean by that is do this:
Money myMoney = 100.00m;
Instead of:
Money myMoney = new Money(100.00m);
I understand how to overload all the operators to allow math functionality etc., but I don't know what needs to be overridden to allow what I'm trying to do.
The idea of this is to minimize code changes required when implementing the new type, and to keep the same idea that it is a primitive type, just with a different value type name and business logic functionality.
Ideally I would have inherited from Integer/Float/Decimal or whatever was required and overridden as needed; however, that is obviously not possible with structures.
You could provide an implicit cast operator from decimal to Money like so:
class Money
{
    public decimal Amount { get; set; }

    public Money(decimal amount)
    {
        Amount = amount;
    }

    public static implicit operator Money(decimal amount)
    {
        return new Money(amount);
    }
}
Usage:
Money money = 100m;
Console.WriteLine(money.Amount);
Now, what is happening here is not that we are overloading the assignment operator; that is not possible in C#. Instead, what we are doing is providing an operator that can implicitly cast a decimal to Money when necessary. Here we are trying to assign the decimal literal 100m to an instance of type Money. Then, behind the scenes, the compiler will invoke the implicit cast operator that we defined and use that to assign the result of the cast to the instance money of Money. If you want to understand the mechanisms of this, read §7.16.1 and §6.1 of the C# 3.0 specification.
Please note that types that model money should use decimal under the hood, as shown above.
Please go through this link for more clarity about operator overloading:
http://www.csharphelp.com/2006/03/c-operator-overloading/
The assignment operator cannot be overloaded in C#, but for this kind of assignment you can define an implicit cast operator:
(inside the Money class)
public static implicit operator Money(double value)
{
    return new Money(value);
}
Note: I recommend using decimal for accurate money calculations.

Is there a C# generic constraint for "real number" types? [duplicate]

Possible Duplicate:
C# generic constraint for only integers
Greets!
I'm attempting to set up a Cartesian coordinate system in C#, but I don't want to restrict myself to any one numerical type for my coordinate values. Sometimes they could be integers, and other times they could be rational numbers, depending on context.
This screams "generic class" to me, but I'm stumped as to how to constrain the type to both integral and floating-point types. I can't seem to find a class that covers any concept of real numbers...
public class Point<T> where T : [SomeClassThatIncludesBothIntsandFloats?]
{
    T myX, myY;

    public Point(T x, T y)
    {
        myX = x;
        myY = y;
    }
}
Point<int> pInt = new Point<int>(5, -10);
Point<float> pFloat = new Point<float>(3.14159f, -0.2357f);
If I want this level of freedom, am I electing for a "typeof(T)" nightmare when it comes to calculations inside my classes, weeding out bools, strings, objects, etc? Or worse, am I electing to make a class for each type of number I want to work with, each with the same internal math formulae?
Any help would be appreciated. Thanks!
You can't define such a constraint, but you could check the type at runtime. That won't help you for doing calculations though.
If you want to do calculations, something like this would be an option:
class Calculations<T, S> where S : Calculator<T>, new()
{
    Calculator<T> _calculator = new S();

    public T Square(T a)
    {
        return _calculator.Multiply(a, a);
    }
}

abstract class Calculator<T>
{
    public abstract T Multiply(T a, T b);
}

class IntCalculator : Calculator<int>
{
    public override int Multiply(int a, int b)
    {
        return a * b;
    }
}
Likewise, define a FloatCalculator and any operations you need. It's not particularly fast, though faster than the C# 4.0 dynamic construct.
var calc = new Calculations<int, IntCalculator>();
var result = calc.Square(10);
A side effect is that you will only be able to instantiate Calculations<T, S> if the type you pass has a matching Calculator<T> implementation, so you don't have to do runtime type checking.
This is basically what Hejlsberg was referring to in this interview where the issue is discussed. Personally I would still like to see some kind of base type :)
This is a very common question; if you are using .NET 3.5, there is a lot of support for this in MiscUtil, via the Operator class, which supports inbuilt types and any custom types with operators (including "lifted" operators); in particular, this allows use with generics, for example:
public static T Sum<T>(this IEnumerable<T> source)
{
    T sum = Operator<T>.Zero;
    foreach (T value in source)
    {
        if (value != null)
        {
            sum = Operator.Add(sum, value);
        }
    }
    return sum;
}
Or, for another example: Complex<T>.
This is a known problem, since none of the arithmetic types derive from a common base class, so you cannot constrain to them.
The only thing you could do is
where T : struct
but that's not exactly what you want.
Here is a link to the specific issue.
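A quick illustration of why struct alone is too loose (Point<T> here is just the class from the question with that constraint applied):
// Sketch: "where T : struct" admits any value type, not just numbers.
public class Point<T> where T : struct
{
    public T X;
    public T Y;
}

class ConstraintDemo
{
    static void Main()
    {
        var intended = new Point<int>();                // what we want
        var allowed = new Point<bool>();                // also compiles...
        var alsoAllowed = new Point<System.DateTime>(); // ...and so does this
    }
}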
Arithmetic types like int, double and decimal should implement IArithmetic<T>.
You actually can do this, although the solution is tedious to set up and can be confusing to devs who are not aware of why it was done (so if you elect to do it, document it thoroughly!)...
Create two structs, called, say, MyInt and MyDecimal, which act as facades over the core CTS types Int32 and Decimal (they contain an internal field of the respective type). Each should have a constructor that takes an instance of the core CTS type as an input parameter.
Make each one implement an empty interface called INumeric.
Then, in your generic methods, make the constraint based upon this interface.
Downside: everywhere you want to use these methods you have to construct an instance of the appropriate custom type instead of the core CTS type, and pass the custom type to the method.
NOTE: coding the custom structs to properly emulate all the behavior of the core CTS types is the tedious part... You have to implement several built-in CLR interfaces (IComparable, etc.) and overload all the arithmetic and boolean operators...
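A stripped-down sketch of that setup (INumeric and MyInt are the names from this answer; the members shown are only a small fraction of what a full facade would need):
// Sketch: marker interface plus a facade struct over Int32, used to constrain Point<T>.
public interface INumeric { }

public struct MyInt : INumeric
{
    private readonly int _value;

    public MyInt(int value) { _value = value; }

    public static MyInt operator +(MyInt a, MyInt b)
    {
        return new MyInt(a._value + b._value);
    }

    public override string ToString() { return _value.ToString(); }
}

public class Point<T> where T : INumeric
{
    public T X;
    public T Y;

    public Point(T x, T y) { X = x; Y = y; }
}

class FacadeDemo
{
    static void Main()
    {
        // Point<int> no longer compiles; callers must wrap values in the facade type.
        var p = new Point<MyInt>(new MyInt(5), new MyInt(-10));
        System.Console.WriteLine(p.X + p.Y); // -5
    }
}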
You can get closer by specifying a few more constraints:
public class Point<T> where T : struct, IComparable, IFormattable, IConvertible,
                                IComparable<T>, IEquatable<T>
{
}
DateTime also satisfies this set of constraints, and I'm not sure you can narrow it down much further using only framework interfaces. Anyway, this only solves part of the problem: to do basic numeric operations you will have to wrap your numeric types and use generic methods instead of the standard operators. See this SO question for a few options.
This might be helpful. You have to use a generic class to achieve what you want.
C# doesn't currently allow constraining a type parameter to specific value types such as the numeric ones. I asked a related question not too long ago:
Enum type constraints in C#
Would this not lend itself to having separate classes implementing IPoint?
Something like:
public interface IPoint<T>
{
    T X { get; set; }
    T Y { get; set; }
}

public class IntegerPoint : IPoint<int>
{
    public int X { get; set; }
    public int Y { get; set; }
}
As the calculations will have to differ in each implementation anyway, right?
Dan#
