This question already has answers here:
C# casting double to float error [duplicate]
(2 answers)
Closed 3 years ago.
I've just started working with C# and Visual Studio for college, and I'm struggling to use Math.Ceiling to round a float value up to the next integer before it's output.
Visual Studio says I'm missing a cast, but I don't really know where. It's probably really simple, but being new I don't really know where to start.
The final line shown is where I've got a problem.
I could just do with someone telling me where I'm going wrong here.
I tried wrapping the Math.Ceiling call in float.Parse, but apparently that doesn't work.
const float FencePanelWidth = 1.5f;
float GWidth;
float GLength;
float GPerimetre;
float FencePanelsNeed;
float FencePanelsNeed2;
Console.Write("");
Console.Write("");
GWidth = float.Parse(Console.ReadLine());
Console.Write("");
GLength = float.Parse(Console.ReadLine());
GPerimetre = (GLength * 2) + GWidth;
FencePanelsNeed = GPerimetre / FencePanelWidth;
FencePanelsNeed2 = Math.Ceiling(FencePanelsNeed);
If FencePanelsNeed were, say, 7.24, I'd want FencePanelsNeed2 to be 8.
The Math.Ceiling method has only two overloads:
public static decimal Ceiling (decimal d); - Docs.
public static double Ceiling (double a); - Docs.
In your case, the second overload is used (the float argument is implicitly converted to double), so the call returns a double.
What you should do is cast the returned value to int or float:
FencePanelsNeed2 = (int)Math.Ceiling(FencePanelsNeed); // Or:
//FencePanelsNeed2 = (float)Math.Ceiling(FencePanelsNeed);
If you cast it to an int, you might also declare your FencePanelsNeed2 as int instead of float.
Note that if FencePanelsNeed2 were declared as double, you wouldn't get that error in the first place because no cast would be needed. So, it only comes down to which type you want to use.
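Putting that together, here's a minimal sketch of the fix (the method name PanelsNeeded and the sample inputs are just for illustration):

```csharp
using System;

public class Program
{
    const float FencePanelWidth = 1.5f;

    // How many whole fence panels are needed to cover the perimeter.
    public static int PanelsNeeded(float width, float length)
    {
        float perimeter = (length * 2) + width;   // same formula as in the question
        float panels = perimeter / FencePanelWidth;
        // Math.Ceiling(panels) binds to the double overload and returns a double,
        // so an explicit cast is needed to store the result in an int.
        return (int)Math.Ceiling(panels);
    }

    public static void Main()
    {
        Console.WriteLine(PanelsNeeded(3f, 4f)); // (4*2 + 3) / 1.5 = 7.33..., rounded up to 8
    }
}
```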
Just cast it to an int and add 1 whenever there is a fractional part; that will always work:
using System;

public class Program
{
    const float FencePanelWidth = 7.24f;

    public static void Main()
    {
        var FencePanelsNeed2 = (int)FencePanelWidth < FencePanelWidth ? (int)FencePanelWidth + 1 : (int)FencePanelWidth;
        Console.WriteLine(FencePanelsNeed2);
    }
}
Try it for yourself.
I am writing unit tests that verify calculations in a database, and there is a lot of rounding and truncating that means figures are sometimes slightly off.
When verifying, I'm finding a lot of cases where tests should pass but are reported as failing: for instance, the expected figure is 1 and I'm getting 0.999999.
I mean, I could just round everything to an integer, but since I'm using a lot of randomized samples, eventually I'm going to get a pair like this:
10.5
10.4999999999
one is going to round to 10, the other will round to 11.
How should I solve this problem where I need something to be approximately correct?
Define a tolerance value (aka an 'epsilon' or 'delta'), for instance 0.00001, and then use it to compare the difference, like so:
if (Math.Abs(a - b) < delta)
{
    // Values are within the specified tolerance of each other...
}
You could use Double.Epsilon, but it is the smallest positive double and far too small to serve as a tolerance on its own; you would have to apply a multiplying factor.
Better still, write an extension method to do the same. We have something like Assert.AreSimiliar(a,b) in our unit tests.
Microsoft's Assert.AreEqual() method has an overload that takes a delta: public static void AreEqual(double expected, double actual, double delta)
NUnit also provides an overload to their Assert.AreEqual() method that allows for a delta to be provided.
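If you can't use those framework overloads, the extension-method idea mentioned above can be sketched like this (ShouldBeApproximately is an illustrative name, not a framework API):

```csharp
using System;

public static class AssertExtensions
{
    // Throws (i.e. fails the test) when the two values differ by more than delta.
    public static void ShouldBeApproximately(this double actual, double expected, double delta)
    {
        if (Math.Abs(expected - actual) > delta)
            throw new Exception($"Expected {expected} +/- {delta}, but got {actual}.");
    }
}
```

A test can then call 0.999999.ShouldBeApproximately(1.0, 0.00001) and only fail when the difference exceeds the tolerance.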
You could provide a function that includes a parameter for an acceptable difference between two values. For example
// close is good for horseshoes, hand grenades, nuclear weapons, and doubles
static bool CloseEnoughForMe(double value1, double value2, double acceptableDifference)
{
    return Math.Abs(value1 - value2) <= acceptableDifference;
}
And then call it
double value1 = 24.5;
double value2 = 24.4999;
bool equalValues = CloseEnoughForMe(value1, value2, 0.001);
If you wanted to be slightly more professional about it, you could call the function ApproximatelyEquals or something along those lines, and make it an extension method (which must be declared in a static class):
static bool ApproximatelyEquals(this double value1, double value2, double acceptableDifference)
I haven't checked in which MSTest version they were added, but as of v10.0.0.0 the Assert.AreEqual methods have overloads that accept a delta parameter and do approximate comparison.
E.g.:
https://msdn.microsoft.com/en-us/library/ms243458(v=vs.140).aspx
//
// Summary:
// Verifies that two specified doubles are equal, or within the specified accuracy
// of each other. The assertion fails if they are not within the specified accuracy
// of each other.
//
// Parameters:
// expected:
// The first double to compare. This is the double the unit test expects.
//
// actual:
// The second double to compare. This is the double the unit test produced.
//
// delta:
// The required accuracy. The assertion will fail only if expected is different
// from actual by more than delta.
//
// Exceptions:
// Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException:
// expected is different from actual by more than delta.
public static void AreEqual(double expected, double actual, double delta);
In NUnit, I like the clarity of this form:
double expected = 10.5;
double actual = 10.499999999;
double tolerance = 0.001;
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
One way to compare floating point numbers is to count how many representable floating-point values separate them. This approach is indifferent to the magnitude of the numbers, so you don't have to worry about the size of the "epsilon" mentioned in other answers.
A description of the algorithm can be found here (the AlmostEqual2sComplement function at the end), and here is my C# version of it.
UPDATE:
The provided link is outdated. The new version which includes some improvements and bugfixes is here
public static class DoubleComparerExtensions
{
    public static bool AlmostEquals(this double left, double right, long representationTolerance)
    {
        long leftAsBits = left.ToBits2Complement();
        long rightAsBits = right.ToBits2Complement();
        long floatingPointRepresentationsDiff = Math.Abs(leftAsBits - rightAsBits);
        return (floatingPointRepresentationsDiff <= representationTolerance);
    }

    private static unsafe long ToBits2Complement(this double value)
    {
        double* valueAsDoublePtr = &value;
        long* valueAsLongPtr = (long*)valueAsDoublePtr;
        long valueAsLong = *valueAsLongPtr;
        return valueAsLong < 0
            ? (long)(0x8000000000000000 - (ulong)valueAsLong)
            : valueAsLong;
    }
}
If you'd like to compare floats, change all double to float, all long to int and 0x8000000000000000 to 0x80000000.
With the representationTolerance parameter you can specify how big an error is tolerated. A higher value means a larger error is accepted. I normally use the value 10 as default.
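If you'd rather avoid unsafe code, the same idea can be sketched with BitConverter.DoubleToInt64Bits (this is a safe-code variant, not the code from the linked article):

```csharp
using System;

public static class SafeDoubleComparer
{
    // True when at most representationTolerance representable doubles separate left and right.
    public static bool AlmostEquals(this double left, double right, long representationTolerance)
    {
        long l = ToBits2Complement(left);
        long r = ToBits2Complement(right);
        return Math.Abs(l - r) <= representationTolerance;
    }

    private static long ToBits2Complement(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        // Map the sign-magnitude bit pattern onto a continuous two's-complement
        // scale so that adjacent doubles always differ by exactly 1.
        return bits < 0 ? (long)(0x8000000000000000 - (ulong)bits) : bits;
    }
}
```

For example, 0.1 + 0.2 and 0.3 differ by only a couple of representations, so they compare as almost equal with a tolerance of 10.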
The question was asking how to assert something was almost equal in unit testing. You assert something is almost equal by using the built-in Assert.AreEqual function. For example:
Assert.AreEqual(expected: 3.5, actual: 3.4999999, delta: 0.1);
This test will pass. Problem solved and without having to write your own function!
FluentAssertions provides this functionality in a way that is perhaps clearer to the reader.
result.Should().BeApproximately(expectedResult, 0.01m);
I created a validation function as below:
public static T getAsDigit<T>(this TextBox tb, float min, float max)
{
}
Most of the time the validation range is specified in integers, and that works fine. But when I try to pass in decimals, it gives me an error, something like "cannot convert double to float", and I have to change the definition to double.
I am new to C#; how can I pass in the digits as float without doing something unintuitive like Convert.ToSingle(1.3)?
My use case only requires 3 decimal places of precision, with values in the range 0.000 ~ 10.000. Is there any disadvantage to using float versus double in C#? I ask because I have used, and seen other people use, float a lot in SQL when decimal() is optional.
Use the f literal: getAsDigit(1.34f)
Or cast the value to float: getAsDigit((float)1.34)
You have to convert the double to float, and note that float.Parse only works on strings, so use a cast instead:
(float)x
The cleaner option would be to create a new variable and convert it to float there instead of at the call site, so something like this:
double x = 1.3;
var newFloat = (float)x;
I think you want to write a validation for the value of the TextBox.
You can upgrade your method to make it generic over comparable value types:
public static T getAsDigit<T>(this TextBox tb, T min, T max) where T : struct, IComparable<T>
{
    var valueConverted = default(T);
    try
    {
        valueConverted = (T)Convert.ChangeType(tb.Text, typeof(T));
    }
    catch (Exception e)
    {
        // do whatever you want here, e.g. rethrow
    }
    if (valueConverted.CompareTo(max) > 0)
        return max;
    if (valueConverted.CompareTo(min) < 0)
        return min;
    return valueConverted;
}
And you can simply pass the type you want.
string a = "10.5"; // suppose that a is TextBox.Text
var b = a.getAsDigit<float>(10, 11);   // returns 10.5f
var c = a.getAsDigit<decimal>(11, 12); // returns 11m (clamped to min)
var d = a.getAsDigit<double>(9, 10);   // returns 10d (clamped to max)
I'm developing a class library with C#, .NET Framework 4.7.1 and Visual Studio 2017 version 15.6.3.
I have this code:
public static T Add<T, K>(T x, K y)
{
    dynamic dx = x, dy = y;
    return dx + dy;
}
If I use with this code:
genotype[index] = (T)_random.Next(1, Add<T, int>(_maxGeneValue, 1));
genotype is a T[].
I get the error:
Argument 2: cannot convert from 'T' to 'int'
But if I change the code:
public static K Add<T, K>(T x, K y)
{
    dynamic dx = x, dy = y;
    return dx + dy;
}
I get the error:
CS0030 Cannot convert type 'int' to 'T'
When I get this error T is a byte.
How can I fix this error? Or maybe I can change random.Next to avoid doing the Add.
The minimal code:
public class MyClass<T>
{
    private T[] genotype;
    private T _maxGeneValue;

    public void MinimalCode()
    {
        Random _random = new Random();
        genotype = new T[] { 0, 0, 0 };
        int index = 0;
        _maxGeneValue = 9;
        genotype[index] = (T)_random.Next(1, Add<T, int>(_maxGeneValue, 1));
    }
}
It is a generic class because the genotype array can be of bytes or integers or floats. And I use the random to generate random values between 1 and _maxGeneValue. I need to add one because the maxValue in Random is exclusive.
The elements in the genotype array could be any of the built-in numeric types, as far as I know. And I don't want to create a class for each of the types I'm going to use; declaring the array with the biggest type, i.e. long[], would be a waste of space.
T is a generic class because the genotype array can be of bytes or integers or floats
That is not what generic means. Generic means that an unbounded number of types is valid. You only have three, which does not qualify for a generic solution at all.
A major lesson learned here is: if the type system fights against you, stop and think it over, you are probably doing something wrong; the cast to dynamic should have been a huge red flag.
The solution to your problem is to simply overload. You have 3 valid types: byte, int and float. I'm guessing you want to avoid losing information so the following should hold:
If adding integral types, use the "smallest" possible type (this can be potentially "unsafe" in the sense that it can overflow).
If one of the operands is a float, use floating point arithmetics.
OK, so we need:
public byte Add(byte left, byte right) { .... }
public int Add(int left, int right) { .... }
public float Add(float left, float right) { .... }
And now think about something else: do you want the sum of two bytes to overflow? Or do you want to return an int? What I mean is: what should byte 250 + byte 10 return? A byte or an int? If it's an int, remove the first overload. Now think about the same issue with ints. If you don't want any overflows, then remove the second overload too.
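A sketch of that overload set, choosing to return int for byte addition so the 250 + 10 case doesn't overflow (the class name GeneMath is just for illustration):

```csharp
using System;

public static class GeneMath
{
    // One overload per supported element type; no generics, no dynamic.
    public static int Add(int left, int right) => left + right;
    public static float Add(float left, float right) => left + right;

    // byte + byte is promoted to int by C# anyway, so returning int
    // means 250 + 10 yields 260 instead of wrapping around.
    public static int Add(byte left, byte right) => left + right;

    public static void Main()
    {
        Console.WriteLine(Add((byte)250, (byte)10)); // 260
        Console.WriteLine(Add(1.5f, 2f));            // 3.5
    }
}
```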
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 5 years ago.
I have the following using MathNet library where child1 is -4.09 and child2 is -4.162. The result after Expression.Real((double1 - double2)) returns 0.072000000000000064. It should be 0.072. Can anyone please help me understand what is going on?
private static Expression GetSimplifiedExpression(Expression child1, Expression child2, LaTeXTokenType tokenType)
{
    double double1 = Evaluate.Evaluate(null, child1).RealValue;
    double double2 = Evaluate.Evaluate(null, child2).RealValue;
    return Expression.Real(double1 - double2);
}
First, let's convert decimal to binary:
-4.09 = -100.00010111000010100011110101110000101000111101011100001010001111...
-4.162 = -100.00101001011110001101010011111101111100111011011001000101101000...
Then, subtract those two binaries. The result in binary is:
0.00010010011011101001011110001101010011111101111100111011011001...
which is approximately equal to decimal 0.07199999999999999998.
It is not exactly 0.072000000000000064, but I think you get the idea behind it. If you want the exact result, you could convert the doubles to decimal before subtracting:
var decimal1 = (decimal) double1;
var decimal2 = (decimal) double2;
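For example (a sketch; the inputs are the values from the question, and the double result comment is the value the question reports):

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        double double1 = -4.09;
        double double2 = -4.162;
        // double arithmetic carries the binary representation error along:
        Console.WriteLine(double1 - double2); // 0.072000000000000064, not exactly 0.072

        // Converting to decimal first rounds each value back to its short
        // decimal form, so the subtraction comes out exact:
        decimal decimal1 = (decimal)double1;
        decimal decimal2 = (decimal)double2;
        Console.WriteLine(decimal1 - decimal2); // 0.072
    }
}
```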
I just discovered Mathf (wonderful tool). Like any new tool, we're going through some... growing pains. I'm working on a unit conversion script. Here's my code. I thought Mathf.Pow requires two floats, in this case 10 and 6, but apparently that's frowned upon. Any ideas?
Mathf megagram;
void Start () {
megagram = Mathf.Pow(10,6);
Mathf.Pow returns a float, while you are assigning the return value (a float) to megagram, which is declared with type Mathf:
float megagram;

void Start ()
{
    megagram = Mathf.Pow(10f, 6f);
}
If you write 6 or 10, the compiler treats them as Int32 literals. For float literals, write the f suffix: 6f, 10f.
Also, Mathf.Pow returns a float, not a Mathf.
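Outside Unity, .NET's System.MathF.Pow behaves the same way (UnityEngine.Mathf itself only runs inside Unity), so the fixed pattern can be sketched like this:

```csharp
using System;

class PowDemo
{
    static void Main()
    {
        // Declare the variable as float; Pow returns a float, not a "Mathf".
        float megagram = MathF.Pow(10f, 6f);
        Console.WriteLine(megagram); // approximately 1000000
    }
}
```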