Round double in LINQ to Entities - C#

The Math.Round function isn't supported by LINQ to Entities (by which I mean LINQ queries against an Entity Framework DbSet), but I really need to round the double in the query to filter lots of rows according to the user input.
The user input has 4 digits after the decimal point, and the doubles in the database can have any precision.
Is there a way to round double in the query?
UPDATE:
Basically the user enters some number in the filter of a table column. Let's say it's the weight of fruits.
So for example the user enters a weight of 4.2152, and the column must filter to all the fruits whose weight rounds to that, such as 4.21515 or 4.215212.
And in the database there are many fruits whose weight is something like 4.21514543543643.
RESULT
So after a day-long struggle I decided to use a range condition, although it's not quite a solution. If the user enters 4.2152, the range filters with the condition greater than 4.21515, but that filters out a value like 4.215149 which would otherwise be rounded to 4.2152.
The problem is solved, but not exactly as needed :(

Instead of trying to round the data on your server, try using boundaries instead.
For your example, you basically want all the fruits whose weight is between 4.2152 and 4.2153.
The exact algorithm will depend on your specific case (do you always want a precision of 4 decimals? What exact datatype do you use? etc.), so it's up to you.
But it will look something like this:
double lowerBound = userInput; // 4.2152
double precision = 0.0001;
double upperBound = userInput + precision;
var query = context.Fruits.Where(f => f.Weight >= lowerBound && f.Weight < upperBound); // context.Fruits being your DbSet<Fruit>
Also keep in mind that floating-point arithmetic can sometimes surprise you. Depending on your use case, this + 0.0001 might not be exactly what you want.
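As a concrete sketch of the boundary approach, here it is against an in-memory list (a stand-in for the EF query; with Entity Framework you would run the same `Where` against your DbSet). Note this variant uses half-step bounds, `input ± 0.00005`, which match round-half-up semantics rather than the simple `+ 0.0001` above — a judgment call, not the answerer's exact code:

```csharp
using System;
using System.Linq;

class BoundsDemo
{
    static void Main()
    {
        double userInput = 4.2152;
        double halfStep = 0.00005;                 // half of the 0.0001 precision
        double lowerBound = userInput - halfStep;  // 4.21515
        double upperBound = userInput + halfStep;  // 4.21525

        // Stand-in for the Weight column; only values that round to 4.2152
        // at 4 decimals fall inside [lowerBound, upperBound).
        var weights = new[] { 4.215212, 4.21518, 4.2149, 4.21514 };
        var matches = weights
            .Where(w => w >= lowerBound && w < upperBound)
            .ToArray();

        Console.WriteLine(matches.Length); // 2 (4.215212 and 4.21518)
    }
}
```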

Updated answer:
You can try to use Canonical Functions.
There are a lot of functions there to help you with math.
Maybe it can help. Documentation
Old answer:
Try using .AsEnumerable() before the Where clause.
It should work.
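A minimal sketch of what `.AsEnumerable()` buys you, using a plain array as a stand-in for the EF query. Everything after the switch runs as LINQ to Objects in memory, so `Math.Round` is allowed there — at the cost of fetching every row from the database first:

```csharp
using System;
using System.Linq;

class AsEnumerableDemo
{
    static void Main()
    {
        // Stand-in for an EF query; with a real DbSet, AsEnumerable() marks
        // the point where evaluation moves client-side.
        var weights = new[] { 4.215212, 4.21518, 4.2149 };

        var matches = weights
            .AsEnumerable()                            // switch to LINQ to Objects
            .Where(w => Math.Round(w, 4) == 4.2152)    // fine in memory
            .ToArray();

        Console.WriteLine(matches.Length); // 2
    }
}
```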

Related

Number value from Scientific Notation to Decimal Notation

I have a lot of data in scientific notation. When I load this data into my double variable, everything works fine (in VS I see the value 0.00000000022). But when I multiply this number by 1000000 I get an unrounded value (0.00021999999999999998).
I must multiply this value because I use it for a selection filter. After the selection filter is applied, I divide the data back to my raw format: 0.00021999999999999998 / 1000000 = 0.00000000022.
2.20E-10 = 0.00000000022; 0.00000000022 * 1000000 = 0.00021999999999999998
Expected value is this:
0.00021999999999999998 => 0.00022
When I use a similar number, for example 2.70E-10, after multiplying I get 0.00027 (in this case the conversion works fine).
Values are converted only for use in the selection menu, so that no unnecessary zeros are shown and the label indicates which unit they represent (for this example, from ohms to microohms in the select box).
Is there any way to correctly convert these values?
I use LINQ to convert these values, like this:
var x = y.Select(s => s.Resistance * 1000000).Distinct();
The problem is floating-point numbers; a good article about this is link.
Use the decimal type instead of double for a quick fix :)
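A small sketch contrasting the two types, using the literal values from the question. `decimal` stores base-10 digits, so the multiplication is exact:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // double: 2.2E-10 has no exact binary representation, and the tiny
        // error becomes visible after multiplying by 1,000,000.
        double d = 0.00000000022;
        Console.WriteLine(d * 1000000); // 0.00021999999999999998 (as reported)

        // decimal: the same arithmetic is exact (trailing zeros may print,
        // because decimal preserves the scale of the operands).
        decimal m = 0.00000000022m;
        Console.WriteLine(m * 1000000m == 0.00022m); // True
    }
}
```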

Culture-Based String Formatting For Decimal

I've got a decimal value, which is stored in my database (SQL Server 2008) with a precision of 13, and a scale of 10.
Examples:
10.6157894734
68.0750000000
96.8723684210
Basically, the numbers represent a "score" out of 100.
Now, I want to display/represent this value in a different way, depending on the culture.
For example, in AU, I want to display the value out of 100, rounded to 2 decimal places.
So in the above example:
10.62
68.08
96.87
But in the US, I want to display the value out of 10, rounded to 1 decimal place.
So in the above example:
1.1
6.8
9.7
Can this be done with a resource file, e.g. by doing something like:
return score.ToString(Resources.Global.ScoreFormat);
Where it could be stored as "#.##" in en-US, but "#.#" in en-AU?
I'm pretty sure it can't, since I'm not only rounding, but transforming using math (e.g. value / 10 for AU). But I thought I'd ask the question.
I'm trying to avoid an ugly if statement which checks the current culture and does the math/rounding manually.
You will need to put a modifier and a format string in your resources so that you can do something like
return (score * Resources.Global.ScoreModifier).ToString(Resources.Global.ScoreFormat);
Accepting Phil's answer, but this is the actual modifier/format for anyone who cares:
Code:
var formattedScore = (score / Convert.ToInt32(Global.ScoreModifier)).ToString(Global.ScoreFormat);
Global.resx
ScoreFormat = n1
ScoreModifier = 10
Global.en-au.resx
ScoreFormat = n2
ScoreModifier = 1
Seems to work... anyone spot a problem?
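A runnable sketch of the modifier/format idea. `SettingsFor` here is a hypothetical hard-coded stand-in for the `Global.resx` lookups; in the real code the values come from the culture-specific resource files:

```csharp
using System;
using System.Globalization;

class ScoreFormatDemo
{
    // Stand-in for the Global.resx / Global.en-au.resx lookups.
    static (int Modifier, string Format) SettingsFor(string cultureName) =>
        cultureName == "en-AU" ? (1, "n2") : (10, "n1");

    static string FormatScore(decimal score, string cultureName)
    {
        var culture = CultureInfo.GetCultureInfo(cultureName);
        var (modifier, format) = SettingsFor(cultureName);
        return (score / modifier).ToString(format, culture);
    }

    static void Main()
    {
        decimal score = 96.8723684210m;
        Console.WriteLine(FormatScore(score, "en-AU")); // 96.87 (out of 100)
        Console.WriteLine(FormatScore(score, "en-US")); // 9.7   (out of 10)
    }
}
```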

Division and rounding of BigInteger

I'm trying to create a collection of decimals rounded to 2 decimal places. However the collections have some huge numbers, so I have to use BigInteger.
The specifics:
- Got a collection which has BigIntegers
- Got another collection of BigIntegers
- Got a third collection of BigIntegers
- I have to create a collection which has average value of the above 3 collections, with values rounded to 2 decimal places.
i.e. if collection1 has {2,3,4}, collection2 has {4,5,5} and collection3 has {5,3,2}, I should create a 4th collection which has {3.67, 3.67, 3.67}
For this I'm using this code:
BigInteger divisor = new BigInteger(3);
var averages = collection1.Zip(collection2, BigInteger.Add)
                          .Zip(collection3,
                               (xy, z) => BigInteger.Divide(BigInteger.Add(xy, z), divisor));
However, no decimals appear. I'm not sure whether BigInteger can hold only integer values, and not decimals.
Can you please suggest a solution for this?
Note: It has to be LINQ based as the collections are pretty huge with some big values(and hence biginteger).
Well, you're not getting any decimal values because BigInteger only represents integers.
Is decimal big enough to hold the number you're interested in?
If not, you might want to consider multiplying everything by 100, and fixing the formatting side so that "1500" is displayed as "15.00" etc. You'd still need to do a bit of work to end up with ".67" instead of ".66" for a two-thirds result, as that would be the natural result of the division when it's truncated instead of rounded.
The residue after division by three is going to be either .00, .33, or .67. You should be able to determine which of those three values is appropriate. I'm not sure how you will want your collection to store things, however, given that the numerical types that support fractions won't be able to hold your result, unless you define a "big integer plus small fraction" type or store your numbers as strings.
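A sketch combining the two suggestions above: scale by 100, divide with `BigInteger.DivRem`, and round the remainder up when it is at least half the divisor so a two-thirds result ends in .67 rather than .66. The string formatting is one possible choice and assumes non-negative values:

```csharp
using System;
using System.Linq;
using System.Numerics;

class BigIntegerAverageDemo
{
    // Average of three values to two decimal places using only integer math.
    // Assumes non-negative inputs (the formatting below doesn't handle signs).
    static string AverageToTwoDecimals(BigInteger a, BigInteger b, BigInteger c)
    {
        BigInteger scaled = (a + b + c) * 100;               // shift two decimal places left
        BigInteger q = BigInteger.DivRem(scaled, 3, out BigInteger r);
        if (r * 2 >= 3) q += 1;                              // round half up on the remainder
        return $"{q / 100}.{(q % 100):D2}";                  // reinsert the decimal point
    }

    static void Main()
    {
        var c1 = new BigInteger[] { 2, 3, 4 };
        var c2 = new BigInteger[] { 4, 5, 5 };
        var c3 = new BigInteger[] { 5, 3, 2 };

        var averages = c1.Zip(c2, (x, y) => (x, y))
                         .Zip(c3, (xy, z) => AverageToTwoDecimals(xy.x, xy.y, z));
        Console.WriteLine(string.Join(",", averages)); // 3.67,3.67,3.67
    }
}
```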

Does rounding floats generate a reliable result in C#?

Given that I have two floats, a and b, and only care if they are "approximately equal", would something similar to the following work reliably, or would be it still be subject to precision issues?
eg:
Math.Round(a) == Math.Round(b)
Alternatively, is there a way to reliably round them to the nearest integer? If the above doesn't work, then I assume simply doing (int)Math.Round(a) won't be reliable either.
EDIT: I should have predicted I'd get answers like this, but I'm not trying to determine 'closeness' of the two values. Assuming the logic above is sound, will the above work, or is there a chance that I will get something like 3.0 == 3.0001?
Nothing of this kind will work. There is always a sharp boundary where very similar numbers are rounded to different values.
In your first example, think of a = 0.4999999 and b = 0.5000001: very close, but they round to different integers.
Another problem: if the numbers are large, rounding to an integer has no effect at all, and even (relatively) very close numbers will already have an absolute difference > 1.
If you care about determinism, you're out of luck with float and double. You just can't get those deterministic on .net. Use Decimal.
IF you really want it reliable then you will have to use Decimal instead of float... although that is much slower...
EDIT:
With ((int)Math.Round(a)) == ((int)Math.Round(b)) you avoid the 3.0 == 3.0001 problem, BUT all the other pitfalls mentioned in your post and in the answers still apply...
You can complicate the logic to try to make it a bit more reliable (example see below which could be packaged nicely into some method) but it will never be really reliable...
// 1 = near, 2 = nearer, 3 = even nearer, 4 = nearest
int HowNear = 0;
if (((int)Math.Round(a)) == ((int)Math.Round(b)))
    HowNear++;
if (((int)Math.Floor(a)) == ((int)Math.Floor(b)))
    HowNear++;
if (((int)Math.Ceiling(a)) == ((int)Math.Ceiling(b)))
    HowNear++;
if (Math.Round(a) == Math.Round(b))
    HowNear++;
The correct way to convert from double/float to integers is:
(int)Math.Round(a)
having done this you will always get whole numbers which can be tested for equality. If you have numbers which you know are going to be virtually whole numbers (e.g. the result of 6.0/3.0) then this will work great. The recommended way of checking two numbers which are doubles/floats are approximately equal is:
Math.Abs(a-b)<tolerance
where tolerance is a double value that determines how similar they should be, for example, if you want them to be within 1 unit of each other you could use a tolerance of 1.0 which would give you similar accuracy to Math.Round and comparing the results, but is well behaved when you get two values which are very close to half way between two integers.
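The tolerance check can be packaged as a small helper; a sketch using the 0.4999999/0.5000001 pair from the earlier answer to show why it behaves better than rounding at the half-way boundary:

```csharp
using System;

class ToleranceDemo
{
    // "Approximately equal" via an absolute tolerance.
    static bool ApproximatelyEqual(double a, double b, double tolerance) =>
        Math.Abs(a - b) < tolerance;

    static void Main()
    {
        double a = 0.4999999, b = 0.5000001;

        // Rounding puts these near-equal values on opposite sides of 0.5...
        Console.WriteLine(Math.Round(a) == Math.Round(b)); // False

        // ...while the tolerance check behaves sensibly at the boundary.
        Console.WriteLine(ApproximatelyEqual(a, b, 0.001)); // True
    }
}
```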
You could use the Floor or Ceiling methods to do the rounding.
"approximately equal" is your answer actually, a pseudoexample:
double tollerance = 0.03;
if(Math.Abs(a-b)<=tollerance )
// these numbers are equal !
else
//non equal
EDIT
Or if you want to be more "precise":
int aint = (int)(a*100); // 100 is rounding tollerance
int bint = (int)(b*100); // 100 is rounding tollerance
and after
if(Math.Abs(aint -bint )<=tollerance ) // tollerance has to be integer in this case, obviously
// these numbers are equal !
else
//non equal

Why can't C# calculate exact values of mathematical functions

Why can't C# do any exact operations?
Math.Pow(Math.Sqrt(2.0),2) == 2.0000000000000004
I know how doubles work, I know where the rounding error comes from, I know that it's almost the correct value, and I know that you can't store infinitely many digits in a finite double. But why isn't there a way for C# to calculate it exactly, when my calculator can?
Edit
It's not about my calculator, I was just giving an example:
http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2
Cheers
Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.
Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.
If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
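The behaviour under discussion is easy to reproduce; a small sketch (the printed value is the one reported in the question, and a tolerance check is the usual workaround):

```csharp
using System;

class SqrtDemo
{
    static void Main()
    {
        // Square root of 2 is irrational, so Math.Sqrt returns the nearest
        // double; squaring that lands just off 2.0.
        double result = Math.Pow(Math.Sqrt(2.0), 2);
        Console.WriteLine(result);        // 2.0000000000000004
        Console.WriteLine(result == 2.0); // False

        // Comparing with a tolerance recovers the "exact" answer.
        Console.WriteLine(Math.Abs(result - 2.0) < 1e-9); // True
    }
}
```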
Your calculator is not calculating it exactly, it just that the rounding error is so small that it's not displayed.
I believe most calculators use binary-coded decimal, which is the equivalent of C#'s decimal type (and thus entirely accurate). That is, each byte contains two decimal digits of the number, and the maths is done via logarithms.
What makes you think your calculator can do it? It's almost certainly displaying less digits than it calculates with and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).
I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.
The square root of 2 is one of those annoying irrational numbers like PI and therefore can't be represented with normal IEEE754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic math where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.
The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify equations (among other things), but most calculators will display only a portion of the real value calculated, after applying a round function to the result. For example, a calculator might store (let's say) 54 decimals of the square root of 2, but display only 12 rounded decimals. Thus taking the square root of 2 and then raising the result to the power of 2 would return the same value, since the result is rounded. In any case, unless the calculator can keep an infinite number of decimals, you'll always get a best-approximation result from complex operations.
By the way, try to represent 0.1 in binary and you'll realize that you can't represent it exactly; you'll end up with (something like) 0.10000000000..01.
Your calculator has methods which recognize and manipulate irrational input values.
For example: 2^(1/2) is likely not evaluated to a number in the calculator if you do not explicitly tell it to do so (as in the ti89/92).
Additionally, the calculator has logic it can use to manipulate them, such as x^(1/2) * y^(1/2) = (x*y)^(1/2), where it can then wash, rinse, repeat the method for working with irrational values.
If you were to give C# some method to do this, I suppose it could as well. After all, algebraic solvers such as Mathematica are not magical.
It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values to mathematical calculations, something not covered by the CPU.
The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations as a program you will have simplified it already, or you will only need a numerical result.
2.0000000000000004 and 2.0 are both represented as binary 10.0 in single precision (the difference is smaller than single precision can resolve). In your case, using single precision in C# should give the exact answer.
For your other example, Wolfram Alpha may use higher than machine precision for the calculation. This adds a big performance penalty. For instance, in Mathematica, going to higher precision makes calculations about 300 times slower:
k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]
It's 0.01 second vs 3 seconds on my machine
You can see the difference in results between single precision and double precision by doing something like the following in Java:
public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}
You can see that the single-precision result is exact, whereas the double-precision result is off by one bit.
