How to convert from double to decimal in LINQ to Entities - C#

Suppose we have a table T which has two columns A and B with float and money types respectively. I want to write a LINQ query like the following T-SQL statement:
Select A, B, A * B as C
From SomeTable
Where C < 1000
I tried to cast it like the following:
var list = (from row in model.Table
where ((decimal)row.A) * row.B < 1000
select new { A = row.A,
B = row.B ,
C = ((decimal)row.A) * row.B}
).ToList();
but it does not allow the cast operation. It throws an exception:
Casting to Decimal is not supported in Linq to Entity queries, because
the required precision and scale information cannot be inferred.
My question is: how do I convert double to decimal in LINQ? I don't want to fetch the data from the database first.
Update:
I noticed that converting decimal to double works, but the reverse operation throws the exception. So,
Why can't we convert double to decimal? Does SQL Server use the same mechanism in T-SQL too? Doesn't it affect precision?

The difference between a float (double) and a decimal is that a float is binary-based and therefore not decimal-precise. If you give the float a value of 10.123, then internally it could hold a value like 10.1229999999999, which is very near to 10.123, but not exactly it.
A decimal with a precision of x decimals will always be accurate up to the x-th decimal.
The designer of your database thought that column A didn't need decimal accuracy (or was just careless). It is not meaningful to give the result of a calculation more precision than its input parameters.
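A quick way to see this difference in code (the exact internal digits vary; only the comparisons matter):

```csharp
using System;

class FloatVsDecimal
{
    static void Main()
    {
        // 10.123 has no exact binary representation, so float and double
        // each round it differently, and the widened float does not equal
        // the double literal.
        float f = 10.123f;
        Console.WriteLine((double)f == 10.123);  // False

        // decimal stores base-10 digits, so 10.123m is exactly 10.123.
        decimal d = 10.123m;
        Console.WriteLine(d == 10.123m);         // True
    }
}
```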
If you really need to convert your result into a decimal, calculate your formula as float / double, and cast to decimal after AsEnumerable:
(I'm not very familiar with your syntax, so I'll use the extension method syntax)
var list = model.Table
    .Where(row => row.A * (double)row.B < 1000)  // decimal -> double is the supported cast direction
    .Select(row => new
    {
        A = row.A,
        B = row.B,
    })
    .AsEnumerable()
    .Select(row => new
    {
        A = row.A,
        B = row.B,
        C = (decimal)row.A * row.B,
    });
Meaning:
From my Table, take only rows that have values such that row.A * row.B
< 1000.
From each selected row, select the values from columns A and B.
Transfer those two values to local memory (= AsEnumerable),
and for every transferred row create a new object with three properties:
A and B have the transferred values.
C gets the product of the decimal values of the transferred A and B.
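For reference, the same pipeline in the question's query syntax would look like the sketch below. It runs against a hypothetical in-memory list standing in for model.Table, so that it compiles on its own; against a real EF model the AsEnumerable() boundary works the same way:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Row { public double A; public decimal B; }

class Program
{
    // Hypothetical stand-in for model.Table.
    static readonly List<Row> Table = new List<Row>
    {
        new Row { A = 10.5, B = 20m },   // 10.5 * 20 = 210  -> kept
        new Row { A = 50.0, B = 30m },   // 50 * 30 = 1500   -> filtered out
    };

    static void Main()
    {
        var list = (from row in Table
                    where row.A * (double)row.B < 1000  // decimal -> double: supported direction
                    select new { row.A, row.B })
                   .AsEnumerable()                       // everything below runs in local memory
                   .Select(row => new
                   {
                       row.A,
                       row.B,
                       C = (decimal)row.A * row.B        // decimal cast is now legal
                   })
                   .ToList();

        Console.WriteLine(list.Count);  // 1
        Console.WriteLine(list[0].C);   // 210.0
    }
}
```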

You can avoid AsEnumerable() by telling Entity Framework how many fractional digits you want. Truncating row.A to a fixed number of decimals lets the provider infer the precision and scale:
var list = (from row in model.Table
            where ((decimal)((int)(row.A * 100)) / 100) * row.B < 1000
            select new
            {
                A = row.A,
                B = row.B,
                C = ((decimal)((int)(row.A * 100)) / 100) * row.B
            }).ToList();
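A minimal illustration of that truncation arithmetic (the value 10.678 is made up). Note that (int) truncates toward zero rather than rounds, so the digits beyond the second decimal are cut, not rounded:

```csharp
using System;

class TruncateToScale
{
    static void Main()
    {
        double a = 10.678;

        // Multiply by 100, truncate to int, cast to decimal, divide by 100:
        // the result has a fixed scale of 2, which is what lets the provider
        // infer precision and scale in the LINQ query above.
        decimal twoDecimals = (decimal)((int)(a * 100)) / 100;

        Console.WriteLine(twoDecimals);  // 10.67
    }
}
```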

Related

Conversion overflows when divide decimal number

There is a property which is decimal. I want to divide it by 1,000,000 to shorten the number, and I declare a variable which is decimal too. When I divide, it raises a "conversion overflow" error.
My code is like below:
PrmSales and Amt are both decimal.
var amt = (from p in db.FactTotalAmount
           group p by p.FromDate into g
           select new outputList
           {
               Amt = g.Sum(x => x.PrmSales) / 1000000
           }).ToList();
The error comes from the type of the field in the database being smaller (lower precision) than the type of the property in code; because of that I receive the error.
Thank you nilsk!
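If the database column cannot be widened, one way to sidestep the provider's precision handling is to let the database compute only the sum and perform the division in memory, after AsEnumerable(). This is a sketch using a hypothetical in-memory list in place of db.FactTotalAmount:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Fact { public DateTime FromDate; public decimal PrmSales; }

class Program
{
    // Hypothetical stand-in for db.FactTotalAmount.
    static readonly List<Fact> FactTotalAmount = new List<Fact>
    {
        new Fact { FromDate = new DateTime(2020, 1, 1), PrmSales = 1500000m },
        new Fact { FromDate = new DateTime(2020, 1, 1), PrmSales = 1000000m },
    };

    static void Main()
    {
        var amt = FactTotalAmount
            .GroupBy(p => p.FromDate)
            .Select(g => g.Sum(x => x.PrmSales))  // against a real context, the sum runs in the database
            .AsEnumerable()                        // the division below happens client-side
            .Select(sum => sum / 1000000m)
            .ToList();

        Console.WriteLine(amt[0]);  // 2.5
    }
}
```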

Return Standard Deviation of column values with other column value condition LINQ

I have a table, db.Men, that has three columns
nvarchar "Name"
int "Age"
nvarchar "Status"
The Status column has three values only: "Happy", "Normal", and "Bad".
I need to compute the average and standard deviation of the ages of "Happy" or "Normal" men:
using System.Linq;
var ctx = new DataClasses1DataContext();
double? avg = (from n in ctx.men
               where n.status == "Happy"
               select n.age).Average();
int? sum = (from n in ctx.men
            where n.status == "Happy"
            select n.age).Sum();
I computed the average and the sum. How can I compute the standard deviation in this conditional status?
Stick the Happy ages (or one list for each Status) into a list, and then calculate the standard deviation. This article has a good process for determining standard deviation:
Standard deviation of generic list?
so something like:
var happyAges = ctx.Men.Where(i => i.status == "Happy").Select(i => i.age);
var happySd = CalculateStdDev(happyAges);
Also, you could make that Standard Deviation method static, and an extension method to do it all in one request.
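A sketch of such an extension method (CalculateStdDev is the hypothetical name used above; this version computes the population standard deviation and skips NULL ages):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class StatsExtensions
{
    // Population standard deviation of a sequence of nullable ints,
    // ignoring nulls (e.g. rows where age is NULL).
    public static double CalculateStdDev(this IEnumerable<int?> values)
    {
        var data = values.Where(v => v.HasValue)
                         .Select(v => (double)v.Value)
                         .ToList();
        if (data.Count == 0) return 0.0;

        double avg = data.Average();
        double sumOfSquares = data.Sum(v => (v - avg) * (v - avg));
        return Math.Sqrt(sumOfSquares / data.Count);
    }
}
```

With that in place, happyAges.CalculateStdDev() works directly on the projected ages.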
I did this and it worked for me:
int?[] array = (from a in ctx.men
                where a.status == "Happy"
                select a.age).ToArray();
double? AVG = array.Average();
double? SumOfSquaresOfDifferences = array.Select(val => (val - AVG) * (val - AVG)).Sum();
double? SD = Math.Sqrt(Convert.ToDouble(SumOfSquaresOfDifferences) / array.Length);

Linq data type comparison with Double

In my Linq query I have a where statement that looks like this
&& vio.Bows.Any(nw => nw.XCoordinate.Equals(currVio.XCoordinate)))
values are
nw.XCoordinate = 4056.48751252685
currVio.XCoordinate = 4056.488
Thus the Equals comparison is not working. What is the easiest way to round?
public double XCoordinate { get; set; }
You can use the usual way of comparing double for proximity by calculating the absolute difference, and comparing it to a small value:
Math.Abs(x - y) < 1E-8 // 1E-8 is 0.00000001
For example, you can use this approach in a LINQ query like this:
&& vio.Bows.Any(nw => Math.Abs(nw.XCoordinate-currVio.XCoordinate) < 0.001)
You could also use Math.Round i.e.:
double x = 4056.48751252685;
var newx = Math.Round(x,3);
So if you knew you always wanted 3 decimal places you could:
&& vio.Bows.Any(nw => Math.Round(nw.XCoordinate, 3).Equals(Math.Round(currVio.XCoordinate, 3))))
You could go further and implement IEquatable<T> and override the Equals method: determine which of the two values has the fewest decimal places, round both to that precision, and compare.
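The tolerance comparison above can be wrapped in a small helper; the epsilon of 0.001 is an assumption matched to the three-decimal data in the question:

```csharp
using System;

static class DoubleComparison
{
    // Two doubles are "roughly equal" when their absolute difference
    // is below the given tolerance.
    public static bool RoughlyEqual(double x, double y, double epsilon = 0.001)
    {
        return Math.Abs(x - y) < epsilon;
    }
}
```

Note that a provider-translated query (LINQ to Entities) cannot call a local method like this, so the helper fits LINQ to Objects; inside a translated query, keep the inline Math.Abs form.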

How can I add an accurate difference column to a DataTable?

I need to compare numeric data from two datatables with the same schema.
For example two tables with data such as
PKColumn | numCol | DecimalCol
will end up looking like this after the merge
PKColumn | numCol 1 | numCol 2 | numCol Diff | DecimalCol 1 | DecimalCol 2 | DecimalCol Diff
Initially, I just created the diff column as an expression (col1 - col2), but I can end up with unusual-looking values:
col1 col2 diff c1 c2 diff
12.8 14.6 -1.80000019 33.2 29.8 3.40000153
But what I want is:
col1 col2 diff c1 c2 diff
12.8 14.6 -1.8 33.2 29.8 3.4
So my current solution is to manually iterate through the rows and set the value using this method:
private static void SetDifference(DataRow dataRow, DataColumn numericColumn)
{
    dynamic value1 = dataRow[numericColumn.Ordinal - 2];
    dynamic value2 = dataRow[numericColumn.Ordinal - 1];
    if (IsDbNullOrNullOrEmpty(value1) || IsDbNullOrNullOrEmpty(value2)) return;
    // now find out the most decimals used and round to this value
    string valueAsString = value1.ToString(CultureInfo.InvariantCulture);
    int numOfDecimals = valueAsString.SkipWhile(c => c != '.').Skip(1).Count();
    valueAsString = value2.ToString(CultureInfo.InvariantCulture);
    numOfDecimals = System.Math.Max(numOfDecimals, valueAsString.SkipWhile(c => c != '.').Skip(1).Count());
    double result = Convert.ToDouble(value1 - value2);
    dataRow[numericColumn] = System.Math.Round(result, numOfDecimals);
}
But it feels clunky and not good for performance. Suggestions for improvements are welcome.
EDIT: changed column names from "int" to "num" to avoid confusion
EDIT: also, I don't always want to round to one decimal place. I may have data like numA: 28, numB: 75.7999954, so I want a diff of -47.7999954.
If you have columns that store integer numbers, then use an integer type for them! You probably used a single-precision floating-point type. The same holds for decimal values: use a decimal column type for them and this rounding problem will vanish!
If you use a true decimal type (not float, single, real, or double) you will not encounter rounding problems. I do not know which database you are using, but with SQL Server the correct column type would be decimal.
UPDATE
Since you cannot change the column type, round it to the requested precision like this
result = Math.Round((c1-c2) * 10)/10; // One decimal
result = Math.Round((c1-c2) * 100)/100; // Two decimals
result = Math.Round((c1-c2) * 1000)/1000; // Three decimals
Math.Round(3.141592654 * 10000)/10000 ===> 3.1416
--
UPDATE
Try this, it should perform well in most cases
decimal result = (decimal)col1 - (decimal)col2;
Test
12.8f - 14.6f ===> -1.80000019
(decimal)12.8f - (decimal)14.6f ===> -1.8
Based on Olivier's comments I've updated my code to look like this:
if (numericColumn.DataType == typeof(int))
{
    dataRow[numericColumn] = System.Math.Abs(value1 - value2);
}
else
{
    dataRow[numericColumn] = Convert.ToDecimal(value1) - Convert.ToDecimal(value2);
}
Seems a lot cleaner and it gets the job done.
Thanks for the help.
You should do this in the SQL query using
ROUND(t1.[numCol 1] - t2.[numCol 2], 1)

Convert Sum to an Aggregate product expression

I have this expression:
group i by i.ItemId into g
select new
{
Id = g.Key,
Score = g.Sum(i => i.Score)
}).ToDictionary(o => o.Id, o => o.Score);
and instead of g.Sum I'd like to get the mathematical product using Aggregate.
To make sure it worked the same as .Sum (before changing it to a product), I tried to make an Aggregate call that would just return the sum...
Score = g.Aggregate(0.0, (sum, nextItem) => sum + nextItem.Score.Value)
However, this does not give the same result as using .Sum. Any ideas why?
nextItem.Score is of type double?.
public static class MyExtensions
{
    public static double Product(this IEnumerable<double?> enumerable)
    {
        return enumerable
            .Aggregate(1.0, (accumulator, current) => accumulator * current.Value);
    }
}
The thing is that in your example you are starting the multiplication with 0.0, and a multiplication by zero yields zero, so in the end the result will be zero.
Correct is to use the identity element of multiplication. While adding zero to a number leaves the number unchanged, the same holds for multiplying by 1. Hence, the correct way to start a product aggregate is to kick off the multiplication with the number 1.0.
If you aren't sure about the initial value in your aggregate query and you don't actually need one (as in this example), I would recommend not using it at all.
You can use Aggregate overload which doesn't take the initial value - http://msdn.microsoft.com/en-us/library/bb549218.aspx
Like this
var product = sequence.Aggregate((acc, x) => acc * x);
Which evaluates to ((item1 * item2) * item3) * ... * itemN.
instead of
var product = sequence.Aggregate(1.0, (acc, x) => acc * x);
Which evaluates to ((1.0 * item1) * item2) * ... * itemN. (Note that Aggregate is a left fold, and the first lambda parameter is the accumulator.)
//edit:
There is one important difference though: the former throws an InvalidOperationException when the input sequence is empty, while the latter returns the seed value, i.e. 1.0.
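A quick demonstration of both behaviors (the values are illustrative):

```csharp
using System;
using System.Linq;

class AggregateEmpty
{
    static void Main()
    {
        var values = new[] { 2.0, 3.0, 4.0 };
        var empty = Enumerable.Empty<double>();

        // Left fold in both cases: ((2 * 3) * 4) = 24.
        Console.WriteLine(values.Aggregate((acc, x) => acc * x));       // 24
        Console.WriteLine(values.Aggregate(1.0, (acc, x) => acc * x));  // 24

        // With a seed, an empty sequence just yields the seed.
        Console.WriteLine(empty.Aggregate(1.0, (acc, x) => acc * x));   // 1

        // Without a seed, an empty sequence throws.
        try
        {
            empty.Aggregate((acc, x) => acc * x);
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("empty sequence: no seed to fall back on");
        }
    }
}
```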
