I made some code to compute the sine and cosine, but the code is not very good, and I want to know whether it is possible to compute the values with LINQ.
This is my code to compute the sine:
var primes = PrimeNumbers(3, 15);
bool SumSub = false;
decimal seno = (decimal)(nGrau * nSeno);
foreach (var a in primes)
{
    if (SumSub == false)
    {
        seno -= (decimal)Math.Pow(nGrau, (double)a) / Factorial(a);
        SumSub = true;
    }
    else
    {
        seno += (decimal)Math.Pow(nGrau, (double)a) / Factorial(a);
        SumSub = false;
    }
}
Console.WriteLine(seno);
Is it possible to write code that computes the sine of degrees using LINQ?
Something like this, perhaps:
var sineResult = listDouble.Select((item, index) =>
        new { i = (index % 2) * 2 - 1, o = item })
    .Aggregate(seno, (result, b) =>
        result - b.i * ((decimal)Math.Pow(nGrau, (double)b.o) / Factorial(b.o)));
The code
i = (index % 2) * 2 - 1
gives you alternating -1 and 1 (it starts at -1 for index 0).
The Aggregate statement sums the values, multiplying each value by either -1 or 1.
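A quick standalone check of that sign expression on its own:

```csharp
using System;
using System.Linq;

// (index % 2) * 2 - 1 yields -1 for even indices and 1 for odd ones,
// so the sequence starts with -1 at index 0
var signs = Enumerable.Range(0, 6).Select(index => (index % 2) * 2 - 1).ToArray();
Console.WriteLine(string.Join(" ", signs)); // -1 1 -1 1 -1 1
```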
You could use Aggregate:
decimal seno = PrimeNumbers(3, 15)
    .Aggregate(
        new { sub = false, sum = (decimal)(nGrau * nSeno) },
        (x, p) => new {
            sub = !x.sub,
            sum = x.sum + (x.sub ? 1 : -1) * (decimal)Math.Pow(nGrau, (double)p) / Factorial(p)
        },
        x => x.sum);
I didn't test that, but I think it should work.
By the way, I don't think it's more readable or better than your solution. If I were you I would go with the foreach loop, but improve it a little bit:
foreach (var a in primes)
{
    seno += (SumSub ? 1 : -1) * (decimal)Math.Pow(nGrau, (double)a) / Factorial(a);
    SumSub = !SumSub;
}
Here's a function that adds up the first 10 terms of the Taylor series approximation of cosine:
var theta = 1.0m; // angle in radians
Enumerable.Range(1, 10).Aggregate(
    new { term = 1.0m, accum = 0.0m },
    (state, n) => new {
        term = -state.term * theta * theta / (2 * n - 1) / (2 * n),
        accum = state.accum + state.term },
    state => state.accum)
See how it doesn't use an if, Math.Pow, or Factorial? The alternating signs are created simply by multiplying the last term by -1. Computing the ever-larger exponents and factorials on each term is not only expensive and a source of precision loss, it is also unnecessary.
To get x^2, x^4, x^6,... all you have to do is multiply each successive term by x^2. To get 1/1!, 1/3!, 1/5!,... all you have to do is divide each successive term by the next two numbers in the series. Start with 1; to get 1/3!, divide by 2 and then 3; to get 1/5! divide by 4 and then 5, and so on.
Note that I used the m suffix to denote decimal literals, because I'm assuming that you're trying to do your calculations in decimal for some reason (otherwise you would just use Math.Cos).
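Put together as a runnable sketch (the same 10-term recurrence, checked against Math.Cos for reference; the variable names are mine):

```csharp
using System;
using System.Linq;

var theta = 1.0m; // angle in radians
var cosine = Enumerable.Range(1, 10).Aggregate(
    new { term = 1.0m, accum = 0.0m },
    (state, n) => new {
        // next term = previous term * -theta^2 / ((2n-1)(2n))
        term = -state.term * theta * theta / (2 * n - 1) / (2 * n),
        // accumulate the term computed on the previous step
        accum = state.accum + state.term },
    state => state.accum);

Console.WriteLine(cosine);        // decimal Taylor approximation of cos(1)
Console.WriteLine(Math.Cos(1.0)); // built-in double result, for comparison
```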
This is a fairly simple question, yet I am struggling to find an elegant solution!
Suppose you want to iterate from 0.0 to 1.0 in discrete steps. You'll use a for loop with an int counter, since looping over float values is discouraged due to precision errors.
The simplest approach is the following, but it's flawed: it never outputs 1.0:
private void sample1(int steps = 100)
{
    var floats = new float[steps];  // 100 values
    for (var i = 0; i < steps; i++) // goes from 0 to 99
    {
        var f = 1.0f / steps * i;   // will only go from 0.0 to 0.99, never reaches 1.0
        floats[i] = f;
    }
}
A first attempt to address the issue; it works, but we end up with one extra value:
private void sample2(int steps = 100)
{
    var n = steps + 1;
    var floats = new float[n];  // 101 values now!
    for (var i = 0; i < n; i++) // goes from 0 to 100
    {
        var f = 1.0f / n * i;   // goes from 0.0 to 1.0, but now we have an extra value
        floats[i] = f;
    }
}
A second attempt: we don't have an extra value, but the last one is never really 1.0:
private void sample3(int steps = 100)
{
    var floats = new float[steps];  // 100 values
    for (var i = 0; i < steps; i++) // goes from 0 to 99
    {
        var f = 1.0f / steps * i * ((steps - 1.0f) / steps); // goes from 0.0 to 0.999...
        floats[i] = f;
    }
}
Question:
How does one properly loop from 0.0 to 1.0 as discrete steps in a for loop that uses int?
(pseudo-code is just fine)
Count out loud to 10. If you included zero in your counting, you actually spoke eleven numbers, not ten. Since you want to include zero without making any progress toward your goal of 1.0f, the other steps - 1 slices of 1.0f must sum up to 1.0.
Or to think about it from another angle:
To get 1, your numerator and denominator need to be equal. Since your for loop runs from 0 to steps exclusive, or steps - 1 inclusive, your denominator must be steps - 1.
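In code, the fix is to divide by steps - 1 instead of steps (a sketch along the lines of sample1 above):

```csharp
using System;

static float[] Sample(int steps)
{
    var floats = new float[steps];          // still exactly `steps` values
    for (var i = 0; i < steps; i++)         // goes from 0 to steps - 1
        floats[i] = (float)i / (steps - 1); // i = steps - 1 yields exactly 1.0f
    return floats;
}

var values = Sample(100);
Console.WriteLine(values[0]);  // 0
Console.WriteLine(values[99]); // 1
```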
First, I would like to thank everyone involved in this magnificent project; Math.NET saved my life!
I have a few questions about linear and nonlinear regression. I am a civil engineer, and while working on my Master's degree I needed to develop a C# application that calculates the rheological parameters of concrete from data acquired in a test.
One of the models that describes the rheological behavior of concrete is the Herschel-Bulkley model, which has this formula:
y = T + K*x^n
x (the shear rate) and y (the shear stress) are the values obtained from the test, while T, K and n are the parameters I need to determine.
I know that the value of T is between 0 and Ymin (Ymin is the smallest data point from the test), so here is what I did:
Since it is a nonlinear equation, I had to make it linear, like this:
ln(y - T) = ln(K) + n*ln(x)
Then I create an array of possible values of T, from 0 to Ymin, and try each value in the equation;
through linear regression I find the values of K and n;
then I calculate the SSD and store the results in an array;
after I have tried all the possible values of T, I see which one had the smallest SSD, and use it to find the optimal K and n.
This method works, but it is very slow, and I feel it is not as smart or elegant as it should be; there must be a better way to do it, and I was hoping to find it here.
Here is the code that I used:
public static double HerschelBulkley(double shearRate, double tau0, double k, double n)
{
    var t = tau0 + k * Math.Pow(shearRate, n);
    return t;
}

public static (double Tau0, double K, double N, double DeltaMin, double RSquared) HerschelBulkleyModel(double[] shear, double[] shearRate, double step = 1000.0)
{
    // Number of candidate values between 0.0 and shear.Min()
    var sm = (int)Math.Floor(shear.Min() * step);

    // Populate the array of tau0 candidates with the values from 0 to sm
    var tau0Array = Enumerable.Range(0, sm).Select(t => t / step).ToArray();
    var kArray = new double[sm];
    var nArray = new double[sm];
    var deltaArray = new double[sm];
    var rSquaredArray = new double[sm];
    var shearRateLn = shearRate.Select(s => Math.Log(s)).ToArray();

    for (var i = 0; i < sm; i++)
    {
        var shearLn = shear.Select(s => Math.Log(s - tau0Array[i])).ToArray();
        var param = Fit.Line(shearRateLn, shearLn);
        kArray[i] = Math.Exp(param.Item1);
        nArray[i] = param.Item2;
        var shearHerschel = shearRate.Select(sr => HerschelBulkley(sr, tau0Array[i], kArray[i], nArray[i])).ToArray();
        deltaArray[i] = Distance.SSD(shearHerschel, shear);
        rSquaredArray[i] = GoodnessOfFit.RSquared(shearHerschel, shear);
    }

    var deltaMin = deltaArray.Min();
    var index = Array.IndexOf(deltaArray, deltaMin);
    var tau0 = tau0Array[index];
    var k = kArray[index];
    var n = nArray[index];
    var rSquared = rSquaredArray[index];
    return (tau0, k, n, deltaMin, rSquared);
}
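One way to avoid sweeping a fixed grid of T values is to search for T directly, assuming the SSD is unimodal in T (which it typically is for data that actually follows the model). The sketch below is mine, not Math.NET code: to keep it self-contained it replaces Fit.Line with an inline least-squares fit on the log-transformed data, and it checks itself by recovering known parameters from synthetic data; with Math.NET you would keep Fit.Line and Distance.SSD as in the question.

```csharp
using System;
using System.Linq;

// Fit ln(y - T) = ln(K) + n*ln(x) by ordinary least squares for a given T
static (double K, double N) FitPower(double[] x, double[] y, double t)
{
    var lx = x.Select(Math.Log).ToArray();
    var ly = y.Select(v => Math.Log(v - t)).ToArray();
    double mx = lx.Average(), my = ly.Average();
    double slope = lx.Zip(ly, (a, b) => (a - mx) * (b - my)).Sum()
                 / lx.Sum(a => (a - mx) * (a - mx));
    return (Math.Exp(my - slope * mx), slope); // K = e^intercept, n = slope
}

// Sum of squared differences between model and data for a given T
static double Ssd(double[] x, double[] y, double t)
{
    var (k, n) = FitPower(x, y, t);
    return x.Zip(y, (xi, yi) => Math.Pow(t + k * Math.Pow(xi, n) - yi, 2)).Sum();
}

// Synthetic data with known parameters T = 2, K = 0.5, n = 1.3
var shearRate = Enumerable.Range(1, 20).Select(i => (double)i).ToArray();
var shear = shearRate.Select(x => 2.0 + 0.5 * Math.Pow(x, 1.3)).ToArray();

// Ternary search over T in [0, Ymin), shrinking the bracket each step
double lo = 0.0, hi = shear.Min() * 0.999;
while (hi - lo > 1e-9)
{
    double m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
    if (Ssd(shearRate, shear, m1) < Ssd(shearRate, shear, m2)) hi = m2;
    else lo = m1;
}

double tau0 = (lo + hi) / 2;
var (kBest, nBest) = FitPower(shearRate, shear, tau0);
Console.WriteLine($"T = {tau0:F4}, K = {kBest:F4}, n = {nBest:F4}");
```

This converges in a few dozen SSD evaluations instead of thousands, at the cost of the unimodality assumption.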
I want to calculate all possible distributions (using a certain step) of a number of items; the sum has to add up to 1.
My first approach was the following:
var percentages = new List<double>(new double[3]);
while (Math.Abs(percentages.Last() - 1.0) > 0.01)
{
    Increment(percentages, 0);
    if (Math.Abs(percentages.Sum() - 1.0) < 0.01)
    {
        percentages.ForEach(x => Console.Write("{0}\t", x));
        Console.WriteLine();
    }
}
private void Increment(List<double> list, int i)
{
    if (list.Count > i)
    {
        list[i] += 0.1;
        if (list[i] >= 1)
        {
            list[i] = 0;
            Increment(list, ++i);
        }
    }
}
Which outputs the wanted results:
1 0 0
0.9 0.1 0
0.8 0.2 0
0.7 0.3 0
0.6 0.4 0
0.5 0.5 0
0.4 0.6 0
0.3 0.7 0
0.2 0.8 0
0.1 0.9 0
0 1 0
0.9 0 0.1
..
I'm wondering how to speed up the calculation, as the number of items can become very large (>20).
Obviously I calculate a lot of distributions just to throw them away because they don't add up to 1.
Any ideas?
This works nicely for 3 sets of numbers:
var query =
    from x in Enumerable.Range(0, 11)
    from y in Enumerable.Range(0, 11 - x)
    let z = 10 - x - y
    select new [] { x / 10.0, y / 10.0, z / 10.0 };

var percentages = query.ToList();
percentages.ForEach(ps => Console.WriteLine(String.Join("\t", ps)));
Here's a generalized version:
Func<int, int[], int[][]> generate = null;
generate = (n, ns) =>
    n == 1
        ? new int[][]
          {
              ns.Concat(new [] { 10 - ns.Sum() }).ToArray()
          }
        : Enumerable
            .Range(0, 11 - ns.Sum())
            .Select(x => ns.Concat(new [] { x }).ToArray())
            .SelectMany(xs => generate(n - 1, xs))
            .ToArray();

var elements = 4;
var percentages =
    generate(elements, new int[] { })
        .Select(xs => xs.Select(x => x / 10.0).ToArray())
        .ToList();
Just change the elements value to get the number of elements for the inner array.
At the risk of duplicating effort, here is a version that is fundamentally the same as the other answer. However, I already wrote it so I might as well share it. I'll also point out some differences that may or may not be important to the OP:
static void Main(string[] args)
{
    Permute(1, 0.1m, new decimal[3], 0);
}

static void Permute(decimal maxValue, decimal increment, decimal[] values, int currentValueIndex)
{
    if (currentValueIndex == values.Length - 1)
    {
        values[currentValueIndex] = maxValue;
        Console.WriteLine(string.Join(", ", values));
        return;
    }

    values[currentValueIndex] = 0;
    while (values[currentValueIndex] <= maxValue)
    {
        Permute(maxValue - values[currentValueIndex], increment, values, currentValueIndex + 1);
        values[currentValueIndex] += increment;
    }
}
Notes:
I use the decimal type here. In this particular case, it avoids the need for epsilon-based checks as in the original code.
I prefer also using string.Join() rather than issuing multiple calls to Console.Write().
Also in this case the use of List<T> doesn't seem beneficial, so my implementation uses an array.
But I admit, its basic algorithm is the same.
I would turn this inside out. Keep track of the remainder, and only increment up until that remainder. You can also speed things up by setting the last element to the only value that can work. That way every combination that you look at is going to be printable.
If you organize things this way, then you will probably find it good to put the print inside the recursive function.
I don't program in C#, but it might look something like this:
var percentages = new List<double>(new double[3]);
PrintCombinations(percentages, 0, 1.0);

private void PrintCombinations(List<double> list, int i, double r)
{
    if (list.Count > i + 1)
    {
        // try every value from 0 up to the remainder r (with a little slack for float error)
        for (double x = 0.0; x < r + 0.01; x += 0.1)
        {
            list[i] = x;
            PrintCombinations(list, i + 1, r - x);
        }
    }
    else
    {
        // last element: only one value can make the sum work
        list[i] = r;
        list.ForEach(v => Console.Write("{0}\t", v));
        Console.WriteLine();
    }
}
(Admittedly this does put the combinations in a different order. Fixing that is left as an exercise...)
If by 'distribution' you mean 'sum of 3 numbers in steps of 0.1 adding up to 1.0', how about this rather direct approach (the decimal type makes the 0.1 steps exact, and <= keeps the 1.0 endpoint):

for (decimal i = 0; i <= 1; i += 0.1m)
    for (decimal j = 0; j <= 1 - i; j += 0.1m)
    {
        Console.WriteLine(i + " " + j + " " + (1 - i - j));
    }
I found this code on http://rosettacode.org/wiki/Closest-pair_problem and adopted the C# version of the divide-and-conquer method of finding the closest pair of points. What I am trying to do is adapt it to find only the point closest to one specific point. I have googled quite a bit and searched this website, but found no examples quite like this. I am not entirely sure what to change so that it only checks the list against one point rather than searching the list for the two closest. I'd like my program to operate as fast as possible, because it could be searching a list of several thousand Points to find the one closest to my current coordinate Point.
public class Segment
{
    public Segment(PointF p1, PointF p2)
    {
        P1 = p1;
        P2 = p2;
    }

    public readonly PointF P1;
    public readonly PointF P2;

    public float Length()
    {
        return (float)Math.Sqrt(LengthSquared());
    }

    public float LengthSquared()
    {
        return (P1.X - P2.X) * (P1.X - P2.X)
             + (P1.Y - P2.Y) * (P1.Y - P2.Y);
    }
}
public static Segment Closest_BruteForce(List<PointF> points)
{
    int n = points.Count;
    var result = Enumerable.Range(0, n - 1)
        .SelectMany(i => Enumerable.Range(i + 1, n - (i + 1))
            .Select(j => new Segment(points[i], points[j])))
        .OrderBy(seg => seg.LengthSquared())
        .First();
    return result;
}
public static Segment MyClosestDivide(List<PointF> points)
{
    return MyClosestRec(points.OrderBy(p => p.X).ToList());
}

private static Segment MyClosestRec(List<PointF> pointsByX)
{
    int count = pointsByX.Count;
    if (count <= 4)
        return Closest_BruteForce(pointsByX);

    // left and right lists sorted by X, as order retained from full list
    var leftByX = pointsByX.Take(count / 2).ToList();
    var leftResult = MyClosestRec(leftByX);
    var rightByX = pointsByX.Skip(count / 2).ToList();
    var rightResult = MyClosestRec(rightByX);
    var result = rightResult.Length() < leftResult.Length() ? rightResult : leftResult;

    // There may be a shorter distance that crosses the divider;
    // thus, extract all the points within result.Length either side
    var midX = leftByX.Last().X;
    var bandWidth = result.Length();
    var inBandByX = pointsByX.Where(p => Math.Abs(midX - p.X) <= bandWidth);

    // Sort by Y, so we can efficiently check for closer pairs
    var inBandByY = inBandByX.OrderBy(p => p.Y).ToArray();
    int iLast = inBandByY.Length - 1;
    for (int i = 0; i < iLast; i++)
    {
        var pLower = inBandByY[i];
        for (int j = i + 1; j <= iLast; j++)
        {
            var pUpper = inBandByY[j];
            // Comparing each point to successively increasing Y values;
            // thus, can terminate as soon as deltaY is greater than best result
            if ((pUpper.Y - pLower.Y) >= result.Length())
                break;
            Segment segment = new Segment(pLower, pUpper);
            if (segment.Length() < result.Length())
                result = segment;
        }
    }
    return result;
}
I used this code in my program to see the actual difference in speed, and divide and conquer easily wins.
var randomizer = new Random(10);
var points = Enumerable.Range(0, 10000)
    .Select(i => new PointF((float)randomizer.NextDouble(), (float)randomizer.NextDouble()))
    .ToList();

Stopwatch sw = Stopwatch.StartNew();
var r1 = Closest_BruteForce(points);
sw.Stop();
//Debugger.Log(1, "", string.Format("Time used (Brute force) (float): {0} ms", sw.Elapsed.TotalMilliseconds));
richTextBox.AppendText(string.Format("Time used (Brute force) (float): {0} ms", sw.Elapsed.TotalMilliseconds));

Stopwatch sw2 = Stopwatch.StartNew();
var result2 = MyClosestDivide(points);
sw2.Stop();
//Debugger.Log(1, "", string.Format("Time used (Divide & Conquer): {0} ms", sw2.Elapsed.TotalMilliseconds));
richTextBox.AppendText(string.Format("Time used (Divide & Conquer): {0} ms", sw2.Elapsed.TotalMilliseconds));
//Assert.Equal(r1.Length(), result2.Length());
You can store the points in a better data structure that takes advantage of their position. Something like a quadtree.
The divide and conquer algorithm that you are trying to use doesn't really apply to this problem.
Don't use this algorithm at all, just go through the list one at a time comparing the distance to your reference point and at the end return the point that was the closest. This will be O(n).
You can probably add some extra speed ups but this should be good enough.
I can write some example code if you want.
You're mixing up two different problems. The only reason divide and conquer for the closest-pair problem is faster than brute force is that it avoids comparing every point to every other point, so it gets O(n log n) instead of O(n^2). But finding the point closest to just one point is O(n). How can you find the closest point in a list of n points while examining fewer than n points? What you're trying to do doesn't even make sense.
I can't say why your divide and conquer runs in less time than your brute force; maybe the LINQ implementation runs slower. But I think you'll find two things: 1) even if, in absolute terms, your divide-and-conquer implementation for one point runs in less time than your brute-force implementation for one point, they still have the same O(n); 2) if you just try a simple foreach loop and record the lowest squared distance, you'll get even better absolute time than your divide and conquer, and it will still be O(n).
public static float LengthSquared(PointF P1, PointF P2)
{
    return (P1.X - P2.X) * (P1.X - P2.X)
         + (P1.Y - P2.Y) * (P1.Y - P2.Y);
}
If, as your question states, you want to compare one (known) point to a list of points to find the closest, then use this code.
public static Segment Closest_BruteForce(PointF P1, List<PointF> points)
{
    // PointF is a struct, so use default rather than null
    PointF closest = default(PointF);
    float minDist = float.MaxValue;
    foreach (PointF P2 in points)
    {
        if (P1 != P2)
        {
            float temp = LengthSquared(P1, P2);
            if (temp < minDist)
            {
                minDist = temp;
                closest = P2;
            }
        }
    }
    return new Segment(P1, closest);
}
However, if, as your example shows, you want to find the closest two points from a list of points, try the below.
public static Segment Closest_BruteForce(List<PointF> points)
{
    // initialize so the locals are definitely assigned
    PointF closest1 = default(PointF);
    PointF closest2 = default(PointF);
    float minDist = float.MaxValue;
    for (int x = 0; x < points.Count; x++)
    {
        PointF P1 = points[x];
        for (int y = x + 1; y < points.Count; y++)
        {
            PointF P2 = points[y];
            float temp = LengthSquared(P1, P2);
            if (temp < minDist)
            {
                minDist = temp;
                closest1 = P1;
                closest2 = P2;
            }
        }
    }
    return new Segment(closest1, closest2);
}
Note: the code above was written in the browser and may have some syntax errors.
For example:
I have a list of numbers like {12, 23, 34, 45, 65} that I want to map to values between 0 and 1, like {0, 0.2, 0.4, 0.6, 0.8}. Does anybody know an algorithm for this?
double max = 1.0 * oldList.Max();
var newList = oldList.Select(x => x / max);
If you want the lowest number to map to 0 then you'll need something like this:
double min = 1.0 * oldList.Min();
double max = 1.0 * oldList.Max();
var newList = oldList.Select(x => (x - min) / (max - min));
A very easy way to do this is with a logit/logistic transformation. The logit formula is:

y = ln(x / (1 - x))

where x is a number between 0 and 1 and y is unbounded; y can be zero or negative.

So let's go the other way around, so we can transform our unbounded data (y) into values constrained between 0 and 1:

y = ln(x / (1 - x))
exp(y) = exp(ln(x / (1 - x)))  # apply exp() to both sides to get rid of ln()
exp(y) = x / (1 - x)           # exponential rule: exp(ln(z)) = z
x = exp(y) * (1 - x)
x = exp(y) - x * exp(y)
x + x * exp(y) = exp(y)
x * (1 + exp(y)) = exp(y)
x = exp(y) / (1 + exp(y))

This is the logistic (sigmoid) function, and it is WAY easier and simpler than the other suggestions!
I hope that helps!
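As a tiny C# sketch of that final formula (the helper name is mine):

```csharp
using System;

// Inverse logit: maps any real y into the open interval (0, 1)
static double Logistic(double y) => Math.Exp(y) / (1.0 + Math.Exp(y));

Console.WriteLine(Logistic(0.0));  // 0.5
Console.WriteLine(Logistic(5.0));  // ~0.993, approaching 1
Console.WriteLine(Logistic(-5.0)); // ~0.007, approaching 0
```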
Find the max number in the sequence, then divide all the numbers by that max number.
Algorithm:
find maximum and minimum
divide each (element - minimum) by (maximum - minimum)
Note: This will map the maximum to 1.0 ... which however is not the case in your example.
Edit:
var min = list.First(); // assumes a sorted list, else use Min()
var max = list.Last(); // assumes a sorted list, else use Max()
double difference = max - min;
var newList = list.Select( i => Math.Round( (i - min ) / difference, 1 ) );
If you want the lowest to map to 0, and the highest to map to 1, then the following will achieve this:
var input = new double[] { 12, 23, 34, 45, 65 };
var outputList = input.Select(i => (i - input.Min()) / (input.Max() - input.Min()));
foreach (var i in outputList)
{
    Console.WriteLine(i);
}
Assuming you want the highest value to map to 1.0 and the lowest to 0.0, and that your array is ordered from lowest to highest:

int lowest = myArray[0];
int highest = myArray[myArray.Length - 1];
int diff = highest - lowest;
foreach (int item in myArray)
    Console.WriteLine((float)(item - lowest) / diff);