Is a Best-Fit Straight Line the best method for prediction? - C#

I need to predict the next point, based on a given set of sample points on a 2-D coordinate system.
I am using the best-fit straight line method for this prediction.
Please let me know if there is a method better than a best-fit straight line.
My code is below:
public class LineEquation
{
    public double m; // slope
    public double c; // constant in y = mx + c
}

public class Point
{
    public double x;
    public double y;
}

public class BestFitLine
{
    public Point[] points = new Point[7];

    public void InputPoints(Point[] points)
    {
        for (int i = 0; i < points.Length; i++)
        {
            points[i] = new Point();
        }
        points[0].x = 12;
        points[0].y = 13;
        points[1].x = 22;
        points[1].y = 23;
        points[2].x = 32;
        points[2].y = 33;
        points[3].x = 42;
        points[3].y = 23;
        points[4].x = 52;
        points[4].y = 33;
        points[5].x = 62;
        points[5].y = 63;
        points[6].x = 72;
        points[6].y = 63;
    }

    public LineEquation CalculateBestFitLine(Point[] points)
    {
        double constant = 0;
        double slope = 0;
        // Fit a line through every pair of points and average the slopes and intercepts.
        for (int i = 0; i < points.Length - 1; i++)
        {
            for (int j = i + 1; j < points.Length; j++)
            {
                double m = (points[j].y - points[i].y) / (points[j].x - points[i].x);
                double c = points[j].y - (m * points[j].x);
                constant += c;
                slope += m;
            }
        }
        int lineCount = ((points.Length - 1) * points.Length) / 2; // number of point pairs
        slope = slope / lineCount;
        constant = constant / lineCount;

        LineEquation eq = new LineEquation();
        eq.c = constant;
        eq.m = slope;
        return eq;
    }
}
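For reference, the usual least-squares formulation of a best-fit straight line minimizes the squared vertical errors rather than averaging pairwise slopes. A minimal sketch against the classes above (the LeastSquaresFit helper is illustrative, not part of the original code):

public static LineEquation LeastSquaresFit(Point[] points)
{
    int n = points.Length;
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    foreach (Point p in points)
    {
        sumX += p.x;
        sumY += p.y;
        sumXY += p.x * p.y;
        sumXX += p.x * p.x;
    }
    // Standard least-squares slope and intercept.
    double m = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    double c = (sumY - m * sumX) / n;
    return new LineEquation { m = m, c = c };
}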

If your x coordinate is composed of dates, you can rely on generalized additive models (GAMs) with the following components:
- trend
- yearly profile
- weekly profile
- daily profile
GAM models are available in R, so I would advise interfacing your code with R (JRI does this for Java; from C# you would need an equivalent bridge such as R.NET).
Cheers

I think you could consider smoothing algorithms like the exponential moving average to predict near-future data points:
http://en.wikipedia.org/wiki/Exponential_smoothing
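As an illustration of that suggestion, here is a minimal sketch of simple exponential smoothing (the ExponentialSmoothing helper and the alpha parameter are illustrative, not from the question's code); the last smoothed value serves as the one-step-ahead forecast:

public static double[] ExponentialSmoothing(double[] values, double alpha)
{
    // alpha in (0, 1]: higher alpha reacts faster to new data, lower alpha smooths more.
    double[] smoothed = new double[values.Length];
    smoothed[0] = values[0];
    for (int i = 1; i < values.Length; i++)
    {
        smoothed[i] = alpha * values[i] + (1 - alpha) * smoothed[i - 1];
    }
    // The last smoothed value is the one-step-ahead forecast.
    return smoothed;
}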


My neural network works well only with no hidden layers

You may think I am crazy, but I decided to write a neural network from scratch in C# for study purposes. Please be patient; I still have little experience. English is not my first language, so I apologize in advance.
I started with a program for handwritten digit recognition using the MNIST database. I've read through a book about the algorithms involved and wrote this code.
public class NeuralNetwork
{
private List<Matrix<double>> weights = new List<Matrix<double>>();
private List<Vector<double>> biases = new List<Vector<double>>();
private Random random = new Random();
private List<Image> test_data;
private int[] layer_sizes;
public NeuralNetwork(params int[] layers)
{
layer_sizes = layers;
for (int i = 0; i < layers.Length - 1; i++)
{
var weightLayer = Matrix<double>.Build.Dense(layers[i + 1], layers[i], (k, j) => random.NextDouble());
weights.Add(weightLayer);
}
for (int i = 1; i < layers.Length; i++)
{
var biasesLayer = Vector<double>.Build.Dense(layers[i], (k) => random.NextDouble());
biases.Add(biasesLayer);
}
}
public Vector<double> FeedForward(Vector<double> a)
{
for (int i = 0; i < weights.Count; i++)
{
a = Sigmoid(weights[i].Multiply(a) + biases[i]);
}
return Sigmoid(a);
}
public void SGD(ITrainingDataProvider dataProvider, int epochs, int chunk_size, double eta)
{
test_data = new MnistReader().ReadTestData();
Console.WriteLine("SGD algorithm started");
var training_data = dataProvider.ReadTrainingData();
Console.WriteLine("Training data has been read");
Console.WriteLine($"Training data test: {Test(training_data)}%");
Console.WriteLine($"Test data test: {Test(test_data)}%");
for (int epoch = 0; epoch < epochs; epoch++)
{
training_data = training_data.OrderBy(item => random.Next()).ToList();
List<List<Image>> chunks = training_data.ChunkBy(chunk_size);
foreach (List<Image> chunk in chunks)
{
ProcessChunk(chunk, eta);
}
Console.WriteLine($"Epoch: {epoch + 1}/{epochs}");
Console.WriteLine($"Training data test: {Test(training_data)}%");
Console.WriteLine($"Test data test: {Test(test_data)}%");
}
Console.WriteLine("Done!");
}
private double Test(List<Image> data)
{
int count = 0;
foreach (Image im in data)
{
var output = FeedForward(im.DataToVector());
int number = output.MaximumIndex();
if (number == (int)im.Label)
{
count++;
}
}
return (double)count / data.Count * 100;
}
private void ProcessChunk(List<Image> chunk, double eta)
{
Delta[] deltas = new Delta[chunk.Count];
for (int i = 0; i < chunk.Count; i++)
{
Image image = chunk[i];
var input = image.DataToVector();
var desired_output = Vector<double>.Build.Dense(layer_sizes[layer_sizes.Length - 1]);
desired_output[(int)image.Label] = 1;
Delta delta = BackPropagation(input, desired_output);
deltas[i] = delta;
}
Delta sum = deltas[0];
for (int i = 1; i < deltas.Length; i++)
{
sum += deltas[i];
}
Delta average_delta = sum / deltas.Length;
for (int i = 0; i < layer_sizes.Length - 1; i++)
{
weights[i] += average_delta.d_weights[i].Multiply(eta);
biases[i] += average_delta.d_biases[i].Multiply(eta);
}
}
private Delta BackPropagation(Vector<double> input, Vector<double> desired_output)
{
List<Vector<double>> activations = new List<Vector<double>>();
List<Vector<double>> zs = new List<Vector<double>>();
Vector<double> a = input;
activations.Add(input);
for (int i = 0; i < layer_sizes.Length - 1; i++)
{
var z = weights[i].Multiply(a) + biases[i];
zs.Add(z);
a = Sigmoid(z);
activations.Add(a);
}
List<Vector<double>> errors = new List<Vector<double>>();
List<Matrix<double>> delta_weights = new List<Matrix<double>>();
List<Vector<double>> delta_biases = new List<Vector<double>>();
var error = CDerivative(activations[activations.Count - 1], desired_output).HProd(SigmoidDerivative(zs[^1]));
errors.Add(error);
int steps = 0;
for (int i = layer_sizes.Length - 2; i >= 1; i--)
{
var layer_error = weights[i].Transpose().Multiply(errors[steps]).HProd(SigmoidDerivative(zs[i - 1]));
errors.Add(layer_error);
steps++;
}
errors.Reverse();
for (int i = layer_sizes.Length - 1; i >= 1; i--)
{
var delta_layer_weights = (errors[i - 1].ToColumnMatrix() * activations[i - 1].ToRowMatrix()).Multiply(-1);
delta_weights.Add(delta_layer_weights);
var delta_layer_biases = errors[i - 1].Multiply(-1);
delta_biases.Add(delta_layer_biases);
}
delta_biases.Reverse();
delta_weights.Reverse();
return new Delta { d_weights = delta_weights, d_biases = delta_biases };
}
private Vector<double> CDerivative(Vector<double> x, Vector<double> y)
{
return x - y;
}
private Vector<double> Sigmoid(Vector<double> x)
{
for (int i = 0; i < x.Count; i++)
{
x[i] = 1.0 / (1.0 + Math.Exp(-x[i]));
}
return x;
}
private Vector<double> SigmoidDerivative(Vector<double> x)
{
for (int i = 0; i < x.Count; i++)
{
x[i] = Math.Exp(-x[i]) / Math.Pow(1.0 + Math.Exp(-x[i]), 2);
}
return x;
}
}
Delta class: a simple DTO to store the weight and bias changes in a single object.
public class Delta
{
public List<Matrix<double>> d_weights { get; set; }
public List<Vector<double>> d_biases { get; set; }
public static Delta operator +(Delta d1, Delta d2)
{
Delta result = d1;
for (int i = 0; i < d2.d_weights.Count; i++)
{
result.d_weights[i] += d2.d_weights[i];
}
for (int i = 0; i < d2.d_biases.Count; i++)
{
result.d_biases[i] += d2.d_biases[i];
}
return result;
}
public static Delta operator /(Delta d1, double d)
{
Delta result = d1;
for (int i = 0; i < d1.d_weights.Count; i++)
{
result.d_weights[i] /= d;
}
for (int i = 0; i < d1.d_biases.Count; i++)
{
result.d_biases[i] /= d;
}
return result;
}
}
Everything ended up working, except that more complex networks with one or more hidden layers don't show any significant results. At best they reach about 70% accuracy, then the learning curve drops and the accuracy falls back to 20-30%. Typically the accuracy curve looks like a square root function, but in my case it looks more like an upside-down parabola (see the graph of my tries with different numbers of neurons in the first hidden layer).
After a few tries, I found out that without any hidden layers the algorithm works just fine. It learns up to 90% accuracy and the curve never falls. Apparently the bug is somewhere in the back-propagation algorithm: it causes no problems with only input and output layers, but it does when I add a hidden layer.
I have been trying to find the problem for a long time and I hope that someone smarter than me will be able to help.
Thanks in advance!
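One generic way to localize back-propagation bugs is a numerical gradient check: nudge a single weight, recompute the cost, and compare the finite-difference slope with what back-propagation reports. Below is a minimal sketch of such a check; GradientCheck is a hypothetical helper, assumed to live inside NeuralNetwork so it can reach the private members, and it uses the squared-error cost that CDerivative (x - y) implicitly corresponds to.

public void GradientCheck(Vector<double> input, Vector<double> desired_output,
                          int layer, int row, int col, double eps = 1e-5)
{
    // Cost for the current weights: C = 0.5 * ||FeedForward(input) - desired_output||^2
    Func<double> cost = () =>
    {
        var a = FeedForward(input);
        var diff = a - desired_output;
        return 0.5 * diff.DotProduct(diff);
    };
    // Central finite difference with respect to one weight.
    double original = weights[layer][row, col];
    weights[layer][row, col] = original + eps;
    double costPlus = cost();
    weights[layer][row, col] = original - eps;
    double costMinus = cost();
    weights[layer][row, col] = original;
    double numeric = (costPlus - costMinus) / (2 * eps);
    // Analytic value from back-propagation (d_weights carries a -1 factor in the code above).
    Delta delta = BackPropagation(input, desired_output);
    double analytic = -delta.d_weights[layer][row, col];
    Console.WriteLine($"numeric: {numeric:G6}, backprop: {analytic:G6}");
}

If the two numbers disagree for weights in a hidden layer but agree for the output layer, the check points you at the layer where the gradient computation goes wrong.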

Interpolating 2d array of points

I have a 2d array of points (each point stored in my Points struct, which contains only 3 properties: X, Y, Z), with a size of 128x128. I want to interpolate (stretch) this 2d array to a greater size (132x132, for example). So far I have managed to interpolate the X and Y coordinates of each point using linear interpolation (simply copying the coordinates into an array of doubles, interpolating it, and writing it back into the new 2d array of Points). Here is the code for the linear interpolation:
private double[] InterpolateArray(double[] array, int newLength)
{
    double[] result = new double[newLength];
    result[0] = array[0];
    result[newLength - 1] = array[array.Length - 1];
    for (int i = 1; i < newLength - 1; i++)
    {
        // Map the target index back onto the source array and blend the two neighbours.
        double jd = ((double)i * (double)(array.Length - 1) / (double)(newLength - 1));
        int j = (int)jd;
        result[i] = array[j] + (array[j + 1] - array[j]) * (jd - (double)j);
    }
    return result;
}
Problem is that I have no idea how to interpolate Z coordinates of each point. Could that be done same way like X and Y coordinates? Or completely different approach is needed?
Edit: code for my Points struct:
public struct Points
{
public double X { get; set; }
public double Y { get; set; }
public double Z { get; set; }
public Points(double X, double Y, double Z)
{
this.X = X;
this.Y = Y;
this.Z = Z;
}
}
And code which does interpolation (or should do it at least):
public Points[,] Interpolate(Points[,] array, int newWidth, int newHeight)
{
    Points[,] result = new Points[newWidth, newHeight];
    double[] bufferBefore = new double[array.GetLength(0)];
    double[] bufferAfter = new double[newWidth];
    // First pass: interpolate the X coordinates row by row.
    for (int i = 0; i < array.GetLength(1); i++)
    {
        for (int j = 0; j < array.GetLength(0); j++)
        {
            bufferBefore[j] = array[j, i].X;
        }
        bufferAfter = InterpolateArray(bufferBefore, newWidth);
        for (int j = 0; j < newWidth; j++)
        {
            result[j, i].X = bufferAfter[j];
        }
    }
    // Second pass: interpolate the Y coordinates column by column.
    bufferBefore = new double[array.GetLength(1)];
    bufferAfter = new double[newHeight];
    for (int i = 0; i < newWidth; i++)
    {
        for (int j = 0; j < array.GetLength(1); j++)
        {
            bufferBefore[j] = result[i, j].Y;
        }
        bufferAfter = InterpolateArray(bufferBefore, newHeight);
        for (int j = 0; j < newHeight; j++)
        {
            result[i, j].Y = bufferAfter[j];
        }
    }
    return result;
}
I cannot see any way your current code is correct. In the second part you are doing bufferBefore[j] = result[i, j].Y;, but as far as I can see the Y values have not been set anywhere before.
When interpolating, you need to interpolate all the values, so you would need to repeat the interpolation for each of X/Y/Z in each direction.
This can, however, be done a bit more simply by implementing the multiply and add operators for your point struct (see the sketch after the sample below). That would allow you to interpolate all the components at once, with code looking something like this:
public static Vector3 BilinearSample(this Vector3[,] self, in Vector2 point)
{
    var xi = (int)Math.Floor(point.X);
    var xi1 = xi + 1;
    var xf = point.X - xi;
    var yi = (int)Math.Floor(point.Y);
    var yi1 = yi + 1;
    var yf = point.Y - yi;
    // You might need to add checks to ensure the sample is fully inside the image etc.
    var v1 = self[xi, yi];
    var v2 = self[xi1, yi];
    var v3 = self[xi, yi1];
    var v4 = self[xi1, yi1];
    // Do the bilinear sample
    var xfn1 = 1 - xf;
    var v5 = v1 * xfn1 + v2 * xf;
    var v6 = v3 * xfn1 + v4 * xf;
    var result = v5 * (1 - yf) + v6 * yf;
    return result;
}
I'm using System.Numerics instead of your Points struct, but the principle is the same. Note that the above code would be identical if you changed the input/output type to Vector2 or double.
Also note that if the values represent a direction or rotation you need to be careful, since the resulting directions might not be normalized after interpolation, and might even be zero.
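For completeness, here is a sketch of the operator approach applied to the asker's Points struct; the two operators are the only additions, the rest is the original struct:

public struct Points
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Z { get; set; }

    public Points(double X, double Y, double Z)
    {
        this.X = X;
        this.Y = Y;
        this.Z = Z;
    }

    // Component-wise addition and scalar multiplication are all the bilinear blend needs.
    public static Points operator +(Points a, Points b) =>
        new Points(a.X + b.X, a.Y + b.Y, a.Z + b.Z);

    public static Points operator *(Points a, double s) =>
        new Points(a.X * s, a.Y * s, a.Z * s);
}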

How to continuously generate Perlin noise as an infinite map grows?

EDIT: I ended up using the FastNoise library found here: https://github.com/Auburns/FastNoise It has everything under the sun that someone might need to generate many different kinds of noise. As the name suggests, it's also quite fast!
I'm creating a 2D, infinite, procedurally generated world. I load and unload chunks from disk as the player moves. I'm using a cellular automaton to define how the local tiles within each chunk are laid out, but I need noise (Perlin in this case) to define which biome type each chunk will be as new ones are created. I understand how I would translate decimals between 0 and 1 to represent this; my only issue is that the tutorial I followed on creating Perlin noise requires that you pass it a predefined 2d array and returns a noise array of the same size. Because my world grows dynamically, I'm a bit confused about how I would use a fixed-size array to designate new chunk types.
Other answers I've seen don't cover exactly how to handle the infinite part of noise generation. My best guess is that I need to somehow generate or expand the noise with each newly created chunk, although how I do this stumps me.
Here is some code I translated to C# from here: http://devmag.org.za/2009/04/25/perlin-noise/
Admittedly, some of the math here I don't fully understand yet, especially the bit-shift trick (1 << octave) used to get the sample period!
public class PerlinNoiseGenerator
{
public int OctaveCount { get; set; }
public float Persistence { get; set; }
public PerlinNoiseGenerator(int octaveCount, float persistence)
{
this.OctaveCount = octaveCount;
this.Persistence = persistence;
}
public float[,] GenerateWhiteNoise(int width, int height)
{
float[,] noiseFieldToReturn = new float[width, height];
for (int i = 0; i < width; i++)
{
for (int j = 0; j < height; j++)
{
noiseFieldToReturn[i, j] = (float)Game1.Utility.RGenerator.NextDouble() % 1;
}
}
return noiseFieldToReturn;
}
public float[,] SmoothNoiseField(float[,] whiteNoise, int octave)
{
int width = whiteNoise.GetLength(0);
int height = whiteNoise.GetLength(1);
float[,] smoothField = new float[width, height];
int samplePeriod = 1 << octave;
float sampleFrequency = 1.0f / samplePeriod;
for(int i =0; i < width; i++)
{
int samplei0 = (i / samplePeriod) * samplePeriod;
int samplei1 = (samplei0 + samplePeriod) % width;
float horizontalBlend = (i - samplei0) * sampleFrequency;
for(int j =0; j < height; j++)
{
int samplej0 = (j/samplePeriod) * samplePeriod;
int samplej1 = (samplej0 + samplePeriod) % height;
float verticalBlend = (j - samplej0) * sampleFrequency;
float top = LinearInterpolate(whiteNoise[samplei0, samplej0],
whiteNoise[samplei1, samplej0], horizontalBlend);
float bottom = LinearInterpolate(whiteNoise[samplei0, samplej1],
whiteNoise[samplei1, samplej1], horizontalBlend);
smoothField[i, j] = LinearInterpolate(top, bottom, verticalBlend);
}
}
return smoothField;
}
public float[,] GeneratePerlinNoise(float[,] baseNoise, int octaveCount)
{
int width = baseNoise.GetLength(0);
int height = baseNoise.GetLength(1);
float[][,] smoothNoise = new float[octaveCount][,];
float persistence = Persistence; // use the configured persistence rather than a hard-coded 0.5f
for(int i =0; i < octaveCount;i++)
{
smoothNoise[i] = SmoothNoiseField(baseNoise, i);
}
float[,] perlinNoise = new float[width, height];
float amplitude = 1f;
float totalAmplitude = 0.0f;
for (int octave = octaveCount - 1; octave >= 0; octave--) // include octave 0 so the highest-frequency layer is blended in
{
amplitude *= persistence;
totalAmplitude += amplitude;
for(int i =0; i < width;i++)
{
for(int j =0; j < height; j++)
{
perlinNoise[i, j] += smoothNoise[octave][i, j] * amplitude;
}
}
}
for(int i =0; i < width; i++)
{
for(int j =0; j < height; j++)
{
perlinNoise[i, j] /= totalAmplitude;
}
}
return perlinNoise;
}
public float LinearInterpolate(float a, float b, float alpha)
{
return a * (1 - alpha) + alpha * b;
}
}
This code should compile and produce a FIXED-size array of noise.
The main thing you want to make sure of is that the starting random noise is pseudo-random, so that you always get the same "fixed random value" for a given coordinate.
It might be that you'd have to rewrite your random noise generator, using the coordinates as input. I imagine your maps have a random seed number, so you could use this post as a starting point, adding one more factor:
A pseudo-random number generator based on 2 inputs
For a bit of inspiration for your map making, I wrote this article a while back: https://steemit.com/map/#beeheap/create-a-fantasy-grid-map-in-excel
Added after your comment: the only function you'd need to change is the GenerateWhiteNoise one. I don't speak C#, but this is the general idea:
float[,] GenerateWhiteNoise(int x_start_coord, int y_start_coord, int random_seed)
{
    int default_x_width = 100;
    int default_y_height = 50;
    float[,] noiseFieldToReturn = new float[default_x_width, default_y_height];
    for (int x = x_start_coord; x < default_x_width + x_start_coord; x++)
    {
        for (int y = y_start_coord; y < default_y_height + y_start_coord; y++)
        {
            // pseudo_rnd_value must always return the same value for the same (x, y, seed).
            noiseFieldToReturn[x - x_start_coord, y - y_start_coord] =
                (float)pseudo_rnd_value(x, y, random_seed);
        }
    }
    return noiseFieldToReturn;
}
That should give you the pseudo-random values you need to build your map tiles; the only other thing you need is the coordinate of the player (x and y).
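The pseudo_rnd_value call above is a placeholder. One common way to implement it is a deterministic hash of the coordinates and the seed; here is a minimal sketch (the mixing constants are arbitrary large odd numbers, not from the original answer; any decent integer hash works):

static double pseudo_rnd_value(int x, int y, int random_seed)
{
    unchecked
    {
        // Mix the inputs into a single unsigned integer, then map it to [0, 1].
        uint h = (uint)(x * 374761393 + y * 668265263 + random_seed * 974634533);
        h = (h ^ (h >> 13)) * 1274126177u;
        h ^= h >> 16;
        return h / (double)uint.MaxValue;
    }
}

The same (x, y, seed) triple always hashes to the same value, so a chunk regenerated later gets exactly the same noise as when it was first created.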

Laplace Transform And Getting The Frequent Value For Gyro

I'm getting x, y, z values from a gyro sensor. Each axis delivers 10 values per second, so in 3 seconds I have:
x = [30 values]
y = [30 values]
z = [30 values]
Some of the values are very different from the others because of noise. Using a Laplace transform, I need to get the most frequent value from each array.
I need to filter the arrays with the Laplace transform equation and build that equation in C#. How can I apply the equation to the arrays?
Since this kind of filter (Laplace) is very specialized to a certain area of engineering, and needs a person with a good understanding of both the programming language (in this case C#) and the filter itself, I would recommend using an existing source rather than coding the filter yourself.
Here is a snippet of the source code:
class Laplace
{
const int DefaultStehfest = 14;
public delegate double FunctionDelegate(double t);
static double[] V; // Stehfest coefficients
static double ln2; // log of 2
public static void InitStehfest(int N)
{
ln2 = Math.Log(2.0);
int N2 = N / 2;
int NV = 2 * N2;
V = new double[NV];
int sign = 1;
if ((N2 % 2) != 0)
sign = -1;
for (int i = 0; i < NV; i++)
{
int kmin = (i + 2) / 2;
int kmax = i + 1;
if (kmax > N2)
kmax = N2;
V[i] = 0;
sign = -sign;
for (int k = kmin; k <= kmax; k++)
{
V[i] = V[i] + (Math.Pow(k, N2) / Factorial(k)) * (Factorial(2 * k)
/ Factorial(2 * k - i - 1)) / Factorial(N2 - k)
/ Factorial(k - 1) / Factorial(i + 1 - k);
}
V[i] = sign * V[i];
}
}
public static double InverseTransform(FunctionDelegate f, double t)
{
double ln2t = ln2 / t;
double x = 0;
double y = 0;
for (int i = 0; i < V.Length; i++)
{
x += ln2t;
y += V[i] * f(x);
}
return ln2t * y;
}
public static double Factorial(int N)
{
double x = 1;
if (N > 1)
{
for (int i = 2; i <= N; i++)
x = i * x;
}
return x;
}
}
Coded by Walt Fair Jr. on CodeProject.
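A minimal usage sketch for the class above (not from the original answer): numerically invert F(s) = 1/(s + 1), whose exact inverse transform is e^(-t), and compare the two values.

Laplace.InitStehfest(14); // 14 matches the class's DefaultStehfest constant
double t = 2.0;
double approx = Laplace.InverseTransform(s => 1.0 / (s + 1.0), t);
Console.WriteLine($"approximate: {approx:G6}, exact: {Math.Exp(-t):G6}");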

Calculating an exponential growth equation from data points in C#

I am trying to analyse some data using a C# app and need to calculate trend lines. I am aware that there are multiple types of trend line, but for now I am trying to calculate exponential growth, which I am going to use to predict future values. The equation I have been working from is
x(t) = x(0) * ((1+r)^t)
And this is the code that I have written to try and replicate the graph:
public void ExponentialBestFit(List<DateTime> xvalues, List<double> yvalues)
{
//Find the first value of y (The start value) and the first value of x (The start date)
xzero = Convert.ToDouble(xvalues[0].ToOADate());
yzero = yvalues[0];
if (yzero == 0)
yzero += 0.1;
//For every value of x (excluding the 1st value) find the r value:
//
//    r = (y / y[0])^(1/t) - 1
//
// where t    = the time since the start time (time period)
//       y    = the current y value
//       y[0] = the first y value   #IMPROVEMENT - average the first y values in the range
double r = 0;
//c is a count of how many r values are added; it is not equal to the count of all the values
int c = 0;
for (int i = 1; i < xvalues.Count; i++)
{
r += Math.Pow(yvalues[i]/yzero, 1/(Convert.ToDouble(xvalues[i].ToOADate()) - xzero)) - 1;
c++;
}
r = r / c;
}
The data I am passing in covers a period of time, but the increments by which the time increases are not all the same. When I created a chart in Excel it used a different formula:
x(t) = x(0)*(e^kt)
However, I have no idea where the k value comes from. The two lists that I am passing in are Date and Value, and each row in one list corresponds to the same row in the other. The question is: is there a better way of creating the equation and its variables, and are the values I am getting as accurate as they can be for my data?
This is the C# version of the JavaScript provided in the answer below.
// Calculate Exponential Trendline / Growth
IEnumerable<double> Growth(IList<double> knownY, IList<double> knownX, IList<double> newX, bool useConst)
{
// Credits: Ilmari Karonen
// Default values for optional parameters:
if (knownY == null) return null;
if (knownX == null)
{
knownX = new List<double>();
// Default x values are 1..n, matching the JavaScript version below
for (var i = 1; i <= knownY.Count; i++)
knownX.Add(i);
}
if (newX == null)
{
newX = new List<double>();
for (var i = 1; i <= knownY.Count; i++)
newX.Add(i);
}
int n = knownY.Count;
double avg_x = 0.0;
double avg_y = 0.0;
double avg_xy = 0.0;
double avg_xx = 0.0;
double beta = 0.0;
double alpha = 0.0;
for (var i = 0; i < n; i++)
{
var x = knownX[i];
var y = Math.Log(knownY[i]);
avg_x += x;
avg_y += y;
avg_xy += x * y;
avg_xx += x * x;
}
avg_x /= n;
avg_y /= n;
avg_xy /= n;
avg_xx /= n;
// Compute linear regression coefficients:
if (useConst)
{
beta = (avg_xy - avg_x * avg_y) / (avg_xx - avg_x * avg_x);
alpha = avg_y - beta * avg_x;
}
else
{
beta = avg_xy / avg_xx;
alpha = 0.0;
}
// Compute and return result array:
return newX.Select(t => Math.Exp(alpha + beta*t)).ToList();
}
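A quick sanity check of the port above (the data is chosen so the fit is exact, doubling at each step, so the extrapolated values should come out very close to 16 and 32):

var predicted = Growth(
    knownY: new List<double> { 2, 4, 8 },
    knownX: new List<double> { 1, 2, 3 },
    newX: new List<double> { 4, 5 },
    useConst: true);
Console.WriteLine(string.Join(", ", predicted)); // ~16, ~32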
The following JavaScript code should help. I used it to implement Excel's GROWTH function. It's written in JavaScript, but porting it to C# should be very easy. Please note that most of it was written by someone else (credits in the code).
function GROWTH(known_y, known_x, new_x, use_const) {
// Credits: Ilmari Karonen
// Default values for optional parameters:
if (typeof(known_x) == 'undefined') {
known_x = [];
for (var i = 1; i <= known_y.length; i++) known_x.push(i);
}
if (typeof(new_x) == 'undefined') {
new_x = [];
for (var i = 1; i <= known_y.length; i++) new_x.push(i);
}
if (typeof(use_const) == 'undefined') use_const = true;
// Calculate sums over the data:
var n = known_y.length;
var avg_x = 0;
var avg_y = 0;
var avg_xy = 0;
var avg_xx = 0;
for (var i = 0; i < n; i++) {
var x = known_x[i];
var y = Math.log(known_y[i]);
avg_x += x;
avg_y += y;
avg_xy += x*y;
avg_xx += x*x;
}
avg_x /= n;
avg_y /= n;
avg_xy /= n;
avg_xx /= n;
// Compute linear regression coefficients:
if (use_const) {
var beta = (avg_xy - avg_x*avg_y) / (avg_xx - avg_x*avg_x);
var alpha = avg_y - beta*avg_x;
} else {
var beta = avg_xy / avg_xx;
var alpha = 0;
}
// Compute and return result array:
var new_y = [];
for (var i = 0; i < new_x.length; i++) {
new_y.push(Math.exp(alpha + beta * new_x[i]));
}
return new_y;
}
Since x(t) = x(0)*e^(kt), we can take logarithms to get ln x(t) = ln x(0) + kt. This means that to find ln x(0) and k, you can compute the least-squares fit for the data {(t, ln x(t))}. This will give you ln x(t) = b + at, so that k = a and x(0) = e^b.
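As a sketch of that procedure in C# (FitExponential is an illustrative name; it assumes all y values are positive and, matching the question's code, uses OADate for the time axis):

(double k, double x0) FitExponential(List<DateTime> xvalues, List<double> yvalues)
{
    int n = yvalues.Count;
    double t0 = xvalues[0].ToOADate(); // measure time from the first sample
    double sumT = 0, sumL = 0, sumTL = 0, sumTT = 0;
    for (int i = 0; i < n; i++)
    {
        double t = xvalues[i].ToOADate() - t0;
        double l = Math.Log(yvalues[i]); // requires y > 0
        sumT += t;
        sumL += l;
        sumTL += t * l;
        sumTT += t * t;
    }
    // Least-squares fit of ln y = b + a*t, then k = a and x(0) = e^b.
    double a = (n * sumTL - sumT * sumL) / (n * sumTT - sumT * sumT);
    double b = (sumL - a * sumT) / n;
    return (a, Math.Exp(b));
}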
