Calculating Aroon Indicator Series - C#

I'm trying to build a class to create Aroon series, but it seems I don't understand the steps well. I'm not sure what purpose the period parameter serves.
Here is my first attempt:
/// <summary>
/// Aroon
/// </summary>
public class Aroon : IndicatorCalculatorBase
{
    public override List<Ohlc> OhlcList { get; set; }
    public int Period { get; set; }

    public Aroon(int period)
    {
        this.Period = period;
    }

    /// <summary>
    /// Aroon up: {((number of periods) - (number of periods since highest high)) / (number of periods)} x 100
    /// Aroon down: {((number of periods) - (number of periods since lowest low)) / (number of periods)} x 100
    /// </summary>
    /// <see cref="http://www.investopedia.com/ask/answers/112814/what-aroon-indicator-formula-and-how-indicator-calculated.asp"/>
    /// <returns></returns>
    public override IIndicatorSerie Calculate()
    {
        AroonSerie aroonSerie = new AroonSerie();
        int indexToProcess = 0;
        while (indexToProcess < this.OhlcList.Count)
        {
            List<Ohlc> tempOhlc = this.OhlcList.Skip(indexToProcess).Take(Period).ToList();
            indexToProcess += tempOhlc.Count;
            for (int i = 0; i < tempOhlc.Count; i++)
            {
                int highestHighIndex = 0, lowestLowIndex = 0;
                double highestHigh = tempOhlc.Min(x => x.High), lowestLow = tempOhlc.Max(x => x.Low);
                for (int j = 0; j < i; j++)
                {
                    if (tempOhlc[j].High > highestHigh)
                    {
                        highestHighIndex = j;
                        highestHigh = tempOhlc[j].High;
                    }
                    if (tempOhlc[j].Low < lowestLow)
                    {
                        lowestLowIndex = j;
                        lowestLow = tempOhlc[j].Low;
                    }
                }
                int up = ((this.Period - (i - highestHighIndex)) / this.Period) * 100;
                aroonSerie.Up.Add(up);
                int down = ((this.Period - (i - lowestLowIndex)) / this.Period) * 100;
                aroonSerie.Down.Add(down);
            }
        }
        return aroonSerie;
    }
}
Has anyone else tried to do this before?
Here is the csv file I use:
https://drive.google.com/file/d/0Bwv_-8Q17wGaRDVCa2FhMWlyRUk/view
But the result sets for Aroon up and down don't match the results of the aroon function in the TTR package for R.
table <- read.csv("table.csv", header = TRUE, sep = ",")
trend <- aroon(table[,c("High", "Low")], n=5)
View(trend)
Screenshot of R result:
Thanks in advance,

@anilca,
Full disclosure: I learned a lot of things answering your question (I knew nothing about the finance domain...). Thank you! It was an interesting experience!
There are several problems in your implementation:
In the expressions
i - highestHighIndex
and
i - lowestLowIndex
the variable i is always less than or equal to highestHighIndex and lowestLowIndex, because the inner loop only scans indices below i. So the expressions
this.Period - (i - highestHighIndex)
and
this.Period - (i - lowestLowIndex)
will return the wrong values most of the time.
Aroon up and down are both percentages, so int is the wrong data type.
Because all the variables in
(this.Period - (i - highestHighIndex)) / this.Period
and
(this.Period - (i - lowestLowIndex)) / this.Period
are integers, the division is integer division and you won't receive the right value.
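To make the integer-division problem concrete, here is a minimal standalone sketch (the values are chosen for illustration only):

```csharp
using System;

class IntegerDivisionDemo
{
    static void Main()
    {
        // With Period = 5 and 2 days since the highest high, Aroon up should be 60.
        int period = 5;
        int daysSinceHigh = 2;

        // Original expression: (3 / 5) is integer division, which truncates to 0.
        int wrong = ((period - daysSinceHigh) / period) * 100;

        // Forcing floating-point arithmetic gives the expected percentage.
        double right = (period - daysSinceHigh) * (100.0 / period);

        Console.WriteLine(wrong); // 0
        Console.WriteLine(right); // 60
    }
}
```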
One more thing: the numbers in your file are sorted from newest to oldest (this affects the results from the R package).
I've implemented the algorithm based on your code (and your data order...):
public class Aroon : IndicatorCalculatorBase
{
    public override List<OhlcSample> OhlcList { get; set; }

    private readonly int _period;
    public int Period
    {
        get { return _period; }
    }

    public Aroon(int period)
    {
        _period = period;
    }

    public override IIndicatorSerie Calculate()
    {
        var aroonSerie = new AroonSerie();
        for (var i = _period; i < OhlcList.Count; i++)
        {
            var aroonUp = CalculateAroonUp(i);
            var aroonDown = CalculateAroonDown(i);
            aroonSerie.Down.Add(aroonDown);
            aroonSerie.Up.Add(aroonUp);
        }
        return aroonSerie;
    }

    private double CalculateAroonUp(int i)
    {
        var maxIndex = FindMax(i - _period, i);
        var up = CalcAroon(i - maxIndex);
        return up;
    }

    private double CalculateAroonDown(int i)
    {
        var minIndex = FindMin(i - _period, i);
        var down = CalcAroon(i - minIndex);
        return down;
    }

    private double CalcAroon(int numOfDays)
    {
        var result = (_period - numOfDays) * ((double)100 / _period);
        return result;
    }

    private int FindMin(int startIndex, int endIndex)
    {
        var min = double.MaxValue;
        var index = startIndex;
        for (var i = startIndex; i <= endIndex; i++)
        {
            if (min < OhlcList[i].Low)
                continue;
            min = OhlcList[i].Low;
            index = i;
        }
        return index;
    }

    private int FindMax(int startIndex, int endIndex)
    {
        var max = double.MinValue;
        var index = startIndex;
        for (var i = startIndex; i <= endIndex; i++)
        {
            if (max > OhlcList[i].High)
                continue;
            max = OhlcList[i].High;
            index = i;
        }
        return index;
    }
}

public abstract class IndicatorCalculatorBase
{
    public abstract List<OhlcSample> OhlcList { get; set; }
    public abstract IIndicatorSerie Calculate();
}

public interface IIndicatorSerie
{
    List<double> Up { get; }
    List<double> Down { get; }
}

internal class AroonSerie : IIndicatorSerie
{
    public List<double> Up { get; private set; }
    public List<double> Down { get; private set; }

    public AroonSerie()
    {
        Up = new List<double>();
        Down = new List<double>();
    }
}

public class OhlcSample
{
    public double High { get; private set; }
    public double Low { get; private set; }

    public OhlcSample(double high, double low)
    {
        High = high;
        Low = low;
    }
}
Use this test method for debugging:
private Aroon _target;

[TestInitialize]
public void TestInit()
{
    _target = new Aroon(5)
    {
        OhlcList = new List<OhlcSample>
        {
            new OhlcSample(166.90, 163.65),
            new OhlcSample(165.00, 163.12),
            new OhlcSample(165.91, 163.21),
            new OhlcSample(167.29, 165.11),
            new OhlcSample(169.99, 166.84),
            new OhlcSample(170.92, 167.90),
            new OhlcSample(168.47, 165.90),
            new OhlcSample(167.75, 165.75),
            new OhlcSample(166.14, 161.89),
            new OhlcSample(164.77, 161.44),
            new OhlcSample(163.19, 161.49),
            new OhlcSample(162.50, 160.95),
            new OhlcSample(163.25, 158.84),
            new OhlcSample(159.20, 157.00),
            new OhlcSample(159.33, 156.14),
            new OhlcSample(160.00, 157.00),
            new OhlcSample(159.35, 158.07),
            new OhlcSample(160.70, 158.55),
            new OhlcSample(160.90, 157.66),
            new OhlcSample(164.38, 158.45),
            new OhlcSample(167.75, 165.70),
            new OhlcSample(168.93, 165.60),
            new OhlcSample(165.73, 164.00),
            new OhlcSample(167.00, 164.66),
            new OhlcSample(169.35, 165.01),
            new OhlcSample(168.12, 164.65),
            new OhlcSample(168.89, 165.79),
            new OhlcSample(168.65, 165.57),
            new OhlcSample(170.85, 166.00),
            new OhlcSample(171.61, 169.10)
        }
    };
}

[TestMethod]
public void JustToHelpYou()
{
    var result = _target.Calculate();
    var expectedUp = new List<double>
    {
        100, 80, 60, 40, 20, 0, 0, 0, 0, 0, 40, 20, 0, 100, 100, 100, 100, 80, 60, 100, 80, 60, 40, 100, 100
    };
    var expectedDown = new List<double>
    {
        20, 0, 0, 100, 100, 80, 100, 100, 100, 100, 80, 60, 40, 20, 0, 0, 40, 20, 0, 0, 40, 20, 0, 40, 20
    };
    Assert.IsTrue(result.Up.SequenceEqual(expectedUp));
    Assert.IsTrue(result.Down.SequenceEqual(expectedDown));
}

Just add implementations of the HighestBarNum and LowestBarNum methods from your code:
public class Aroon
{
    public bool AroonDown
    {
        get;
        set;
    }

    public double Period
    {
        get;
        set;
    }

    public Aroon()
    {
    }

    public IList<double> Execute(IList<double> src)
    {
        if (!this.AroonDown)
        {
            return this.ExecuteUp(src);
        }
        return this.ExecuteDown(src);
    }

    public IList<double> ExecuteDown(IList<double> src)
    {
        double[] period = new double[src.Count];
        for (int i = 1; i < src.Count; i++)
        {
            double num = LowestBarNum(src, i, Period);
            period[i] = 100 * (Period - num) / Period;
        }
        return period;
    }

    public IList<double> ExecuteUp(IList<double> src)
    {
        double[] period = new double[src.Count];
        for (int i = 1; i < src.Count; i++)
        {
            double num = HighestBarNum(src, i, Period);
            period[i] = 100 * (Period - num) / Period;
        }
        return period;
    }
}

class AroonData
{
    public double AroonUp;
    public double AroonDown;
}
Calculation Class:
class AroonCalculationData
{
    public double PeriodHigh;
    public double PeriodLow;

    public double SetAroonUp(List<MarketData> period, double lastHigh)
    {
        /*Reverse the set so we can look at it from the current tick
          on back, and ticksSinceHigh will be set correctly.*/
        period.Reverse();
        int ticksSinceHigh = 0;
        double high = 0.0; //Set to 0 so anything will be higher
        for (int i = 0; i < period.Count; i++)
        {
            if (period[i].high > high)
            {
                high = period[i].high;
                ticksSinceHigh = i;
            }
        }
        /*This bit is for when the high just ticked out of List<MarketData>.
          Including the current tick (i = 0), List<MarketData> period only
          looks back (period - 1) days. This checks whether the last high is
          still in the set; if it's not, and it is still the high for the
          period, then ticksSinceHigh is set to (period).*/
        PeriodHigh = high;
        if (PeriodHigh < lastHigh)
        {
            ticksSinceHigh = period.Count;
        }
        /*Aroon-Up formula*/
        return (double)(period.Count - ticksSinceHigh) / (double)period.Count * 100.0;
    }

    //Aside from looking for lows instead of highs, same as Aroon-Up
    public double SetAroonDown(List<MarketData> period, double lastLow)
    {
        period.Reverse();
        int daysSinceLow = 0;
        double low = double.MaxValue; //Set to max so anything will be lower
        for (int i = 0; i < period.Count; i++)
        {
            if (period[i].low < low)
            {
                low = period[i].low;
                daysSinceLow = i;
            }
        }
        PeriodLow = low;
        if (PeriodLow > lastLow)
        {
            daysSinceLow = period.Count;
        }
        return (double)(period.Count - daysSinceLow) / (double)period.Count * 100.0;
    }
}
Calling code:
public AroonData[] Aroon(List<MarketData> marketData, int period)
{
    AroonCalculationData[] calculationData = new AroonCalculationData[marketData.Count];
    AroonData[] aroon = new AroonData[marketData.Count];
    //Initialize the elements so the arrays can be indexed below
    for (int i = 0; i < marketData.Count; i++)
    {
        calculationData[i] = new AroonCalculationData();
        aroon[i] = new AroonData();
    }
    for (int i = period; i < marketData.Count; i++)
    {
        /*GetRange(i - period + 1, period): add 1 so that the current tick is
          included in the look back. For example, if (period = 10), on the
          first loop (i = 10), (i - period + 1) = 1 and the set will be
          marketData 1 - 10.*/
        /*calculationData[i - 1].PeriodHigh and calculationData[i - 1].PeriodLow
          are for looking back at the previous tick's look-back period high
          and low.*/
        aroon[i].AroonUp = calculationData[i].SetAroonUp(marketData.GetRange(i - period + 1, period), calculationData[i - 1].PeriodHigh);
        aroon[i].AroonDown = calculationData[i].SetAroonDown(marketData.GetRange(i - period + 1, period), calculationData[i - 1].PeriodLow);
    }
    return aroon;
}
Side note:
One problem I had was comparing my data to TD Ameritrade's Aroon, until I figured out their period is really period - 1, so if you're comparing to TD keep that in mind.

Related

C# cannot convert Int to Double

I get an error converting int to double, and my next line, Average[y] = Average[y] + Students[i].Marks;, is not working either. So how can I convert int[] to double[]?
public static double[] Averages(StudentMarks[] Students, int amount, string[] DifferentGroups, int GroupAmount, out double[] Average)
{
Average = new double[Max];
int y = 0;
int CountGroups = 0;
//int sum = 0;
for (int i = 0; i < amount; i++)
{
if (Students[i].Group == DifferentGroups[y])
{
Convert.ToDouble(Students[i].Marks);
//Average[y] = Average[y] + Students[i].Marks;
}
}
return Average;
}
The main:
static void Main(string[] args)
{
int amount;
StudentMarks[] Students;
string[] DifferentGroups;
int GroupAmount;
double[] Average;
ReadData(out Students, out amount);
DifferentGroups = FindDifferentGroups(Students, amount, out DifferentGroups, out GroupAmount);
Average = Averages(Students, amount, DifferentGroups, GroupAmount, out Average);
Class:
public const int Max = 50;
public string Surname { get; set; }
public string Name { get; set; }
public string Group { get; set; }
public int AmountOfMarks { get; set; }
public int[] Marks { get; set; }
So Students[i].Marks is an array.
Convert.ToDouble returns a double - it does not make an integer property or variable a double.
I think you want:
if (Students[i].Group == DifferentGroups[y])
{
double mark = Convert.ToDouble(Students[i].Marks);
Average[y] = Average[y] + mark;
}
or just:
if (Students[i].Group == DifferentGroups[y])
{
Average[y] = Average[y] + Convert.ToDouble(Students[i].Marks);
}
All Convert.ToXXX functions return the new value. Therefore, you should be expecting that and assigning the return value to a local variable.
Something like:
double YourMark = Convert.ToDouble(Students[i].Marks);
So how can I convert int[] to double[]?
By programming. You write code to do it.
var d = new double[Students[i].Marks.Length];
for (int j = 0; j < Students[i].Marks.Length; ++j)
{
d[j] = Convert.ToDouble(Students[i].Marks[j]);
}
Or use LINQ:
var d = Students[i].Marks.Select(n => Convert.ToDouble(n)).ToArray();
Next time a method call won't compile or doesn't work the way you hoped it might, a good first step would be to check the documentation.

Suggestions for non-working Radial Basis Function Neural Network

I'm creating a small C# application to help investigate different designs of Multilayer Perceptron and Radial Basis Function Neural Networks. The MLP is working adequately, but I can't manage to get the RBF Net to work at all.
I've checked and double-checked and triple-checked the algorithms against those available online and in papers and books, and they seem to match. I've also checked a colleague's (working) code and compared it to mine to see if there was anything I'd done wrong or left out, and found nothing.
So I was hoping some extra eyes and opinions might help me root out the problem, as I've run out of ideas of what to try or where to look. If you want to have a look at or download the entire project you can find it at https://bitbucket.org/floofykh/gameai-coursework
I'll also post here the RBF specific code.
This is the Windows Form which allows the user to input the design of the RBF net and train it.
public partial class RBFForm : Form
{
private const double X_MAX = Math.PI * 2;
private const double X_MIN = 0;
private const double INTERVAL = Math.PI / 90d;
private double m_numPlotPoints;
private double m_noiseValue = 0;
public double StopThreshold { get { return m_stopThreshold; } }
private double m_stopThreshold;
private string m_function = "";
private List<double> m_inputs, m_targets;
private List<RadialBasisFunctionData> m_rbfsData;
private RadialBasisFunctionNetwork m_rbf;
private int m_numRBFs = 1;
private double m_rbfWidth = 1d, m_rbfOffset = 0d, m_rbfSeperation = 0d;
private bool m_changed = true;
private const int testCases = 180;
public RBFForm()
{
InitializeComponent();
ChartArea functionArea = m_functionGraph.ChartAreas.Add("function");
functionArea.AxisX.Maximum = X_MAX;
functionArea.AxisX.Minimum = X_MIN;
functionArea.AxisY.Maximum = 1.5;
functionArea.AxisY.Minimum = -1.5;
ChartArea rbfArea = m_rbfGraph.ChartAreas.Add("RBFs");
rbfArea.AxisX.Maximum = X_MAX;
rbfArea.AxisX.Minimum = X_MIN;
rbfArea.AxisY.Maximum = 1;
rbfArea.AxisY.Minimum = 0;
m_functionGraph.Series.Add("Neural Network");
m_functionGraph.Series.Add("Function");
m_functionGraph.Series.Add("Points");
m_rbfGraph.Series.Add("RBFs");
Neuron.LearningRate = ((double)(m_learningRateSelector).Value);
m_numRBFs = ((int)(m_numRBFsInput).Value);
m_rbfOffset = ((double)(m_rbfOffsetController).Value);
m_rbfSeperation = ((double)(m_rbfOffsetController).Value);
m_rbfWidth = ((double)(m_rbfWidthController).Value);
m_rbf = new RadialBasisFunctionNetwork(this);
}
private void InitialiseFunctionGraph()
{
Series func = m_functionGraph.Series.FindByName("Function");
func.Points.Clear();
func.ChartType = SeriesChartType.Line;
func.Color = Color.Green;
func.BorderWidth = 1;
for (double x = X_MIN; x < X_MAX; x += INTERVAL)
{
double y = 0;
switch (m_function)
{
case "Sin":
y = Math.Sin(x);
break;
case "Cos":
y = Math.Cos(x);
break;
};
func.Points.AddXY(x, y);
}
}
private void InitialiseRBFs()
{
m_rbfsData = new List<RadialBasisFunctionData>();
Series rbfs = m_rbfGraph.Series.FindByName("RBFs");
rbfs.Points.Clear();
rbfs.ChartType = SeriesChartType.Line;
rbfs.Color = Color.IndianRed;
rbfs.BorderWidth = 1;
for(int i=0; i<m_numRBFs; i++)
{
double centre = X_MIN + m_rbfOffset + m_rbfSeperation * i;
RadialBasisFunctionData data = new RadialBasisFunctionData();
data.Centre = centre;
data.Width = m_rbfWidth;
m_rbfsData.Add(data);
DrawRBF(centre, m_rbfWidth, rbfs.Points);
}
}
private void DrawRBF(double centre, double width, DataPointCollection points)
{
if(width > 0)
{
IActivationFunction function = new RadialBasisFunction(centre, width);
for (double x = X_MIN; x < X_MAX; x += INTERVAL)
{
double y = function.Function(x);
points.AddXY(x, y);
}
}
}
private void InitialiseInputPoints()
{
m_inputs = new List<double>();
m_targets = new List<double>();
Series points = m_functionGraph.Series.FindByName("Points");
points.Points.Clear();
points.ChartType = SeriesChartType.Point;
points.Color = Color.Blue;
points.BorderWidth = 1;
double interval = 0d;
if (m_numPlotPoints > 1)
interval = (X_MAX - X_MIN) / (m_numPlotPoints - 1);
for (int point = 0; point < m_numPlotPoints; point++)
{
double x = X_MIN + point * interval;
double y = 0;
switch (m_function)
{
case "Sin":
y = Math.Sin(x);
break;
case "Cos":
y = Math.Cos(x);
break;
};
y += (Program.rand.NextDouble() - 0.5d) * 2d * m_noiseValue;
m_targets.Add(y);
m_inputs.Add(x);
points.Points.AddXY(x, y);
}
}
public void SetNumEpochs(int num)
{
m_numEpochLabel.Text = num.ToString();
}
public void SetNumEpochsAsync(int num)
{
try
{
if (m_numEpochLabel.InvokeRequired)
{
m_numEpochLabel.Invoke((MethodInvoker)delegate
{
m_numEpochLabel.Text = num.ToString();
});
}
}
catch (Exception) { }
}
private void m_rbfSeperationController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_rbfSeperation = value;
InitialiseRBFs();
m_changed = true;
}
private void m_numRBFsInput_ValueChanged(object sender, EventArgs e)
{
int value = ((int)((NumericUpDown)sender).Value);
m_numRBFs = value;
InitialiseRBFs();
m_changed = true;
}
private void m_rbfWidthController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_rbfWidth = value;
InitialiseRBFs();
m_changed = true;
}
private void m_rbfOffsetController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_rbfOffset = value;
InitialiseRBFs();
m_changed = true;
}
private void m_learningRateSelector_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
Neuron.LearningRate = value;
m_changed = true;
}
private void m_momentumController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
Neuron.MomentumAlpha = value;
m_changed = true;
}
private void m_thresholdController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_stopThreshold = value;
m_changed = true;
}
private void m_functionSelector_SelectedIndexChanged(object sender, EventArgs e)
{
m_function = ((ComboBox)sender).SelectedItem.ToString();
InitialiseFunctionGraph();
m_changed = true;
}
private void m_plotPointsController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_numPlotPoints = value;
InitialiseInputPoints();
m_changed = true;
}
private void m_noiseController_ValueChanged(object sender, EventArgs e)
{
double value = ((double)((NumericUpDown)sender).Value);
m_noiseValue = value;
InitialiseInputPoints();
m_changed = true;
}
private void m_trainButton_Click(object sender, EventArgs e)
{
if (m_rbf != null)
{
if (RadialBasisFunctionNetwork.Running)
{
RadialBasisFunctionNetwork.Running = false;
}
else
{
if (m_changed)
{
m_rbf.Initialise(1, m_rbfsData, 1);
m_changed = false;
}
RadialBasisFunctionNetwork.ErrorStopThreshold = m_stopThreshold;
List<List<double>> inputPatterns = new List<List<double>>();
List<List<double>> targetPatterns = new List<List<double>>();
for (int i = 0; i < m_inputs.Count; i++)
{
List<double> newInputPattern = new List<double>();
newInputPattern.Add(m_inputs[i]);
List<double> newTargetPattern = new List<double>();
newTargetPattern.Add(m_targets[i]);
inputPatterns.Add(newInputPattern);
targetPatterns.Add(newTargetPattern);
}
m_rbf.Train(inputPatterns, targetPatterns);
}
}
}
public void TestAndPresent()
{
List<double> finalData = new List<double>();
for (double x = X_MIN; x < X_MAX; x += INTERVAL)
{
List<double> input = new List<double>();
input.Add(x);
finalData.AddRange(m_rbf.Test(input));
}
PlotNeuralOutput(finalData);
}
public void TestAndPresentAsync()
{
List<double> finalData = new List<double>();
for (double x = X_MIN; x < X_MAX; x += INTERVAL)
{
List<double> input = new List<double>();
input.Add(x);
finalData.AddRange(m_rbf.Test(input));
}
PlotNeuralOutputAsync(finalData);
}
public void PlotNeuralOutput(List<double> output)
{
Series network = m_functionGraph.Series["Neural Network"];
network.Points.Clear();
network.ChartType = SeriesChartType.Line;
network.Color = Color.Red;
network.BorderWidth = 3;
double x = 0;
for (int i = 0; i < output.Count; i++)
{
network.Points.AddXY(x, output[i]);
x += INTERVAL;
}
}
public void PlotNeuralOutputAsync(List<double> output)
{
try
{
if (m_functionGraph.InvokeRequired)
{
m_functionGraph.Invoke((MethodInvoker)delegate
{
Series network = m_functionGraph.Series["Neural Network"];
network.Points.Clear();
network.ChartType = SeriesChartType.Line;
network.Color = Color.Red;
network.BorderWidth = 3;
double x = 0;
for (int i = 0; i < output.Count; i++)
{
network.Points.AddXY(x, output[i]);
x += INTERVAL;
}
});
}
}
catch (Exception) { }
}
}
Here is the RadialBasisFunctionNetwork class, where most of the RBF algorithm takes place, specifically in FeedForward().
class RadialBasisFunctionNetwork
{
private NeuronLayer m_inputLayer;
private NeuronLayer m_radialFunctions;
private NeuronLayer m_outputLayer;
private int m_numRadialFunctions = 0;
public static bool Running = false;
public static double ErrorStopThreshold {get; set;}
private static int m_epoch = 0;
public static int Epoch { get { return m_epoch; } }
private RBFForm m_RBFForm = null;
public RadialBasisFunctionNetwork(RBFForm RBFForm)
{
m_RBFForm = RBFForm;
m_inputLayer = new NeuronLayer();
m_radialFunctions = new NeuronLayer();
m_outputLayer = new NeuronLayer();
}
public void Initialise(int numInputs, List<RadialBasisFunctionData> radialFunctions, int numOutputs)
{
ErrorStopThreshold = 0d;
m_epoch = 0;
m_numRadialFunctions = radialFunctions.Count;
m_inputLayer.Neurons.Clear();
//Add bias neuron
/*Neuron inputBiasNeuron = new Neuron(1d);
inputBiasNeuron.Initialise(m_numRadialFunctions);
m_inputLayer.Neurons.Add(inputBiasNeuron);*/
for(int i=0; i<numInputs; i++)
{
Neuron newNeuron = new Neuron();
newNeuron.Initialise(m_numRadialFunctions);
m_inputLayer.Neurons.Add(newNeuron);
}
m_outputLayer.Neurons.Clear();
for (int i = 0; i < numOutputs; i++)
{
Neuron newNeuron = new Neuron();
m_outputLayer.Neurons.Add(newNeuron);
}
m_radialFunctions.Neurons.Clear();
//Add bias neuron
/* Neuron outputBiasNeuron = new Neuron(1d);
outputBiasNeuron.Initialise(numOutputs);
outputBiasNeuron.ActivationFunction = new ConstantActivationFunction();
m_radialFunctions.Neurons.Add(outputBiasNeuron);*/
for (int i = 0; i < m_numRadialFunctions; i++)
{
Neuron newNeuron = new Neuron();
newNeuron.Initialise(numOutputs);
newNeuron.ActivationFunction = new RadialBasisFunction(radialFunctions[i].Centre, radialFunctions[i].Width);
m_radialFunctions.Neurons.Add(newNeuron);
}
}
public void Train(List<List<double>> inputs, List<List<double>> targets)
{
Running = true;
BackgroundWorker bw = new BackgroundWorker();
bw.DoWork += new DoWorkEventHandler(
delegate(object o, DoWorkEventArgs args)
{
while (Running)
{
TrainPatterns(inputs, targets);
m_RBFForm.SetNumEpochsAsync(m_epoch);
m_RBFForm.TestAndPresentAsync();
}
});
bw.RunWorkerAsync();
}
private void TrainPatterns(List<List<double>> inputs, List<List<double>> targets)
{
Queue<int> randomIndices = GenRandomNonRepNumbers(inputs.Count, 0, inputs.Count, Program.rand);
bool trained = true;
while(randomIndices.Count > 0)
{
int index = randomIndices.Dequeue();
TrainPattern(inputs[index], targets[index]);
foreach(Neuron neuron in m_outputLayer.Neurons)
{
if (Math.Abs(neuron.Error) > ErrorStopThreshold)
trained = false;
}
}
m_epoch++;
if (trained)
Running = false;
}
public void TrainPattern(List<double> inputs, List<double> targets)
{
InitialisePatternSet(inputs, targets);
FeedForward();
}
public List<double> Test(List<double> inputs)
{
InitialisePatternSet(inputs, null);
FeedForward();
return GetOutput();
}
public void FeedForward()
{
//Feed from input
for(int i=0; i<m_radialFunctions.NumNeurons; i++)
{
Neuron radialFunctionNeuron = m_radialFunctions.Neurons[i];
radialFunctionNeuron.Output = 0d;
for(int j=0; j<m_inputLayer.NumNeurons; j++)
{
Neuron inputNeuron = m_inputLayer.Neurons[j];
radialFunctionNeuron.Output += radialFunctionNeuron.ActivationFunction.Function(inputNeuron.Output);
}
}
//Feed to output
for (int i = 0; i < m_outputLayer.NumNeurons; i++)
{
Neuron outputNeuron = m_outputLayer.Neurons[i];
outputNeuron.Output = 0d;
for (int j = 0; j < m_radialFunctions.NumNeurons; j++)
{
Neuron radialFunctionNeuron = m_radialFunctions.Neurons[j];
outputNeuron.Output += radialFunctionNeuron.Weight(i) * radialFunctionNeuron.Output;
}
outputNeuron.Error = (outputNeuron.Target - outputNeuron.Output);
}
//Update weights
for (int i = 0; i < m_radialFunctions.NumNeurons; i++)
{
Neuron radialFunctionNeuron = m_radialFunctions.Neurons[i];
for (int j = 0; j < m_outputLayer.NumNeurons; j++)
{
Neuron outputNeuron = m_outputLayer.Neurons[j];
if(Math.Abs(outputNeuron.Error) > m_RBFForm.StopThreshold)
radialFunctionNeuron.m_weights[j] += Neuron.LearningRate * outputNeuron.Error * radialFunctionNeuron.Output;
}
}
}
public List<double> GetOutput()
{
List<double> output = new List<double>();
for (int i = 0; i < m_outputLayer.NumNeurons; i++)
{
output.Add(m_outputLayer.Neurons[i].Output);
}
return output;
}
private void InitialisePatternSet(List<double> inputs, List<double> targets)
{
m_inputLayer.SetInputs(inputs, false);
if(targets != null)
{
m_outputLayer.SetTargets(targets);
}
}
private Queue<int> GenRandomNonRepNumbers(int num, int min, int max, Random generator)
{
if (max - min < num)
return null;
Queue<int> numbers = new Queue<int>(num);
for (int i = 0; i < num; i++)
{
int randNum = 0;
do
{
randNum = generator.Next(min, max);
} while (numbers.Contains(randNum));
numbers.Enqueue(randNum);
}
return numbers;
}
}
This is the Radial Basis Function I am using as an activation function
class RadialBasisFunction : IActivationFunction
{
private double m_centre = 0d, m_width = 0d;
public RadialBasisFunction(double centre, double width)
{
m_centre = centre;
m_width = width;
}
double IActivationFunction.Function(double activation)
{
double dist = activation - m_centre;
return Math.Exp(-(dist * dist) / (2 * m_width * m_width));
//return Math.Exp(-Math.Pow(dist / (2 * m_width), 2d));
//return Math.Exp(-Math.Pow(dist, 2d));
}
}
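For reference, the active return statement above is the standard Gaussian RBF, exp(-(x - c)² / (2w²)): it equals 1 when the input is at the centre, is symmetric around the centre, and decays with distance. A quick standalone check of those properties (this mirrors the formula but is not part of the project's code):

```csharp
using System;

class GaussianRbfCheck
{
    // Same formula as the active return statement in RadialBasisFunction.
    static double Rbf(double x, double centre, double width)
    {
        double dist = x - centre;
        return Math.Exp(-(dist * dist) / (2 * width * width));
    }

    static void Main()
    {
        Console.WriteLine(Rbf(3.0, 3.0, 1.0));                        // 1: peak at the centre
        Console.WriteLine(Rbf(2.0, 3.0, 1.0) == Rbf(4.0, 3.0, 1.0));  // True: symmetric about the centre
        Console.WriteLine(Rbf(5.0, 3.0, 1.0) < Rbf(4.0, 3.0, 1.0));   // True: decays with distance
    }
}
```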
The NeuronLayer class is really just a wrapper around a List of Neurons, and isn't entirely necessary anymore, but I've been focussing on getting everything working rather than keeping my code clean and well designed.
class NeuronLayer
{
public int NumNeurons { get { return Neurons.Count; } }
public List<Neuron> Neurons { get; set; }
public NeuronLayer ()
{
Neurons = new List<Neuron>();
}
public void SetInputs(List<double> inputs, bool skipBias)
{
for (int i = 0; i < Neurons.Count; i++)
{
if(skipBias)
{
if (i != 0)
Neurons[i].Input = inputs[i-1];
}
else
{
Neurons[i].Input = inputs[i];
}
}
}
public void SetTargets(List<double> targets)
{
for (int i = 0; i < Neurons.Count; i++)
{
Neurons[i].Target = targets[i];
}
}
}
And finally the Neuron class. This class was made while I was coding the MLP and while I was still trying to figure out exactly how Neural Nets work. So unfortunately a lot of the code in it is specific to MLPs. I hope to change this once I've got everything working and can start cleaning everything and make the application more user-friendly. I'm going to add all the functions for completeness, but I've checked and double checked and I shouldn't be using any of the MLP specific code of Neuron anywhere in my RBF network. The MLP specific stuff is WithinThreshold, and all the functions after Weight(int).
class Neuron
{
private IActivationFunction m_activationFunction = null;
public IActivationFunction ActivationFunction { get { return m_activationFunction; } set { m_activationFunction = value; } }
public double Input { get { return Output; } set { Output = value; } }
public double Output { get; set; }
public double Error { get; set; }
public double Target { get; set; }
private double m_activation = 0d;
public bool WithinThreshold { get { return Math.Abs(Error) < MultilayerPerceptron.ErrorStopThreshold; } }
public static double LearningRate { get; set; }
public static double MomentumAlpha { get; set; }
public List<double> m_weights;
private List<double> m_deltaWeights;
public Neuron()
{
Output = 0d;
m_weights = new List<double>();
m_deltaWeights = new List<double>();
m_activationFunction = new TanHActFunction();
}
public Neuron(double input)
{
Input = input;
Output = input;
m_weights = new List<double>();
m_deltaWeights = new List<double>();
m_activationFunction = new TanHActFunction();
}
public void Initialise(int numWeights)
{
for(int i=0; i<numWeights; i++)
{
m_weights.Add(Program.rand.NextDouble()*2d - 1d);
}
}
public double Weight(int index)
{
if (m_weights != null && m_weights.Count > index)
return m_weights[index];
return 0d;
}
public void Feed(NeuronLayer layer, int neuronIndex)
{
List<Neuron> inputNeurons = layer.Neurons;
m_activation = 0;
for (int j = 0; j < layer.NumNeurons; j++)
{
m_activation += inputNeurons[j].Output * inputNeurons[j].Weight(neuronIndex);
}
Output = m_activationFunction.Function(m_activation);
}
public void CalculateError(NeuronLayer successor, bool outputLayer)
{
if(outputLayer)
{
Error = (Target - Output) * ActivationFunction.FunctionDeriv(Output);
}
else
{
Error = 0d;
for(int i=0; i<successor.NumNeurons; i++)
{
Neuron neuron = successor.Neurons[i];
Error += (neuron.Error * m_weights[i] * ActivationFunction.FunctionDeriv(Output));
}
}
}
public void UpdateWeights(NeuronLayer successor)
{
if (MomentumAlpha != 0)
{
for (int i = 0; i < successor.NumNeurons; i++)
{
var neuron = successor.Neurons[i];
if (m_deltaWeights.Count <= i)
{
double deltaWeight = LearningRate * neuron.Error * Output;
m_weights[i] += deltaWeight;
m_deltaWeights.Add(deltaWeight);
}
else
{
double deltaWeight = /*(1 - MomentumAlpha)*/LearningRate * neuron.Error * Output + MomentumAlpha * m_deltaWeights[i];
m_weights[i] += deltaWeight;
m_deltaWeights[i] = deltaWeight;
}
}
}
else
{
for (int i = 0; i < successor.NumNeurons; i++)
{
var neuron = successor.Neurons[i];
double deltaWeight = LearningRate * neuron.Error * Output;
m_weights[i] += deltaWeight;
}
}
}
}
I hope some extra eyes and opinions help find my problem.
PS If you do download the source code from the repository and try run the application, please be aware that it will break if you don't set all necessary values before training, or if you press the reset button. I should get rid of the reset button, but I haven't yet. Sorry!
I found where the problem was.
I was updating the weights inside the FeedForward method. However, that method is called both when training and testing the network. So I was updating weights when I shouldn't have been, while testing the network.
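One way to make that class of bug impossible is to guard the weight update behind a training flag (or split FeedForward into separate train and evaluate paths). Here is a minimal, self-contained sketch of the idea, with illustrative names rather than the project's actual API:

```csharp
using System;

// Minimal illustration of the fix: the forward pass is shared, but the
// weight update is guarded so that evaluating the network never mutates it.
class TinyNet
{
    public double Weight = 0.5;
    const double LearningRate = 0.1;

    public double Forward(double input, double target, bool train)
    {
        double output = Weight * input;
        if (train) // only learn when explicitly training
            Weight += LearningRate * (target - output) * input;
        return output;
    }
}

class Demo
{
    static void Main()
    {
        var net = new TinyNet();
        double before = net.Weight;
        net.Forward(1.0, 2.0, train: false);       // testing: weight untouched
        Console.WriteLine(net.Weight == before);   // True
        net.Forward(1.0, 2.0, train: true);        // training: weight moves
        Console.WriteLine(net.Weight != before);   // True
    }
}
```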

How could I genericize a range expansion function?

I have the following class:
public class State {
public DateTime Created { get; set; }
public Double A { get; set; }
public Double B { get; set; }
}
Then I have two states:
State start = new State { Created = DateTime.UtcNow, A = 10.2, B = 20.5 };
State end = new State { Created = DateTime.UtcNow.AddDays(40), A = 1, B = 80 };
I would like to create a List between start and end where the values of A and B evolve in a linear way between their start and end values.
I was able to create a List as follows:
IList<DateTime> dates = new Range<DateTime>(start.Created, end.Created).Expand(DateInterval.Day, 1);
Range and Expand are a few helpers I created ...
What is the best way to create the values evolution?
My idea would to do something like:
IList<State> states = new Range<State>( // here "tell" how each property would evolve ...)
UPDATE
Range Class
public class Range<T> where T : IComparable<T> {
private T _minimum;
private T _maximum;
public T Minimum { get { return _minimum; } set { _minimum = value; } }
public T Maximum { get { return _maximum; } set { _maximum = value; } }
public Range(T minimum, T maximum) {
_minimum = minimum;
_maximum = maximum;
} // Range
public Boolean Contains(T value) {
return (Minimum.CompareTo(value) <= 0) && (value.CompareTo(Maximum) <= 0);
} // Contains
public Boolean ContainsRange(Range<T> Range) {
return this.IsValid() && Range.IsValid() && this.Contains(Range.Minimum) && this.Contains(Range.Maximum);
} // ContainsRange
public Boolean IsInsideRange(Range<T> Range) {
return this.IsValid() && Range.IsValid() && Range.Contains(this.Minimum) && Range.Contains(this.Maximum);
} // IsInsideRange
public Boolean IsValid() {
return Minimum.CompareTo(Maximum) <= 0;
} // IsValid
public override String ToString() {
return String.Format("[{0} - {1}]", Minimum, Maximum);
} // ToString
} // Range
A few Range Extensions:
public static IEnumerable<Int32> Expand(this Range<Int32> range, Int32 step = 1) {
Int32 current = range.Minimum;
while (current <= range.Maximum) {
yield return current;
current += step;
}
} // Expand
public static IEnumerable<DateTime> Expand(this Range<DateTime> range, TimeSpan span) {
DateTime current = range.Minimum;
while (current <= range.Maximum) {
yield return current;
current = current.Add(span);
}
} // Expand
var magicNumber = 40;
var start = new State {Created = DateTime.UtcNow, A = 10.2, B = 20.5};
var end = new State {Created = DateTime.UtcNow.AddDays(magicNumber), A = 1, B = 80};
var stateList = new List<State>(magicNumber);
for (var i = 0; i < magicNumber; ++i)
{
var newA = start.A + (end.A - start.A) / magicNumber * (i + 1);
var newB = start.B + (end.B - start.B) / magicNumber * (i + 1);
stateList.Add(new State { Created = DateTime.UtcNow.AddDays(i + 1), A = newA, B = newB });
}
So you have reached the stage of genericization where an injectable "mutation step" method that applies your interpolation becomes the logical next move. Because the constraints on T don't say enough to know how to interpolate between steps, you may also need a factory method to create the Func<T,T> instances.
For example, if T has two properties whose values shift, you can control their rate of change this way.
public class Point : IComparable<Point>
{
public int X { get; set; }
public int Y { get; set; }
// Range<T> requires IComparable<T>; compare by X, then by Y.
public int CompareTo(Point other)
{
return X != other.X ? X.CompareTo(other.X) : Y.CompareTo(other.Y);
}
}
public static IEnumerable<T> Expand<T>(this Range<T> range, Func<T,T> stepFunction)
where T: IComparable<T>
{
var current = range.Minimum;
while (range.Contains(current))
{
yield return current;
current = stepFunction(current);
}
}
// in method
var pointRange = new Range<Point>(
new Point{ X=0,Y=0}, new Point{ X=5, Y=20});
foreach (Point p in pointRange.Expand(q => new Point { X = q.X + 1, Y = q.Y + 4 })) { /* use p */ }
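Tying this back to the original State question: the same step-function idea lets A and B evolve linearly inside the step. Below is a standalone sketch that interpolates without the Range helper, so it is runnable on its own; plugging the same Lerp logic into the Expand(Func&lt;T,T&gt;) overload works the same way, provided State implements IComparable&lt;State&gt; (for example by comparing Created dates).

```csharp
using System;
using System.Collections.Generic;

public class State
{
    public DateTime Created { get; set; }
    public double A { get; set; }
    public double B { get; set; }
}

public static class StateInterpolation
{
    // Linear interpolation between two values at fraction t in [0, 1].
    static double Lerp(double from, double to, double t) => from + (to - from) * t;

    // Build the list of states between start and end, one per day,
    // with A and B evolving linearly between their endpoint values.
    public static List<State> Interpolate(State start, State end)
    {
        int days = (int)(end.Created - start.Created).TotalDays;
        var result = new List<State>();
        for (int i = 0; i <= days; i++)
        {
            double t = days == 0 ? 0 : (double)i / days;
            result.Add(new State
            {
                Created = start.Created.AddDays(i),
                A = Lerp(start.A, end.A, t),
                B = Lerp(start.B, end.B, t),
            });
        }
        return result;
    }
}
```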

Using Neural Network to solve Y = X * X + b type formula

Update 1/6/2014: I've updated the question so that I'm trying to solve a non-linear equation. As many of you pointed out, I didn't need the extra complexity (hidden layer, sigmoid function, etc.) in order to solve a linear problem.
Also, I realize I could probably solve even non-linear problems like this using other means besides neural networks. I'm not trying to write the most efficient code or the least amount of code. This is purely for me to better learn neural networks.
I've created my own implementation of back propagated neural network.
It is working fine when trained to solve simple XOR operations.
However, now I want to adapt and train it to solve formulas of the form Y = X * X + B, but I'm not getting the expected results. After training, the network does not calculate the correct answers. Are neural networks well suited to solving algebra equations like this? I realize my example is trivial; I'm just trying to learn more about neural networks and their capabilities.
My hidden layer is using a sigmoid activation function and my output layer is using an identity function.
If you could analyze my code and point out any errors I'd be grateful.
Here is my full code (C# .NET):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace NeuralNetwork
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Training Network...");
Random r = new Random();
var network = new NeuralNetwork(1, 5, 1);
for (int i = 0; i < 100000; i++)
{
int x = i % 15;
int y = x * x + 10;
network.Train(x);
network.BackPropagate(y);
}
//Below should output 20, but instead outputs garbage
Console.WriteLine("0 * 0 + 10 = " + network.Compute(0)[0]);
//Below should output 110, but instead outputs garbage
Console.WriteLine("10 * 10 + 10 = " + network.Compute(10)[0]);
//Below should output 410, but instead outputs garbage
Console.WriteLine("20 * 20 + 10 = " + network.Compute(20)[0]);
}
}
public class NeuralNetwork
{
public double LearnRate { get; set; }
public double Momentum { get; set; }
public List<Neuron> InputLayer { get; set; }
public List<Neuron> HiddenLayer { get; set; }
public List<Neuron> OutputLayer { get; set; }
static Random random = new Random();
public NeuralNetwork(int inputSize, int hiddenSize, int outputSize)
{
LearnRate = .9;
Momentum = .04;
InputLayer = new List<Neuron>();
HiddenLayer = new List<Neuron>();
OutputLayer = new List<Neuron>();
for (int i = 0; i < inputSize; i++)
InputLayer.Add(new Neuron());
for (int i = 0; i < hiddenSize; i++)
HiddenLayer.Add(new Neuron(InputLayer));
for (int i = 0; i < outputSize; i++)
OutputLayer.Add(new Neuron(HiddenLayer));
}
public void Train(params double[] inputs)
{
int i = 0;
InputLayer.ForEach(a => a.Value = inputs[i++]);
HiddenLayer.ForEach(a => a.CalculateValue());
OutputLayer.ForEach(a => a.CalculateValue());
}
public double[] Compute(params double[] inputs)
{
Train(inputs);
return OutputLayer.Select(a => a.Value).ToArray();
}
public double CalculateError(params double[] targets)
{
int i = 0;
return OutputLayer.Sum(a => Math.Abs(a.CalculateError(targets[i++])));
}
public void BackPropagate(params double[] targets)
{
int i = 0;
OutputLayer.ForEach(a => a.CalculateGradient(targets[i++]));
HiddenLayer.ForEach(a => a.CalculateGradient());
HiddenLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
OutputLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
}
public static double NextRandom()
{
return 2 * random.NextDouble() - 1;
}
public static double SigmoidFunction(double x)
{
if (x < -45.0) return 0.0;
else if (x > 45.0) return 1.0;
return 1.0 / (1.0 + Math.Exp(-x));
}
public static double SigmoidDerivative(double f)
{
return f * (1 - f);
}
public static double HyperTanFunction(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
public static double HyperTanDerivative(double f)
{
return (1 - f) * (1 + f);
}
public static double IdentityFunction(double x)
{
return x;
}
public static double IdentityDerivative()
{
return 1;
}
}
public class Neuron
{
public bool IsInput { get { return InputSynapses.Count == 0; } }
public bool IsHidden { get { return InputSynapses.Count != 0 && OutputSynapses.Count != 0; } }
public bool IsOutput { get { return OutputSynapses.Count == 0; } }
public List<Synapse> InputSynapses { get; set; }
public List<Synapse> OutputSynapses { get; set; }
public double Bias { get; set; }
public double BiasDelta { get; set; }
public double Gradient { get; set; }
public double Value { get; set; }
public Neuron()
{
InputSynapses = new List<Synapse>();
OutputSynapses = new List<Synapse>();
Bias = NeuralNetwork.NextRandom();
}
public Neuron(List<Neuron> inputNeurons) : this()
{
foreach (var inputNeuron in inputNeurons)
{
var synapse = new Synapse(inputNeuron, this);
inputNeuron.OutputSynapses.Add(synapse);
InputSynapses.Add(synapse);
}
}
public virtual double CalculateValue()
{
var d = InputSynapses.Sum(a => a.Weight * a.InputNeuron.Value) + Bias;
return Value = IsHidden ? NeuralNetwork.SigmoidFunction(d) : NeuralNetwork.IdentityFunction(d);
}
public virtual double CalculateDerivative()
{
var d = Value;
return IsHidden ? NeuralNetwork.SigmoidDerivative(d) : NeuralNetwork.IdentityDerivative();
}
public double CalculateError(double target)
{
return target - Value;
}
public double CalculateGradient(double target)
{
return Gradient = CalculateError(target) * CalculateDerivative();
}
public double CalculateGradient()
{
return Gradient = OutputSynapses.Sum(a => a.OutputNeuron.Gradient * a.Weight) * CalculateDerivative();
}
public void UpdateWeights(double learnRate, double momentum)
{
var prevDelta = BiasDelta;
BiasDelta = learnRate * Gradient; // * 1
Bias += BiasDelta + momentum * prevDelta;
foreach (var s in InputSynapses)
{
prevDelta = s.WeightDelta;
s.WeightDelta = learnRate * Gradient * s.InputNeuron.Value;
s.Weight += s.WeightDelta + momentum * prevDelta;
}
}
}
public class Synapse
{
public Neuron InputNeuron { get; set; }
public Neuron OutputNeuron { get; set; }
public double Weight { get; set; }
public double WeightDelta { get; set; }
public Synapse(Neuron inputNeuron, Neuron outputNeuron)
{
InputNeuron = inputNeuron;
OutputNeuron = outputNeuron;
Weight = NeuralNetwork.NextRandom();
}
}
}
You use sigmoid as the output function, which is in the range [0, 1], but your target value is a double in the range [0, MAX_INT]. I think that is the basic reason why you are getting NaN.
I updated your code to normalize the values into the range [0, 1], and I can get output like this, which is what I expect.
I think I am getting close to the truth; I am not sure why this answer is getting voted down.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace NeuralNetwork
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Training Network...");
Random r = new Random();
var network = new NeuralNetwork(1, 3, 1);
for (int k = 0; k < 60; k++)
{
for (int i = 0; i < 1000; i++)
{
double x = i / 1000.0;// r.Next();
double y = 3 * x;
network.Train(x);
network.BackPropagate(y);
}
double output = network.Compute(0.2)[0];
Console.WriteLine(output);
}
//Below should output 10, but instead outputs either a very large number or NaN
/* double output = network.Compute(3)[0];
Console.WriteLine(output);*/
}
}
public class NeuralNetwork
{
public double LearnRate { get; set; }
public double Momentum { get; set; }
public List<Neuron> InputLayer { get; set; }
public List<Neuron> HiddenLayer { get; set; }
public List<Neuron> OutputLayer { get; set; }
static Random random = new Random();
public NeuralNetwork(int inputSize, int hiddenSize, int outputSize)
{
LearnRate = .2;
Momentum = .04;
InputLayer = new List<Neuron>();
HiddenLayer = new List<Neuron>();
OutputLayer = new List<Neuron>();
for (int i = 0; i < inputSize; i++)
InputLayer.Add(new Neuron());
for (int i = 0; i < hiddenSize; i++)
HiddenLayer.Add(new Neuron(InputLayer));
for (int i = 0; i < outputSize; i++)
OutputLayer.Add(new Neuron(HiddenLayer));
}
public void Train(params double[] inputs)
{
int i = 0;
InputLayer.ForEach(a => a.Value = inputs[i++]);
HiddenLayer.ForEach(a => a.CalculateValue());
OutputLayer.ForEach(a => a.CalculateValue());
}
public double[] Compute(params double[] inputs)
{
Train(inputs);
return OutputLayer.Select(a => a.Value).ToArray();
}
public double CalculateError(params double[] targets)
{
int i = 0;
return OutputLayer.Sum(a => Math.Abs(a.CalculateError(targets[i++])));
}
public void BackPropagate(params double[] targets)
{
int i = 0;
OutputLayer.ForEach(a => a.CalculateGradient(targets[i++]));
HiddenLayer.ForEach(a => a.CalculateGradient());
HiddenLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
OutputLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
}
public static double NextRandom()
{
return 2 * random.NextDouble() - 1;
}
public static double SigmoidFunction(double x)
{
if (x < -45.0)
{
return 0.0;
}
else if (x > 45.0)
{
return 1.0;
}
return 1.0 / (1.0 + Math.Exp(-x));
}
public static double SigmoidDerivative(double f)
{
return f * (1 - f);
}
public static double HyperTanFunction(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
public static double HyperTanDerivative(double f)
{
return (1 - f) * (1 + f);
}
public static double IdentityFunction(double x)
{
return x;
}
public static double IdentityDerivative()
{
return 1;
}
}
public class Neuron
{
public bool IsInput { get { return InputSynapses.Count == 0; } }
public bool IsHidden { get { return InputSynapses.Count != 0 && OutputSynapses.Count != 0; } }
public bool IsOutput { get { return OutputSynapses.Count == 0; } }
public List<Synapse> InputSynapses { get; set; }
public List<Synapse> OutputSynapses { get; set; }
public double Bias { get; set; }
public double BiasDelta { get; set; }
public double Gradient { get; set; }
public double Value { get; set; }
public Neuron()
{
InputSynapses = new List<Synapse>();
OutputSynapses = new List<Synapse>();
Bias = NeuralNetwork.NextRandom();
}
public Neuron(List<Neuron> inputNeurons)
: this()
{
foreach (var inputNeuron in inputNeurons)
{
var synapse = new Synapse(inputNeuron, this);
inputNeuron.OutputSynapses.Add(synapse);
InputSynapses.Add(synapse);
}
}
public virtual double CalculateValue()
{
var d = InputSynapses.Sum(a => a.Weight * a.InputNeuron.Value);// + Bias;
return Value = IsHidden ? NeuralNetwork.SigmoidFunction(d) : NeuralNetwork.IdentityFunction(d);
}
public virtual double CalculateDerivative()
{
var d = Value;
return IsHidden ? NeuralNetwork.SigmoidDerivative(d) : NeuralNetwork.IdentityDerivative();
}
public double CalculateError(double target)
{
return target - Value;
}
public double CalculateGradient(double target)
{
return Gradient = CalculateError(target) * CalculateDerivative();
}
public double CalculateGradient()
{
return Gradient = OutputSynapses.Sum(a => a.OutputNeuron.Gradient * a.Weight) * CalculateDerivative();
}
public void UpdateWeights(double learnRate, double momentum)
{
var prevDelta = BiasDelta;
BiasDelta = learnRate * Gradient; // * 1
Bias += BiasDelta + momentum * prevDelta;
foreach (var s in InputSynapses)
{
prevDelta = s.WeightDelta;
s.WeightDelta = learnRate * Gradient * s.InputNeuron.Value;
s.Weight += s.WeightDelta; //;+ momentum * prevDelta;
}
}
}
public class Synapse
{
public Neuron InputNeuron { get; set; }
public Neuron OutputNeuron { get; set; }
public double Weight { get; set; }
public double WeightDelta { get; set; }
public Synapse(Neuron inputNeuron, Neuron outputNeuron)
{
InputNeuron = inputNeuron;
OutputNeuron = outputNeuron;
Weight = NeuralNetwork.NextRandom();
}
}
}
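The normalization this answer relies on can also be factored out into a small helper. A hedged sketch of simple min-max scaling (not part of the posted code): targets are scaled into [0, 1] before training, and predictions are mapped back afterwards.

```csharp
using System;

// Min-max scaling: map [min, max] onto [0, 1] and back.
public static class MinMaxScaler
{
    public static double Scale(double value, double min, double max)
        => (value - min) / (max - min);

    public static double Unscale(double scaled, double min, double max)
        => scaled * (max - min) + min;
}
```

For the original question's targets y = x * x + 10 with x in [0, 20], min = 10 and max = 410, so the network trains on values in [0, 1] and its outputs are unscaled afterwards.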
You really don't need to use a multi-layer network to solve problems of the form ax + b = y. A single-layer perceptron would do the trick.
In fact, for a problem this simple you don't even need to break out the complexity of a real neural network. Check out this blog post:
http://dynamicnotions.blogspot.co.uk/2009/05/linear-regression-in-c.html
I did not analyze your code; it is far too long. But I can give you the answer to the basic question:
Yes, neural networks are well suited to such problems.
In fact, for f : R -> R of the form ax + b = y you should use a single neuron with a linear activation function. No three-layered structure is required; one neuron is enough. If your code fails in such a case, then you have an implementation error, as this is a simple linear regression task solved using gradient descent.
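To illustrate that last point, here is a minimal sketch (my own, not from the question's code) of a single neuron with an identity activation, trained by plain gradient descent on squared error. The weight converges to a and the bias to b:

```csharp
using System;

public static class LinearNeuron
{
    // Fit y = a*x + b with one neuron (weight a, bias b) using
    // per-sample gradient descent on the squared error.
    public static (double a, double b) Fit(double[] xs, double[] ys,
        double learningRate = 0.01, int epochs = 5000)
    {
        double a = 0, b = 0;
        for (int e = 0; e < epochs; e++)
        {
            for (int i = 0; i < xs.Length; i++)
            {
                double error = ys[i] - (a * xs[i] + b); // identity activation
                a += learningRate * error * xs[i];      // gradient step on weight
                b += learningRate * error;              // gradient step on bias
            }
        }
        return (a, b);
    }
}
```

Trained on points from y = 3x + 5 it converges to approximately a = 3, b = 5, which is exactly the linear-regression behavior described above.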

How to use variables of a class from other classes in C#

I need to access variables that are defined in a class.
class testclass
{
public testclass(double[,] values)
{
double[][] rawData = new double[10][];
for (int i = 0; i < 10; i++)
{
rawData[i] = new double[] { values[i, 2], stockvalues[i, 0] }; // if data won't fit into memory, stream through external storage
}
int num = 3;
int max = 30;
}
public int[] checkvalue(double[][] rawData, int num, int maxCount)
{
............
}
}
I have called the constructor using
testclass newk = new testclass(values);
Now how can I call the function checkvalue()? I tried using newk.num and newk.maxCount, but those variables were not recognized.
Just as your class constructor and checkvalue function are public, the properties you wish to access from outside the class must be public too. Put these at the top of your class declaration:
public int num {get; set;}
public int max {get; set;}
Then you can access them via newk.num and newk.max.
EDIT: In response to your second comment, I think you might be a bit confused about how functions and properties interact within the same class. This might help:
class TestClass {
private int _num;
private int _max;
private double[][] _rawData;
public TestClass(double[,] values, int num, int max)
{
_num = num;
_max = max;
_rawData = new double[10][];
for (int i = 0; i < 10; i++)
{
_rawData[i] = new double[] { values[i, 2], stockvalues[i, 0] };
}
}
public int[] CheckValues()
{
//Because _num _max and _rawData exist in TestClass, CheckValues() can access them.
//There's no need to pass them into the function - it already knows about them.
//Do some calculations on _num, _max, _rawData.
return Something;
}
}
Then, do this (for example, I don't know what numbers you're actually using):
double[,] values = new double[10,10];
int num = 3;
int max = 30;
Testclass foo = new Testclass(values, num, max);
int[] results = foo.CheckValues();
I believe this is what you're looking for:
class TestClass {
public int num { get; set; }
public int max { get; set; }
public double[][] rawData;
public TestClass(double[,] values)
{
rawData = new double[10][];
for (int i = 0; i < 10; i++)
{
rawData[i] = new double[] { values[i, 2], stockvalues[i, 0] }; // if data won't fit into memory, stream through external storage
}
this.num = 3;
this.max = 30;
}
public int[] CheckValue()
{............}
}
You'll need a property with a get accessor.
Here is a simplified example based on your code:
class testclass
{
private int _num = 0;
private int _max = 0;
public int Num
{
set { _num = value; }
get { return _num; }
}
public int Max
{
set { _max = value; }
get { return _max; }
}
public testclass()
{
Num = 3;
Max = 30;
}
}
//To access Num or Max from a different class:
testclass test = new testclass();
Console.WriteLine(test.Num);
Hope that helps.
Using your exact same code, except for public testclass(double[,] values), which I changed to public void testfunction(double[,] values):
class testclass
{
public void testfunction(double[,] values)
{
double[][] rawData = new double[10][];
for (int i = 0; i < 10; i++)
{
rawData[i] = new double[] { values[i, 2], stockvalues[i, 0] }; // if data won't fit into memory, stream through external storage
}
int num = 3;
int max = 30;
}
public int[] checkvalue(double[][] rawData, int num, int maxCount)
{
............
}
}
Have you tried calling the checkvalue() function like this?
testclass.checkvalue();
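Note that a call through the class name only works for static members; checkvalue is an instance method, so it has to be called on an object. A minimal illustration of the difference (Counter is a made-up example, not from the question):

```csharp
using System;

// Instance members need an object; static members are called on the type.
class Counter
{
    public int Value { get; private set; }
    public void Increment() { Value++; }                   // instance method
    public static Counter CreateZeroed() => new Counter(); // static method
}

class Demo
{
    static void Main()
    {
        Counter c = Counter.CreateZeroed(); // static: called on the type
        c.Increment();                      // instance: called on the object
        Console.WriteLine(c.Value);         // prints 1
    }
}
```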
