How could I genericize a range expansion function? - c#

I have the following class:
public class State {
public DateTime Created { get; set; }
public Double A { get; set; }
public Double B { get; set; }
}
Then I have two states:
State start = new State { Created = DateTime.UtcNow, A = 10.2, B = 20.5 };
State end = new State { Created = DateTime.UtcNow.AddDays(40), A = 1, B = 80 };
I would like to create a List between start and end where the values of A and B evolve in a linear way between their start and end values.
I was able to create a List as follows:
IList<DateTime> dates = new Range<DateTime>(start.Created, end.Created).Expand(DateInterval.Day, 1);
Range and Expand are a few helpers I created ...
What is the best way to create the evolution of the values?
My idea would be to do something like:
IList<State> states = new Range<State>( // here "tell" how each property would evolve ...)
UPDATE
Range Class
public class Range<T> where T : IComparable<T> {
private T _minimum;
private T _maximum;
public T Minimum { get { return _minimum; } set { _minimum = value; } }
public T Maximum { get { return _maximum; } set { _maximum = value; } }
public Range(T minimum, T maximum) {
_minimum = minimum;
_maximum = maximum;
} // Range
public Boolean Contains(T value) {
return (Minimum.CompareTo(value) <= 0) && (value.CompareTo(Maximum) <= 0);
} // Contains
public Boolean ContainsRange(Range<T> Range) {
return this.IsValid() && Range.IsValid() && this.Contains(Range.Minimum) && this.Contains(Range.Maximum);
} // ContainsRange
public Boolean IsInsideRange(Range<T> Range) {
return this.IsValid() && Range.IsValid() && Range.Contains(this.Minimum) && Range.Contains(this.Maximum);
} // IsInsideRange
public Boolean IsValid() {
return Minimum.CompareTo(Maximum) <= 0;
} // IsValid
public override String ToString() {
return String.Format("[{0} - {1}]", Minimum, Maximum);
} // ToString
} // Range
A few Range Extensions:
public static IEnumerable<Int32> Expand(this Range<Int32> range, Int32 step = 1) {
Int32 current = range.Minimum;
while (current <= range.Maximum) {
yield return current;
current += step;
}
} // Expand
public static IEnumerable<DateTime> Expand(this Range<DateTime> range, TimeSpan span) {
DateTime current = range.Minimum;
while (current <= range.Maximum) {
yield return current;
current = current.Add(span);
}
} // Expand

var magicNumber = 40;
var start = new State {Created = DateTime.UtcNow, A = 10.2, B = 20.5};
var end = new State {Created = DateTime.UtcNow.AddDays(magicNumber), A = 1, B = 80};
var stateList = new List<State>(magicNumber);
for (var i = 0; i < magicNumber; ++i)
{
var newA = start.A + (end.A - start.A) / magicNumber * (i + 1);
var newB = start.B + (end.B - start.B) / magicNumber * (i + 1);
stateList.Add(new State { Created = DateTime.UtcNow.AddDays(i + 1), A = newA, B = newB });
}

So you have reached the stage of genericization where injecting a "mutation step" method that applies your interpolation becomes the logical move. Because the constraint on T doesn't tell Range how to interpolate between steps, you may need a factory method to create your Func<T,T> instances.
For example, if your T has two properties whose values shift, you can control their rate of change this way:
public class Point : IComparable<Point>
{
public int X { get; set; }
public int Y { get; set; }
// CompareTo is needed only to satisfy Range<T>'s IComparable<T> constraint; ordering by X is arbitrary here.
public int CompareTo(Point other) { return X.CompareTo(other.X); }
}
public static IEnumerable<T> Expand<T>(this Range<T> range, Func<T, T> stepFunction)
where T : IComparable<T>
{
var current = range.Minimum;
while (range.Contains(current))
{
yield return current;
current = stepFunction(current);
}
}
// in method
var pointRange = new Range<Point>(
new Point{ X=0,Y=0}, new Point{ X=5, Y=20});
foreach (Point p in pointRange.Expand(current => new Point { X = current.X + 1, Y = current.Y + 4 }))
{
// use each interpolated point ...
}
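Tying this back to the original State question: here is a minimal sketch, assuming the Range class and the generic Expand extension above plus a using System.Linq directive, of how the step-function approach could produce the linear evolution of A and B. Giving State an IComparable<State> implementation ordered by Created is only an assumption needed to satisfy the Range<T> constraint.

public class State : IComparable<State>
{
    public DateTime Created { get; set; }
    public Double A { get; set; }
    public Double B { get; set; }
    // Ordering by Created is assumed only so State satisfies Range<T>'s IComparable<T> constraint.
    public int CompareTo(State other) { return Created.CompareTo(other.Created); }
}

// In calling code:
var start = new State { Created = DateTime.UtcNow, A = 10.2, B = 20.5 };
var end = new State { Created = start.Created.AddDays(40), A = 1, B = 80 };

int totalDays = (int)(end.Created - start.Created).TotalDays;
double stepA = (end.A - start.A) / totalDays;   // per-day linear increment for A
double stepB = (end.B - start.B) / totalDays;   // per-day linear increment for B

IList<State> states = new Range<State>(start, end)
    .Expand(s => new State
    {
        Created = s.Created.AddDays(1),
        A = s.A + stepA,
        B = s.B + stepB
    })
    .ToList();

Accumulating per-step increments keeps the step function self-contained; if hitting the end values exactly matters, interpolating from an index (as in the loop above) avoids floating-point drift.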

Related

Clusters of Lat, Long that are near to each other

I have a scenario where N items were delivered at N locations (lat, long). I want to find groups of items that were delivered near to each other (within 100 meters). Say, for example, (Item1, Item9, ..., Item499) were delivered such that they had a distance of less than 100 meters between them. Item9 can be more than 100 meters away from Item499, but it will be in that group if it is less than 100 meters from any one of the items in the group.
Find all such groups.
I want someone to point out the right algorithm; I'll then work on it. The preferred language is C#.
Below is the algorithm to solve this problem which I found here
public class ClusterCreator
{
public Dictionary<int, SimpleCluster> GetClusters(List<SimpleCluster> clusterList, int maxDistance)
{
//LOAD DATA
//var clusterList = new List<SimpleCluster>(); // LoadSimpleClusterList();
//var latitudeSensitivity = 3;
//var longitutdeSensitivity = 3;
//CLUSTER THE DATA
return ClusterTheData(clusterList, maxDistance);
}
public Dictionary<int, SimpleCluster> ClusterTheData(List<SimpleCluster> clusterList, int maxDistance)
{
//CLUSTER DICTIONARY
var clusterDictionary = new Dictionary<int, SimpleCluster>();
//Add the first node to the cluster list
if (clusterList.Count > 0)
{
clusterDictionary.Add(clusterList[0].ID, clusterList[0]);
}
//ALGORITHM
for (int i = 1; i < clusterList.Count; i++)
{
SimpleCluster combinedCluster = null;
SimpleCluster oldCluster = null;
foreach (var clusterDict in clusterDictionary)
{
//Check if the current item belongs to any of the existing clusters
if (CheckIfInCluster(clusterDict.Value, clusterList[i], maxDistance))
{
//If it belongs to the cluster then combine them and copy the cluster to oldCluster variable;
combinedCluster = CombineClusters(clusterDict.Value, clusterList[i]);
oldCluster = new SimpleCluster(clusterDict.Value);
}
}
//This check means that no suitable clusters were found to combine, so the current item in the list becomes a new cluster.
if (combinedCluster == null)
{
//Adding new cluster to the cluster dictionary
clusterDictionary.Add(clusterList[i].ID, clusterList[i]);
}
else
{
//We have created a combined cluster. Now it is time to remove the old cluster from the dictionary and instead of it add a new cluster.
clusterDictionary.Remove(oldCluster.ID);
clusterDictionary.Add(combinedCluster.ID, combinedCluster);
}
}
return clusterDictionary;
}
public static SimpleCluster CombineClusters(SimpleCluster home, SimpleCluster imposter)
{
//Deep copy of the home object
var combinedCluster = new SimpleCluster(home);
combinedCluster.LAT_LON_LIST.AddRange(imposter.LAT_LON_LIST);
combinedCluster.NAMES.AddRange(imposter.NAMES);
//Combine the data of both clusters
//combinedCluster.LAT_LON_LIST.AddRange(imposter.LAT_LON_LIST);
//combinedCluster.NAMES.AddRange(imposter.NAMES);
//Recalibrate the new center
combinedCluster.LAT_LON_CENTER = new LAT_LONG
{
LATITUDE = ((home.LAT_LON_CENTER.LATITUDE + imposter.LAT_LON_CENTER.LATITUDE) / 2.0),
LONGITUDE = ((home.LAT_LON_CENTER.LONGITUDE + imposter.LAT_LON_CENTER.LONGITUDE) / 2.0)
};
return combinedCluster;
}
public bool CheckIfInCluster(SimpleCluster home, SimpleCluster imposter, int maxDistance)
{
foreach (var item in home.LAT_LON_LIST)
{
var sCoord = new GeoCoordinate(item.LATITUDE, item.LONGITUDE);
var eCoord = new GeoCoordinate(imposter.LAT_LON_CENTER.LATITUDE, imposter.LAT_LON_CENTER.LONGITUDE);
var distance = sCoord.GetDistanceTo(eCoord);
if (distance <= maxDistance)
return true;
}
return false;
//if ((home.LAT_LON_CENTER.LATITUDE + latitudeSensitivity) > imposter.LAT_LON_CENTER.LATITUDE
// && (home.LAT_LON_CENTER.LATITUDE - latitudeSensitivity) < imposter.LAT_LON_CENTER.LATITUDE
// && (home.LAT_LON_CENTER.LONGITUDE + longitutdeSensitivity) > imposter.LAT_LON_CENTER.LONGITUDE
// && (home.LAT_LON_CENTER.LONGITUDE - longitutdeSensitivity) < imposter.LAT_LON_CENTER.LONGITUDE
// )
//{
// return true;
//}
//return false;
}
}
public class SimpleCluster
{
#region Constructors
public SimpleCluster(int id, string name, double latitude, double longitude)
{
ID = id;
LAT_LON_CENTER = new LAT_LONG
{
LATITUDE = latitude,
LONGITUDE = longitude
};
NAMES = new List<string>();
NAMES.Add(name);
LAT_LON_LIST = new List<LAT_LONG>();
LAT_LON_LIST.Add(LAT_LON_CENTER);
}
public SimpleCluster(SimpleCluster old)
{
ID = old.ID;
LAT_LON_CENTER = new LAT_LONG
{
LATITUDE = old.LAT_LON_CENTER.LATITUDE,
LONGITUDE = old.LAT_LON_CENTER.LONGITUDE
};
LAT_LON_LIST = new List<LAT_LONG>(old.LAT_LON_LIST);
NAMES = new List<string>(old.NAMES);
}
#endregion
public int ID { get; set; }
public List<string> NAMES { get; set; }
public LAT_LONG LAT_LON_CENTER { get; set; }
public List<LAT_LONG> LAT_LON_LIST { get; set; }
}
public class LAT_LONG
{
public double LATITUDE { get; set; }
public double LONGITUDE { get; set; }
}
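A minimal usage sketch, assuming the classes above compile as given; the IDs and coordinates are invented purely for illustration, and GeoCoordinate (used in CheckIfInCluster) comes from the System.Device.Location namespace in System.Device.dll:

// Each delivery starts out as its own single-item cluster.
var items = new List<SimpleCluster>
{
    new SimpleCluster(1, "Item1", 40.71280, -74.00600),
    new SimpleCluster(2, "Item2", 40.71295, -74.00610),  // a few tens of meters from Item1
    new SimpleCluster(3, "Item3", 40.75000, -73.99000)   // kilometers away
};

var creator = new ClusterCreator();
Dictionary<int, SimpleCluster> clusters = creator.GetClusters(items, maxDistance: 100);

foreach (var cluster in clusters.Values)
{
    Console.WriteLine("Cluster " + cluster.ID + ": " + string.Join(", ", cluster.NAMES));
}
// With these sample points, Item1 and Item2 should merge into one cluster and Item3 stays on its own.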

Better ways to keep 2 same variables that is controlled by a external variable as array in a class

I have a class where all the parameters of the class - except one, the detect value - have 2 values that are controlled by the detect value. See the implementation below.
The detect value is controlled by external sources. Based on this value, the get and set methods map to one of 2 variables.
I implemented this as arrays of length 2, but I am curious about other ways. Any recommendations and improvements to the question are appreciated.
I am a new developer on this project. I thought that a singleton-like extension (a "doubleton") might be OK, but I am a bit reluctant to change the structure of the project. If I am more confident about better ways, I will be more courageous about requesting a change in the structure of the project.
My main aim is to find out whether there are better ways to handle these duplicated variables in a class than using arrays.
class Fun
{
int detect;
int[] a = new int[2], b = new int[2], c = new int[2];
void Apply(byte[] ar)
{
detect = ar[0];
if (detect == 1)
{
a[0] = ar[1];
b[0] = ar[2];
c[0] = ar[3];
}
if (detect == 2)
{
a[1] = ar[1];
b[1] = ar[2];
c[1] = ar[3];
}
}
int[] GetAll(int detect)
{
int[] ar = new int[3];
if (detect == 1)
{
ar[0] = a[0];
ar[1] = b[0];
ar[2] = c[0];
}
if (detect == 2)
{
ar[0] = a[1];
ar[1] = b[1];
ar[2] = c[1];
}
return ar;
}
int GetA(int detect)
{
if (detect == 1)
{
return a[0];
}
if (detect == 2)
{
return a[1];
}
return 0; // detect is expected to be 1 or 2
}
void SetA(int detect, int val)
{
if (detect == 1)
{
a[0] = val;
}
if (detect == 2)
{
a[1] = val;
}
}
// GetB, GetC, SetB, SetC are similar
}
You can use a 2 dimensional array and your methods can be much simpler:
class Fun
{
int detect;
int[,] a = new int[2, 3];
void Apply(byte[] ar)
{
detect = ar[0];
for (int i = 0; i < 3; i++)
a[detect - 1, i] = ar[i + 1]; // ar[0] is the detect byte; the values start at ar[1]
}
public int[] GetAll(int detect)
{
int[] ar = new int[3];
for (int i = 0; i < 3; i++)
ar[i] = a[detect - 1, i];
return ar;
}
int GetA(int detect)
{
return a[detect - 1, 0];
}
void SetA(int detect, int val)
{
a[detect - 1, 0] = val;
}
int GetB(int detect)
{
return a[detect - 1, 1];
}
void SetB(int detect, int val)
{
a[detect - 1, 1] = val;
}
int GetC(int detect)
{
return a[detect - 1, 2];
}
void SetC(int detect, int val)
{
a[detect - 1, 2] = val;
}
}
Note: if you provide a value other than 1 or 2 for detect you can expect an exception to be thrown.
A different and probably more sensible approach would be to have two instances of a simpler class:
public class Fun
{
public int A { get; set; }
public int B { get; set; }
public int C { get; set; }
public void Apply(byte[] ar)
{
A = ar[0];
B = ar[1];
C = ar[2];
}
public byte[] GetAll()
{
byte[] ar = new byte[3];
ar[0] = (byte)A;
ar[1] = (byte)B;
ar[2] = (byte)C;
return ar;
}
}
public class NotFun
{
Fun fun1 = new Fun();
Fun fun2 = new Fun();
void Something()
{
fun1.Apply(new byte[3] { 1, 2, 3 });
int a = fun1.A;
}
}

Calculating Aroon Indicator Series

I'm trying to build a class to create Aroon series, but it seems I don't understand the steps well. I'm not sure for what purpose I have to use the period parameter.
Here is my first attempt:
/// <summary>
/// Aroon
/// </summary>
public class Aroon : IndicatorCalculatorBase
{
public override List<Ohlc> OhlcList { get; set; }
public int Period { get; set; }
public Aroon(int period)
{
this.Period = period;
}
/// <summary>
/// Aroon up: {((number of periods) - (number of periods since highest high)) / (number of periods)} x 100
/// Aroon down: {((number of periods) - (number of periods since lowest low)) / (number of periods)} x 100
/// </summary>
/// <see cref="http://www.investopedia.com/ask/answers/112814/what-aroon-indicator-formula-and-how-indicator-calculated.asp"/>
/// <returns></returns>
public override IIndicatorSerie Calculate()
{
AroonSerie aroonSerie = new AroonSerie();
int indexToProcess = 0;
while (indexToProcess < this.OhlcList.Count)
{
List<Ohlc> tempOhlc = this.OhlcList.Skip(indexToProcess).Take(Period).ToList();
indexToProcess += tempOhlc.Count;
for (int i = 0; i < tempOhlc.Count; i++)
{
int highestHighIndex = 0, lowestLowIndex = 0;
double highestHigh = tempOhlc.Min(x => x.High), lowestLow = tempOhlc.Max(x => x.Low);
for (int j = 0; j < i; j++)
{
if (tempOhlc[j].High > highestHigh)
{
highestHighIndex = j;
highestHigh = tempOhlc[j].High;
}
if (tempOhlc[j].Low < lowestLow)
{
lowestLowIndex = j;
lowestLow = tempOhlc[j].Low;
}
}
int up = ((this.Period - (i - highestHighIndex)) / this.Period) * 100;
aroonSerie.Up.Add(up);
int down = ((this.Period - (i - lowestLowIndex)) / this.Period) * 100;
aroonSerie.Down.Add(down);
}
}
return aroonSerie;
}
}
Has anyone else tried to do this before?
Here is the csv file I use:
https://drive.google.com/file/d/0Bwv_-8Q17wGaRDVCa2FhMWlyRUk/view
But the result sets for Aroon up and down don't match the results of the aroon function in the TTR package for R.
table <- read.csv("table.csv", header = TRUE, sep = ",")
trend <- aroon(table[,c("High", "Low")], n=5)
View(trend)
Screenshot of R result:
Thanks in advance,
@anilca,
For full disclosure, I learned a lot of things in order to answer your question (I knew nothing about the FinTech side...). Thank you! It was an interesting experience!
There are several problems in your implementation:
In the statements
i - highestHighIndex
and
i - lowestLowIndex
the variable "i" is less than or equal to highestHighIndex and lowestLowIndex,
so the statements
this.Period - (i - highestHighIndex)
and
this.Period - (i - lowestLowIndex)
will return the wrong values (most of the time...).
Aroon up and down are both percentages, therefore "int" is the wrong data type.
Because all the variables in
(this.Period - (i - highestHighIndex)) / this.Period
and
(this.Period - (i - lowestLowIndex)) / this.Period
are integers, the division truncates and you won't receive the right values.
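To see the truncation with concrete, made-up numbers: with Period = 5 and a high that occurred 2 bars back, the all-integer expression collapses to 0, while performing the division in double gives the expected 60.

int period = 5;
int barsSinceHigh = 2;

// All-integer arithmetic: (5 - 2) / 5 == 0, so the whole expression becomes 0.
int truncated = ((period - barsSinceHigh) / period) * 100;

// Doing the division in double keeps the fraction: 3 * (100.0 / 5) == 60.0.
double correct = (period - barsSinceHigh) * (100.0 / period);

Console.WriteLine(truncated + " vs " + correct);   // prints "0 vs 60"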
One more thing: the rows in your CSV file are sorted from newest to oldest (this affects the R package).
I've implemented the algorithm based on your code (and your data order...):
public class Aroon : IndicatorCalculatorBase
{
public override List<OhlcSample> OhlcList { get; set; }
private readonly int _period;
public int Period
{
get { return _period; }
}
public Aroon(int period)
{
_period = period;
}
public override IIndicatorSerie Calculate()
{
var aroonSerie = new AroonSerie();
for (var i = _period; i < OhlcList.Count; i++)
{
var aroonUp = CalculateAroonUp(i);
var aroonDown = CalculateAroonDown(i);
aroonSerie.Down.Add(aroonDown);
aroonSerie.Up.Add(aroonUp);
}
return aroonSerie;
}
private double CalculateAroonUp(int i)
{
var maxIndex = FindMax(i - _period, i);
var up = CalcAroon(i - maxIndex);
return up;
}
private double CalculateAroonDown(int i)
{
var minIndex = FindMin(i - _period, i);
var down = CalcAroon(i - minIndex);
return down;
}
private double CalcAroon(int numOfDays)
{
var result = ((_period - numOfDays)) * ((double)100 / _period);
return result;
}
private int FindMin(int startIndex, int endIndex)
{
var min = double.MaxValue;
var index = startIndex;
for (var i = startIndex; i <= endIndex; i++)
{
if (min < OhlcList[i].Low)
continue;
min = OhlcList[i].Low;
index = i;
}
return index;
}
private int FindMax(int startIndex, int endIndex)
{
var max = double.MinValue;
var index = startIndex;
for (var i = startIndex; i <= endIndex; i++)
{
if (max > OhlcList[i].High)
continue;
max = OhlcList[i].High;
index = i;
}
return index;
}
}
public abstract class IndicatorCalculatorBase
{
public abstract List<OhlcSample> OhlcList { get; set; }
public abstract IIndicatorSerie Calculate();
}
public interface IIndicatorSerie
{
List<double> Up { get; }
List<double> Down { get; }
}
internal class AroonSerie : IIndicatorSerie
{
public List<double> Up { get; private set; }
public List<double> Down { get; private set; }
public AroonSerie()
{
Up = new List<double>();
Down = new List<double>();
}
}
public class OhlcSample
{
public double High { get; private set; }
public double Low { get; private set; }
public OhlcSample(double high, double low)
{
High = high;
Low = low;
}
}
Use this test method for debugging:
private Aroon _target;
[TestInitialize]
public void TestInit()
{
_target=new Aroon(5)
{
OhlcList = new List<OhlcSample>
{
new OhlcSample(166.90, 163.65),
new OhlcSample(165.00, 163.12),
new OhlcSample(165.91, 163.21),
new OhlcSample(167.29, 165.11),
new OhlcSample(169.99, 166.84),
new OhlcSample(170.92, 167.90),
new OhlcSample(168.47, 165.90),
new OhlcSample(167.75, 165.75),
new OhlcSample(166.14, 161.89),
new OhlcSample(164.77, 161.44),
new OhlcSample(163.19, 161.49),
new OhlcSample(162.50, 160.95),
new OhlcSample(163.25, 158.84),
new OhlcSample(159.20, 157.00),
new OhlcSample(159.33, 156.14),
new OhlcSample(160.00, 157.00),
new OhlcSample(159.35, 158.07),
new OhlcSample(160.70, 158.55),
new OhlcSample(160.90, 157.66),
new OhlcSample(164.38, 158.45),
new OhlcSample(167.75, 165.70),
new OhlcSample(168.93, 165.60),
new OhlcSample(165.73, 164.00),
new OhlcSample(167.00, 164.66),
new OhlcSample(169.35, 165.01),
new OhlcSample(168.12, 164.65),
new OhlcSample(168.89, 165.79),
new OhlcSample(168.65, 165.57),
new OhlcSample(170.85, 166.00),
new OhlcSample(171.61, 169.10)
}
};
}
[TestMethod]
public void JustToHelpYou()
{
var result = _target.Calculate();
var expectedUp = new List<double>()
{
100,80,60,40,20,0,0,0, 0,0,40,20,0,100,100,100,100,80, 60,100,80,60,40,100,100
};
var expectedDown = new List<double>
{
20,0,0,100,100,80,100,100,100,100,80,60,40,20,0,0,40,20,0,0,40,20,0,40,20
};
Assert.IsTrue( result.Up.SequenceEqual(expectedUp));
Assert.IsTrue( result.Down.SequenceEqual(expectedDown));
}
Just add implementations of the HighestBarNum and LowestBarNum methods from your code:
public class Aroon
{
public bool AroonDown
{
get;
set;
}
public double Period
{
get;
set;
}
public Aroon()
{
}
public IList<double> Execute(IList<double> src)
{
if (!this.AroonDown)
{
return this.ExecuteUp(src);
}
return this.ExecuteDown(src);
}
public IList<double> ExecuteDown(IList<double> src)
{
double[] period = new double[src.Count];
for (int i = 1; i < src.Count; i++)
{
double num = LowestBarNum(src, i, Period);
period[i] = 100 * (Period - num) / Period;
}
return period;
}
public IList<double> ExecuteUp(IList<double> src)
{
double[] period = new double[src.Count];
for (int i = 1; i < src.Count; i++)
{
double num = HighestBarNum(src, i, Period);
period[i] = 100 * (Period - num) / Period;
}
return period;
}}
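HighestBarNum and LowestBarNum are referenced above but never shown, so here is only a guess at what they might look like: helpers (they would have to be added as members of the Aroon class) that return how many bars back the highest or lowest value occurred within the last Period entries ending at index i.

// Hypothetical helpers, assuming "bar num" means the number of bars since the extreme value.
private static double HighestBarNum(IList<double> src, int i, double period)
{
    int start = Math.Max(0, i - (int)period + 1);
    int bestIndex = start;
    for (int j = start; j <= i; j++)
    {
        if (src[j] >= src[bestIndex])
            bestIndex = j;   // ties go to the most recent bar
    }
    return i - bestIndex;
}

private static double LowestBarNum(IList<double> src, int i, double period)
{
    int start = Math.Max(0, i - (int)period + 1);
    int bestIndex = start;
    for (int j = start; j <= i; j++)
    {
        if (src[j] <= src[bestIndex])
            bestIndex = j;
    }
    return i - bestIndex;
}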
class AroonData
{
public double AroonUp;
public double AroonDown;
}
Calculation Class:
class AroonCalculationData
{
public double PeriodHigh;
public double PeriodLow;
public double SetAroonUp(List<MarketData> period, double lastHigh)
{
/*reverse the set so we can look at it from the current tick
on back, and ticksSinceHigh will set correctly*/
period.Reverse();
int ticksSinceHigh = 0;
double high = 0.0;//Set 0 so anything will be higher
for (int i = 0; i < period.Count; i++)
{
if (period[i].high > high)
{
high = period[i].high;
ticksSinceHigh = i;
}
}
/*This bit is for when the high just ticked out
of List<MarketData>.
Including the current tick (i = 0), List<MarketData> period
only looks back (period - 1) days.
This checks whether the last high is still in the set. If it's
not, and it is still the high for the period, then ticksSinceHigh
is set to (period).*/
PeriodHigh = high;
if (PeriodHigh < lastHigh)
{
ticksSinceHigh = period.Count;
}
/*Aroon-Up Formula*/
return (double)(period.Count - ticksSinceHigh ) / (double)period.Count * 100.0;
}
//ASIDE FROM LOOKING FOR LOWS INSTEAD OF HIGHS, SAME AS AROON-UP
public double SetAroonDown(List<MarketData> period, double lastLow)
{
period.Reverse();
int daysSinceLow = 0;
double low = double.MaxValue;//Set max so anything will be lower
for (int i = 0; i < period.Count; i++)
{
if (period[i].low < low)
{
low = period[i].low;
daysSinceLow = i;
}
}
PeriodLow = low;
if (PeriodLow > lastLow)
{
daysSinceLow = period.Count;
}
return (double)(period.Count - daysSinceLow) / (double)period.Count * 100.0;
}
}
Calling code:
public AroonData[] Aroon(List<MarketData> marketData, int period)
{
AroonCalculationData[] calculationData = new AroonCalculationData[marketData.Count];
AroonData[] aroon = new AroonData[marketData.Count];
// Instantiate every element up front so calculationData[i - 1] and aroon[i] are never null.
for (int i = 0; i < marketData.Count; i++)
{
calculationData[i] = new AroonCalculationData();
aroon[i] = new AroonData();
}
for (int i = period; i < marketData.Count; i++)
{
/*GetRange(i - period + 1, period): add 1 so that the current tick is
included in the look back.
For example, if (period = 10), then on the first loop (i = 10), (i -
period + 1) = 1 and the set will be marketData 1 - 10.*/
/*calculationData[i - 1].PeriodHigh and calculationData[i - 1].PeriodLow
are for looking back at the last tick's look-back period high and
low*/
aroon[i].AroonUp = calculationData[i].SetAroonUp(marketData.GetRange(i - period + 1, period), calculationData[i - 1].PeriodHigh);
aroon[i].AroonDown = calculationData[i].SetAroonDown(marketData.GetRange(i - period + 1, period), calculationData[i - 1].PeriodLow);
}
return aroon;
}
Side note:
One problem I had was comparing my data to TD Ameritrade's Aroon, until I figured out their period is really period - 1, so if you're comparing to TD keep that in mind.

Using Neural Network to solve Y = X * X + b type formula

Update 1/6/2014: I've updated the question so that I'm trying to solve a non-linear equation. As many of you pointed out I didn't need the extra complexity (hidden-layer, sigmoid function, etc) in order to solve a non-linear problem.
Also, I realize I could probably solve even non-linear problems like this using other means besides neural networks. I'm not trying to write the most efficient code or the least amount of code. This is purely for me to better learn neural networks.
I've created my own implementation of back propagated neural network.
It is working fine when trained to solve simple XOR operations.
However, now I want to adapt and train it to solve Y = X * X + B type formulas, but I'm not getting the expected results. After training, the network does not calculate the correct answers. Are neural networks well-suited for solving algebra equations like this? I realize my example is trivial; I'm just trying to learn more about neural networks and their capabilities.
My hidden layer is using a sigmoid activation function and my output layer is using an identity function.
If you could analyze my code and point out any errors I'd be grateful.
Here is my full code (C# .NET):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace NeuralNetwork
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Training Network...");
Random r = new Random();
var network = new NeuralNetwork(1, 5, 1);
for (int i = 0; i < 100000; i++)
{
int x = i % 15;
int y = x * x + 10;
network.Train(x);
network.BackPropagate(y);
}
//Below should output 20, but instead outputs garbage
Console.WriteLine("0 * 0 + 10 = " + network.Compute(0)[0]);
//Below should output 110, but instead outputs garbage
Console.WriteLine("10 * 10 + 10 = " + network.Compute(10)[0]);
//Below should output 410, but instead outputs garbage
Console.WriteLine("20 * 20 + 10 = " + network.Compute(20)[0]);
}
}
public class NeuralNetwork
{
public double LearnRate { get; set; }
public double Momentum { get; set; }
public List<Neuron> InputLayer { get; set; }
public List<Neuron> HiddenLayer { get; set; }
public List<Neuron> OutputLayer { get; set; }
static Random random = new Random();
public NeuralNetwork(int inputSize, int hiddenSize, int outputSize)
{
LearnRate = .9;
Momentum = .04;
InputLayer = new List<Neuron>();
HiddenLayer = new List<Neuron>();
OutputLayer = new List<Neuron>();
for (int i = 0; i < inputSize; i++)
InputLayer.Add(new Neuron());
for (int i = 0; i < hiddenSize; i++)
HiddenLayer.Add(new Neuron(InputLayer));
for (int i = 0; i < outputSize; i++)
OutputLayer.Add(new Neuron(HiddenLayer));
}
public void Train(params double[] inputs)
{
int i = 0;
InputLayer.ForEach(a => a.Value = inputs[i++]);
HiddenLayer.ForEach(a => a.CalculateValue());
OutputLayer.ForEach(a => a.CalculateValue());
}
public double[] Compute(params double[] inputs)
{
Train(inputs);
return OutputLayer.Select(a => a.Value).ToArray();
}
public double CalculateError(params double[] targets)
{
int i = 0;
return OutputLayer.Sum(a => Math.Abs(a.CalculateError(targets[i++])));
}
public void BackPropagate(params double[] targets)
{
int i = 0;
OutputLayer.ForEach(a => a.CalculateGradient(targets[i++]));
HiddenLayer.ForEach(a => a.CalculateGradient());
HiddenLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
OutputLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
}
public static double NextRandom()
{
return 2 * random.NextDouble() - 1;
}
public static double SigmoidFunction(double x)
{
if (x < -45.0) return 0.0;
else if (x > 45.0) return 1.0;
return 1.0 / (1.0 + Math.Exp(-x));
}
public static double SigmoidDerivative(double f)
{
return f * (1 - f);
}
public static double HyperTanFunction(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
public static double HyperTanDerivative(double f)
{
return (1 - f) * (1 + f);
}
public static double IdentityFunction(double x)
{
return x;
}
public static double IdentityDerivative()
{
return 1;
}
}
public class Neuron
{
public bool IsInput { get { return InputSynapses.Count == 0; } }
public bool IsHidden { get { return InputSynapses.Count != 0 && OutputSynapses.Count != 0; } }
public bool IsOutput { get { return OutputSynapses.Count == 0; } }
public List<Synapse> InputSynapses { get; set; }
public List<Synapse> OutputSynapses { get; set; }
public double Bias { get; set; }
public double BiasDelta { get; set; }
public double Gradient { get; set; }
public double Value { get; set; }
public Neuron()
{
InputSynapses = new List<Synapse>();
OutputSynapses = new List<Synapse>();
Bias = NeuralNetwork.NextRandom();
}
public Neuron(List<Neuron> inputNeurons) : this()
{
foreach (var inputNeuron in inputNeurons)
{
var synapse = new Synapse(inputNeuron, this);
inputNeuron.OutputSynapses.Add(synapse);
InputSynapses.Add(synapse);
}
}
public virtual double CalculateValue()
{
var d = InputSynapses.Sum(a => a.Weight * a.InputNeuron.Value) + Bias;
return Value = IsHidden ? NeuralNetwork.SigmoidFunction(d) : NeuralNetwork.IdentityFunction(d);
}
public virtual double CalculateDerivative()
{
var d = Value;
return IsHidden ? NeuralNetwork.SigmoidDerivative(d) : NeuralNetwork.IdentityDerivative();
}
public double CalculateError(double target)
{
return target - Value;
}
public double CalculateGradient(double target)
{
return Gradient = CalculateError(target) * CalculateDerivative();
}
public double CalculateGradient()
{
return Gradient = OutputSynapses.Sum(a => a.OutputNeuron.Gradient * a.Weight) * CalculateDerivative();
}
public void UpdateWeights(double learnRate, double momentum)
{
var prevDelta = BiasDelta;
BiasDelta = learnRate * Gradient; // * 1
Bias += BiasDelta + momentum * prevDelta;
foreach (var s in InputSynapses)
{
prevDelta = s.WeightDelta;
s.WeightDelta = learnRate * Gradient * s.InputNeuron.Value;
s.Weight += s.WeightDelta + momentum * prevDelta;
}
}
}
public class Synapse
{
public Neuron InputNeuron { get; set; }
public Neuron OutputNeuron { get; set; }
public double Weight { get; set; }
public double WeightDelta { get; set; }
public Synapse(Neuron inputNeuron, Neuron outputNeuron)
{
InputNeuron = inputNeuron;
OutputNeuron = outputNeuron;
Weight = NeuralNetwork.NextRandom();
}
}
}
You use sigmoid as the output function, which is in the range [0, 1],
but your target value is a double whose range is [0, MAX_INT]. I think that is the basic reason why you are getting NaN.
I updated your code and tried to normalize the values into the range [0, 1],
and I can get output like this, which is what I expect.
I think I am getting close to the truth; I am not sure why this answer is getting voted down.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace NeuralNetwork
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Training Network...");
Random r = new Random();
var network = new NeuralNetwork(1, 3, 1);
for (int k = 0; k < 60; k++)
{
for (int i = 0; i < 1000; i++)
{
double x = i / 1000.0;// r.Next();
double y = 3 * x;
network.Train(x);
network.BackPropagate(y);
}
double output = network.Compute(0.2)[0];
Console.WriteLine(output);
}
//Below should output 10, but instead outputs either a very large number or NaN
/* double output = network.Compute(3)[0];
Console.WriteLine(output);*/
}
}
public class NeuralNetwork
{
public double LearnRate { get; set; }
public double Momentum { get; set; }
public List<Neuron> InputLayer { get; set; }
public List<Neuron> HiddenLayer { get; set; }
public List<Neuron> OutputLayer { get; set; }
static Random random = new Random();
public NeuralNetwork(int inputSize, int hiddenSize, int outputSize)
{
LearnRate = .2;
Momentum = .04;
InputLayer = new List<Neuron>();
HiddenLayer = new List<Neuron>();
OutputLayer = new List<Neuron>();
for (int i = 0; i < inputSize; i++)
InputLayer.Add(new Neuron());
for (int i = 0; i < hiddenSize; i++)
HiddenLayer.Add(new Neuron(InputLayer));
for (int i = 0; i < outputSize; i++)
OutputLayer.Add(new Neuron(HiddenLayer));
}
public void Train(params double[] inputs)
{
int i = 0;
InputLayer.ForEach(a => a.Value = inputs[i++]);
HiddenLayer.ForEach(a => a.CalculateValue());
OutputLayer.ForEach(a => a.CalculateValue());
}
public double[] Compute(params double[] inputs)
{
Train(inputs);
return OutputLayer.Select(a => a.Value).ToArray();
}
public double CalculateError(params double[] targets)
{
int i = 0;
return OutputLayer.Sum(a => Math.Abs(a.CalculateError(targets[i++])));
}
public void BackPropagate(params double[] targets)
{
int i = 0;
OutputLayer.ForEach(a => a.CalculateGradient(targets[i++]));
HiddenLayer.ForEach(a => a.CalculateGradient());
HiddenLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
OutputLayer.ForEach(a => a.UpdateWeights(LearnRate, Momentum));
}
public static double NextRandom()
{
return 2 * random.NextDouble() - 1;
}
public static double SigmoidFunction(double x)
{
if (x < -45.0)
{
return 0.0;
}
else if (x > 45.0)
{
return 1.0;
}
return 1.0 / (1.0 + Math.Exp(-x));
}
public static double SigmoidDerivative(double f)
{
return f * (1 - f);
}
public static double HyperTanFunction(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
public static double HyperTanDerivative(double f)
{
return (1 - f) * (1 + f);
}
public static double IdentityFunction(double x)
{
return x;
}
public static double IdentityDerivative()
{
return 1;
}
}
public class Neuron
{
public bool IsInput { get { return InputSynapses.Count == 0; } }
public bool IsHidden { get { return InputSynapses.Count != 0 && OutputSynapses.Count != 0; } }
public bool IsOutput { get { return OutputSynapses.Count == 0; } }
public List<Synapse> InputSynapses { get; set; }
public List<Synapse> OutputSynapses { get; set; }
public double Bias { get; set; }
public double BiasDelta { get; set; }
public double Gradient { get; set; }
public double Value { get; set; }
public Neuron()
{
InputSynapses = new List<Synapse>();
OutputSynapses = new List<Synapse>();
Bias = NeuralNetwork.NextRandom();
}
public Neuron(List<Neuron> inputNeurons)
: this()
{
foreach (var inputNeuron in inputNeurons)
{
var synapse = new Synapse(inputNeuron, this);
inputNeuron.OutputSynapses.Add(synapse);
InputSynapses.Add(synapse);
}
}
public virtual double CalculateValue()
{
var d = InputSynapses.Sum(a => a.Weight * a.InputNeuron.Value);// + Bias;
return Value = IsHidden ? NeuralNetwork.SigmoidFunction(d) : NeuralNetwork.IdentityFunction(d);
}
public virtual double CalculateDerivative()
{
var d = Value;
return IsHidden ? NeuralNetwork.SigmoidDerivative(d) : NeuralNetwork.IdentityDerivative();
}
public double CalculateError(double target)
{
return target - Value;
}
public double CalculateGradient(double target)
{
return Gradient = CalculateError(target) * CalculateDerivative();
}
public double CalculateGradient()
{
return Gradient = OutputSynapses.Sum(a => a.OutputNeuron.Gradient * a.Weight) * CalculateDerivative();
}
public void UpdateWeights(double learnRate, double momentum)
{
var prevDelta = BiasDelta;
BiasDelta = learnRate * Gradient; // * 1
Bias += BiasDelta + momentum * prevDelta;
foreach (var s in InputSynapses)
{
prevDelta = s.WeightDelta;
s.WeightDelta = learnRate * Gradient * s.InputNeuron.Value;
s.Weight += s.WeightDelta; //;+ momentum * prevDelta;
}
}
}
public class Synapse
{
public Neuron InputNeuron { get; set; }
public Neuron OutputNeuron { get; set; }
public double Weight { get; set; }
public double WeightDelta { get; set; }
public Synapse(Neuron inputNeuron, Neuron outputNeuron)
{
InputNeuron = inputNeuron;
OutputNeuron = outputNeuron;
Weight = NeuralNetwork.NextRandom();
}
}
}
You really don't need to use a multi-layer network to solve problems of ax + b = y. A single layer perceptron would do the trick.
In fact, for a problem this simple you don't even need to break out the complexity of a real neural network. Check out this blog post:
http://dynamicnotions.blogspot.co.uk/2009/05/linear-regression-in-c.html
I did not analyze your code, it is far too long. But I can give you the answer for the basic question:
Yes, neural networks are well suited for such problems.
In fact, for a function f : R -> R of the form ax + b = y you should use one neuron with a linear activation function. No three-layered structure is required; just one neuron is enough. If your code fails in such a case, then you have an implementation error, as this is a simple linear regression task solved using gradient descent.
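To make that last point concrete, here is a minimal sketch, independent of the code above, of a single neuron with an identity activation trained by stochastic gradient descent on y = 3x + 10; the learning rate, seed, and input range are arbitrary choices for the illustration.

using System;

class SingleNeuronRegression
{
    static void Main()
    {
        var rng = new Random(0);
        double w = rng.NextDouble(), b = rng.NextDouble();
        const double learnRate = 0.01;

        // Train on y = 3x + 10; inputs are kept small so no normalization is needed.
        for (int step = 0; step < 10000; step++)
        {
            double x = rng.NextDouble() * 10;     // x in [0, 10)
            double target = 3 * x + 10;
            double output = w * x + b;            // identity (linear) activation
            double error = target - output;
            w += learnRate * error * x;           // gradient step for the weight
            b += learnRate * error;               // gradient step for the bias
        }

        Console.WriteLine("w ~ {0:F2}, b ~ {1:F2}", w, b);   // expect roughly 3 and 10
        Console.WriteLine("f(4) ~ {0:F2}", w * 4 + b);       // expect roughly 22
    }
}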

Compare properties of two objects fast c#

I have two objects of same type with different values:
public class Itemi
{
public Itemi()
{
}
public int Prop1Min { get; set; }
public int Prop1Max { get; set; }
public int Prop2Min { get; set; }
public int Prop2Max { get; set; }
public int Prop3Min { get; set; }
public int Prop3Max { get; set; }
...................................
public int Prop25Min { get; set; }
public int Prop25Max { get; set; }
}
Now I instantiate two objects of this type and add some values to their properties.
Itemi myItem1 = new Itemi();
myItem1.Prop1Min = 1;
myItem1.Prop1Max = 4;
myItem1.Prop2Min = 2;
myItem1.Prop2Max = 4;
myItem1.Prop3Min = -1;
myItem1.Prop3Max = 5;
.............................
myItem1.Prop25Min = 1;
myItem1.Prop25Max = 5;
Itemi myItem2 = new Itemi();
myItem2.Prop1Min = 1;
myItem2.Prop1Max = 5;
myItem2.Prop2Min = -10;
myItem2.Prop2Max = 3;
myItem2.Prop3Min = 0;
myItem2.Prop3Max = 2;
................................
myItem2.Prop25Min = 3;
myItem2.Prop25Max = 6;
What is the best and fastest way to do this comparison:
take each property from myItem1 and check if the values of Prop1-25 Min and Max are within the range of myItem2's Prop1-25 Min and Max values?
Example:
myItem1.Prop1Min = 1
myItem1.Prop1Max = 4
myItem2.Prop1Min = 1
myItem2.Prop1Max = 5
This is true because myItem1's Prop1 min and max are within the range of myItem2's min and max.
The condition should be AND across all properties, so in the end, after we check all 25 properties, we return true only if all of them are within the range of the second object.
Is there a fast way to do this using LINQ or some other approach, rather than the traditional if-else?
I would refactor the properties to be more along the lines of:
public class Item
{
public List<Range> Ranges { get; set; }
}
public class Range
{
public int Min { get; set; }
public int Max { get; set; }
}
Then your comparison method could be:
if (myItem1.Ranges.Count != myItem2.Ranges.Count)
{
return false;
}
for (int i = 0; i < myItem1.Ranges.Count; i++)
{
if (myItem1.Ranges[i].Min < myItem2.Ranges[i].Min ||
myItem1.Ranges[i].Max > myItem2.Ranges[i].Max)
{
return false;
}
}
return true;
Otherwise you will have to use Reflection, which is anything but fast.
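For what it's worth, here is a rough sketch of what a reflection-based check could look like against the original data model. The helper name IsContainedIn and the reliance on the PropNMin / PropNMax naming convention are my own assumptions, and this will be noticeably slower and more fragile than the explicit comparisons or the refactored Range model.

using System.Linq;
using System.Reflection;

public static class ItemiComparer
{
    // Returns true if every PropNMin..PropNMax range of 'inner' lies within the matching range of 'outer'.
    public static bool IsContainedIn(Itemi inner, Itemi outer)
    {
        var type = typeof(Itemi);
        var minProps = type.GetProperties()
            .Where(p => p.Name.StartsWith("Prop") && p.Name.EndsWith("Min"));

        foreach (var minProp in minProps)
        {
            var maxProp = type.GetProperty(minProp.Name.Replace("Min", "Max"));
            if (maxProp == null) continue;

            int innerMin = (int)minProp.GetValue(inner, null);
            int innerMax = (int)maxProp.GetValue(inner, null);
            int outerMin = (int)minProp.GetValue(outer, null);
            int outerMax = (int)maxProp.GetValue(outer, null);

            if (innerMin < outerMin || innerMax > outerMax)
                return false;   // bail out on the first range that falls outside
        }
        return true;
    }
}

// Usage: bool ok = ItemiComparer.IsContainedIn(myItem1, myItem2);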
LINQ uses standard statements like if...then, foreach and others under the hood; there is no magic :)
If the final goal is only to compare, without needing to say which properties are not in the range, then you don't need to check them all; you can stop checking at the first mismatch.
Because you have so many properties, you should think about storing them in a Dictionary or a List, for example, or about using dynamic properties (ITypedList) if they will be used for binding.
You really should do something like Ginosaji proposed.
But if you want to go with your current data model, here is how I would solve it. Happy typing. :)
public static bool RangeIsContained(int outerMin, int outerMax, int innerMin, int innerMax)
{
return (outerMin <= innerMin && outerMax >= innerMax);
}
public bool IsContained(Itemi outer, Itemi inner)
{
return RangeIsContained(outer.Prop1Min, outer.Prop1Max, inner.Prop1Min, inner.Prop1Max)
&& RangeIsContained(outer.Prop2Min, outer.Prop2Max, inner.Prop2Min, inner.Prop2Max)
// ...
&& RangeIsContained(outer.Prop25Min, outer.Prop25Max, inner.Prop25Min, inner.Prop25Max);
}
With your data model this is basically the only way to go except for reflection (slow!). LINQ cannot help you because your data is not enumerable.
For the sake of completeness, here is a LINQ solution (but it's less performant and less readable than Ginosaji's solution!)
public class Range
{
public int Min { get; set; }
public int Max { get; set; }
public static bool IsContained(Range super, Range sub)
{
return super.Min <= sub.Min
&& super.Max >= sub.Max;
}
}
public class Itemi
{
public Itemi()
{
properties = new Range[25];
for (int i = 0; i < properties.Length; i++)
{
properties[i] = new Range();
}
}
private Range[] properties;
public IEnumerable<Range> Properties { get { return properties; } }
public static bool IsContained(Itemi super, Itemi sub)
{
return super.properties
.Zip(sub.properties, (first, second) => Tuple.Create(first, second))
.All((entry) => Range.IsContained(entry.Item1, entry.Item2));
}
public Range Prop1
{
get { return properties[0]; }
set { properties[0] = value; }
}
public Range Prop2
{
get { return properties[1]; }
set { properties[1] = value; }
}
// ...
}
