I have a C# Windows Forms application that generates a stacked line graph using the standard MS Chart control, as in the example below.
Is there a way of "smoothing" the lines by formatting the series or some other property?
Looking at MSDN and Google I can't seem to find a way to do this; in Excel there is a Series.Smooth property...
Have I missed it, or is it not possible?
If you like the smooth look of SplineAreas, you can calculate the necessary values to get just that look:
A few notes:
I have reversed the order of the series; there are many ways to get the colors right. (Instead, one should probably accumulate in reverse.)
The stacked DataPoints need to be aligned, as usual, and any empty DataPoints should have their Y-values set to 0.
Of course, in the new series you can't access the actual data values any longer, as you now have the accumulated values instead; at least not without reversing the calculation. So if you need them, keep them around somewhere; the new DataPoints' Tag property is one option (see the sketch just after these notes).
You can control the 'smoothness' of each Series by setting its LineTension custom attribute:
chart2.Series[0].SetCustomProperty("LineTension", "0.15");
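Regarding keeping the original values around: a minimal sketch of stashing them in Tag, written for the inner accumulation loop of the full example below (variable names match that example; AddXY returns the new point's index):

// Inside the inner accumulation loop shown below:
double original = chart1.Series[j].Points[i].YValues[0];
int idx = chart2.Series[sCount - j - 1].Points.AddXY(i, v);
chart2.Series[sCount - j - 1].Points[idx].Tag = original; // raw, un-stacked value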
Here is the full example code that creates the above screenshots, calculating a 'stacked' SplineArea chart2 from the data in a StackedArea chart1:
// preparation
for (int i = 0; i < 4; i++)
{
    Series s = chart1.Series.Add("S" + i);
    s.ChartType = SeriesChartType.StackedArea;
    Series s2 = chart2.Series.Add("S" + i);
    s2.ChartType = SeriesChartType.SplineArea;
}

// some test data
for (int i = 0; i < 30; i++)
{
    chart1.Series[0].Points.AddXY(i, Math.Abs(Math.Sin(i / 8f)));
    chart1.Series[1].Points.AddXY(i, Math.Abs(Math.Sin(i / 4f)));
    chart1.Series[2].Points.AddXY(i, Math.Abs(Math.Sin(i / 1f)));
    chart1.Series[3].Points.AddXY(i, Math.Abs(Math.Sin(i / 0.5f)));
}

// the actual calculations:
int sCount = chart1.Series.Count;
for (int i = 0; i < chart1.Series[0].Points.Count; i++)
{
    double v = chart1.Series[0].Points[i].YValues[0];
    chart2.Series[sCount - 1].Points.AddXY(i, v);
    for (int j = 1; j < sCount; j++)
    {
        v += chart1.Series[j].Points[i].YValues[0];
        chart2.Series[sCount - j - 1].Points.AddXY(i, v);
    }
}

// optionally control the tension:
chart2.Series[0].SetCustomProperty("LineTension", "0.15");
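If you want every series equally smooth, the same attribute can be applied in a loop (a trivial extension of the code above):

// Apply the same tension to all series in chart2:
foreach (Series s in chart2.Series)
    s.SetCustomProperty("LineTension", "0.15");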
I'm using the WinForms DataVisualization.Charting.Chart control. I have added 3 data series to the default chart area. Each series has an integer y value and a DateTime x value.
If I show any one data series, it works as expected; however, if I combine the data series, the dates overlap and are not in chronological order, making the chart useless. I've hard-coded the values just for a sanity check but still see the same issue.
If anyone knows of a way for this chart to have a single x-axis timeline that all data series simply plot their y values onto, it would be greatly appreciated. I've run out of things to try. (Image included below...)
chrtSessions.ChartAreas.Clear();
chrtSessions.ChartAreas.Add(new ChartArea("Default"));
chrtSessions.Series.Clear();
chrtSessions.Titles.Clear();
chrtSessions.Titles.Add("Hours Left For Licensure");

chrtSessions.ChartAreas["Default"].AxisY.Title = "Hours";
chrtSessions.ChartAreas["Default"].AxisX.Title = "Dates";
chrtSessions.ChartAreas["Default"].AxisX.IntervalType = DateTimeIntervalType.Days;
chrtSessions.ChartAreas["Default"].AxisX.LabelStyle.Format = "DD/MM/YYYY";

chrtSessions.Series.Add("Individual");
chrtSessions.Series["Individual"].AxisLabel = "Individual Sessions";
chrtSessions.Series["Individual"].ChartType = SeriesChartType.Line;
chrtSessions.Series["Individual"].BorderWidth = 4;
chrtSessions.Series["Individual"].XValueType = ChartValueType.Date;
for (int index = 0; index < individualData.hours.Count; index++)
{
    chrtSessions.Series["Individual"].Points.AddXY(individualData.dates[index].ToShortDateString(), individualData.hours[index]);
}

chrtSessions.Series.Add("Relational");
chrtSessions.Series["Relational"].AxisLabel = "Relational Sessions";
chrtSessions.Series["Relational"].ChartType = SeriesChartType.Line;
chrtSessions.Series["Relational"].BorderWidth = 4;
chrtSessions.Series["Relational"].XValueType = ChartValueType.Date;
for (int index = 0; index < relationalData.hours.Count; index++)
{
    chrtSessions.Series["Relational"].Points.AddXY(relationalData.dates[index].ToShortDateString(), relationalData.hours[index]);
}

chrtSessions.Series.Add("Supervision");
chrtSessions.Series["Supervision"].AxisLabel = "Supervision Sessions";
chrtSessions.Series["Supervision"].ChartType = SeriesChartType.Line;
chrtSessions.Series["Supervision"].BorderWidth = 4;
chrtSessions.Series["Supervision"].XValueType = ChartValueType.Date;
for (int index = 0; index < supervisionData.hours.Count; index++)
{
    chrtSessions.Series["Supervision"].Points.AddXY(supervisionData.dates[index].ToShortDateString(), supervisionData.hours[index]);
}
Notice on this image that the date 11/19/2021 occurs twice, with 11/20/2021 occurring in between, and only the yellow dataset actually has data that goes to the 23rd (the data should run from 10/29 to 11/23). The blue line should span 10/31-11/20 and the red line should span 10/26-11/19.
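The likely culprit, sketched below for the "Individual" series (an assumption, since the original project isn't available to run): AddXY is being given ToShortDateString(), i.e. a string, so each point gets X = 0 plus a per-point axis label, and the labels from the three series are simply interleaved instead of being merged into one timeline. Passing the DateTime itself keeps the x axis numeric and chronological:

chrtSessions.ChartAreas["Default"].AxisX.LabelStyle.Format = "dd/MM/yyyy"; // valid .NET format (DD/MM/YYYY is not)

chrtSessions.Series["Individual"].XValueType = ChartValueType.Date;
for (int index = 0; index < individualData.hours.Count; index++)
{
    // Pass the DateTime directly; all series then share one chronological axis.
    chrtSessions.Series["Individual"].Points.AddXY(
        individualData.dates[index], individualData.hours[index]);
}

// Optional: sort a series by X if its source list isn't already chronological.
chrtSessions.Series["Individual"].Sort(PointSortOrder.Ascending, "X");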
I am trying to build a custom machine learning library in C#, and I have done a fair amount of research on the topic. My first example (an XOR estimator) was a success: I was able to lower the average loss to almost zero. Then I tried to build a model to classify handwritten digits (using the MNIST text database). The problem is that no matter how I configure the model, I always get stuck at a certain average loss over the data set. A second problem: because the MNIST dataset is very large, the model takes a lot of time to compute, so maybe I could use some advice on how to speed up the slowest parts of the algorithm (I am using stochastic gradient descent). I am going to show the main method that performs most of the work.
I have tried the MSE and CrossEntropy loss functions, and the tanh, sigmoid, ReLU and softplus activation functions. The model I am trying to build is a 4-layer one: a first layer of 784 input neurons; a second of 16 neurons, sigmoid; a third of 16 neurons, sigmoid; and an output layer of 10 neurons (one-hot encoded digits), sigmoid. I am aware that the code below may not be a minimal reproducible example, but it represents the algorithm I am trying to figure out. I have also uploaded the solution to GitHub; maybe somebody can give me a hand figuring out the problem. This is the link: https://github.com/juan-carvajal/MachineLearningFramework
Running the Main method of the app will first execute the XOR classifier, which runs well, and then the MNIST classifier.
The model is best represented here:
DataSet dataSet = new DataSet("mnist2.txt", ' ', 10, false);
// This creates a model with batching=128, learningRate=0.5 and
// the CrossEntropy loss function
var p = new Perceptron(128, 0.5, ErrorFunction.CrossEntropy())
    .Layer(784, ActivationFunction.Sigmoid())
    .Layer(16, ActivationFunction.Sigmoid())
    .Layer(16, ActivationFunction.Sigmoid())
    .Layer(10, ActivationFunction.Sigmoid());
// 1000 is the number of epochs
p.Train2(dataSet, 1000);
Actual algorithm (stochastic gradient descent):
Console.WriteLine("Initial Loss:"+ CalculateMeanErrorOverDataSet(dataSet));
for (int i = 0; i < epochs; i++)
{
//Shuffle the data in every step
dataSet.Shuffle();
List<DataRow> batch = dataSet.NextBatch(this.Batching);
//Gets random batch from the dataSet
int count = 0;
foreach (DataRow example in batch)
{
count++;
double[] result = this.FeedForward(example.GetFeatures());
double[] labels = example.GetLabels();
if (result.Length != labels.Length)
{
throw new Exception("Inconsistent array size, Incorrect implementation.");
}
else
{
//What follows is the calculation of the gradient for this example, every example affects the current gradient, then all those changes are averaged an every parameter is updated.
double error = CalculateExampleLost(example);
for (int l = this.Layers.Count - 1; l > 0; l--)
{
if (l == this.Layers.Count - 1)
{
for (int j = 0; j < this.Layers[l].CostDerivatives.Length; j++)
{
this.Layers[l].CostDerivatives[j] = ErrorFunction.GetDerivativeValue(labels[j], this.Layers[l].Activations[j]);
}
}
else
{
for (int j = 0; j < this.Layers[l].CostDerivatives.Length; j++)
{
double acum = 0;
for (int j2 = 0; j2 < Layers[l + 1].Size; j2++)
{
acum += Layers[l + 1].WeightMatrix[j2, j] * this.Layers[l+1].ActivationFunction.GetDerivativeValue(Layers[l + 1].WeightedSum[j2]) * Layers[l + 1].CostDerivatives[j2];
}
this.Layers[l].CostDerivatives[j] = acum;
}
}
for (int j = 0; j < this.Layers[l].Activations.Length; j++)
{
this.Layers[l].BiasVectorChangeRecord[j] += this.Layers[l].ActivationFunction.GetDerivativeValue(Layers[l].WeightedSum[j]) * Layers[l].CostDerivatives[j];
for (int k = 0; k < Layers[l].WeightMatrix.GetLength(1); k++)
{
this.Layers[l].WeightMatrixChangeRecord[j, k] += Layers[l - 1].Activations[k]
* this.Layers[l].ActivationFunction.GetDerivativeValue(Layers[l].WeightedSum[j])
* Layers[l].CostDerivatives[j];
}
}
}
}
}
TakeGradientDescentStep(batch.Count);
if ((i + 1) % (epochs / 10) == 0)
{
Console.WriteLine("Epoch " + (i + 1) + ", Avg.Loss:" + CalculateMeanErrorOverDataSet(dataSet));
}
}
This is an example of what the local minimum looks like in the current model.
In my research I found that similar models can achieve accuracy of up to 90%. My model barely got 10%.
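For what it's worth, a sigmoid network stuck at ~10% accuracy on MNIST (i.e. chance level) is more often a symptom of bad weight initialization, unscaled inputs, or a too-large learning rate than of the backpropagation itself. Below is a minimal sketch of Xavier/Glorot uniform initialization; the helper name and the [out, in] matrix layout are my own assumptions, not taken from the linked repository:

// Sketch: Xavier/Glorot uniform initialization for one weight matrix.
// fanIn / fanOut are the sizes of the previous and the current layer.
static double[,] XavierInit(int fanIn, int fanOut, Random rng)
{
    double limit = Math.Sqrt(6.0 / (fanIn + fanOut));
    var weights = new double[fanOut, fanIn];
    for (int i = 0; i < fanOut; i++)
        for (int j = 0; j < fanIn; j++)
            weights[i, j] = (rng.NextDouble() * 2.0 - 1.0) * limit; // uniform in [-limit, limit]
    return weights;
}

Scaling every input pixel to [0, 1] (divide by 255) and trying a learning rate well below 0.5 are cheap experiments in the same direction.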
I am trying to make a backpropagation neural network.
It is based upon the tutorials I found here: MSDN article by James McCaffrey. He gives many examples, but all of his networks are built around the same problem to solve, so they look like 4:7:3 >> 4 input - 7 hidden - 3 output.
His output is always binary, 0 or 1: exactly one output gets a 1, to classify an iris flower into one of the three categories.
I would like to solve a different problem with a neural network, and that would require two neural networks: one needs an output between 0..255 and the other an output between 0 and 2*pi (a full turn, a circle). Essentially, I think I need an output that ranges from 0.0 to 1.0 or from -1 to 1, and anything in between, so that I can multiply it to become 0..255 or 0..2*pi.
I think his network behaves the way it does because of his ComputeOutputs method, which I show below:
private double[] ComputeOutputs(double[] xValues)
{
    if (xValues.Length != numInput)
        throw new Exception("Bad xValues array length");

    double[] hSums = new double[numHidden]; // hidden nodes sums scratch array
    double[] oSums = new double[numOutput]; // output nodes sums

    for (int i = 0; i < xValues.Length; ++i) // copy x-values to inputs
        this.inputs[i] = xValues[i];

    for (int j = 0; j < numHidden; ++j)      // compute i-h sum of weights * inputs
        for (int i = 0; i < numInput; ++i)
            hSums[j] += this.inputs[i] * this.ihWeights[i][j]; // note +=

    for (int i = 0; i < numHidden; ++i)      // add biases to input-to-hidden sums
        hSums[i] += this.hBiases[i];

    for (int i = 0; i < numHidden; ++i)      // apply activation
        this.hOutputs[i] = HyperTanFunction(hSums[i]); // hard-coded

    for (int j = 0; j < numOutput; ++j)      // compute h-o sum of weights * hOutputs
        for (int i = 0; i < numHidden; ++i)
            oSums[j] += hOutputs[i] * hoWeights[i][j];

    for (int i = 0; i < numOutput; ++i)      // add biases to hidden-to-output sums
        oSums[i] += oBiases[i];

    double[] softOut = Softmax(oSums); // softmax activation does all outputs at once for efficiency
    Array.Copy(softOut, outputs, softOut.Length);

    double[] retResult = new double[numOutput]; // could define a GetOutputs method instead
    Array.Copy(this.outputs, retResult, retResult.Length);
    return retResult;
}
The network uses the following HyperTan function:
private static double HyperTanFunction(double x)
{
    if (x < -20.0) return -1.0; // approximation is correct to 30 decimals
    else if (x > 20.0) return 1.0;
    else return Math.Tanh(x);
}
Above, the output layer uses the Softmax() function, and I think that is critical to the problem here: I think it makes his output all binary. It looks like this:
private static double[] Softmax(double[] oSums)
{
    // determine max output sum
    // does all output nodes at once so scale doesn't have to be re-computed each time
    double max = oSums[0];
    for (int i = 0; i < oSums.Length; ++i)
        if (oSums[i] > max) max = oSums[i];

    // determine scaling factor -- sum of exp(each val - max)
    double scale = 0.0;
    for (int i = 0; i < oSums.Length; ++i)
        scale += Math.Exp(oSums[i] - max);

    double[] result = new double[oSums.Length];
    for (int i = 0; i < oSums.Length; ++i)
        result[i] = Math.Exp(oSums[i] - max) / scale;

    return result; // now scaled so that xi sum to 1.0
}
How do I rewrite Softmax so the network will be able to give non-binary answers?
Notice the full code of the network is here, if you would like to try it out.
Also, to test the network, the following accuracy function is used; maybe the binary behaviour emerges from it:
public double Accuracy(double[][] testData)
{
    // percentage correct using winner-takes-all
    int numCorrect = 0;
    int numWrong = 0;
    double[] xValues = new double[numInput];  // inputs
    double[] tValues = new double[numOutput]; // targets
    double[] yValues;                         // computed Y

    for (int i = 0; i < testData.Length; ++i)
    {
        // parse test data into x-values and t-values
        Array.Copy(testData[i], xValues, numInput);
        Array.Copy(testData[i], numInput, tValues, 0, numOutput);
        yValues = this.ComputeOutputs(xValues);
        int maxIndex = MaxIndex(yValues); // which cell in yValues has the largest value?
        int tMaxIndex = MaxIndex(tValues);
        if (maxIndex == tMaxIndex)
            ++numCorrect;
        else
            ++numWrong;
    }
    return (numCorrect * 1.0) / (double)testData.Length;
}
Just in case someone gets into the same situation: if you need some example code of a neural network regression (an NNR, as they are called), here is a link to sample code in C#, and here is a good article about it. Note that the author writes more articles there; you won't find everything, but there's a lot. Even though I had been following this author for a while, I missed this specific article, as I didn't know what they were called when I asked the question here on Stack Overflow.
I'm a bit rusty at neural networks, but I think that if you want a range of values from your output, you need to make sure the activation functions on your output layer are linear (or something that has a similar effect).
Try adding this method:
private static double[] Linear(double[] oSums)
{
    // Requires 'using System.Linq;' for Sum().
    // Note: this normalises the absolute sums so the outputs add up to 1.0.
    // It is not a strict identity map, but unlike Softmax it does not
    // exaggerate the largest value toward a one-hot result.
    double sum = oSums.Sum(d => Math.Abs(d));
    double[] result = new double[oSums.Length];
    for (int i = 0; i < oSums.Length; ++i)
        result[i] = Math.Abs(oSums[i]) / sum;
    return result; // scaled so that the xi sum to 1.0
}
And then in the ComputeOutputs method you need to use this new activation function for the output (rather than Softmax):
...
//double[] softOut = Softmax(oSums); // all outputs at once for efficiency
double[] softOut = Linear(oSums); // all outputs at once for efficiency
Array.Copy(softOut, outputs, softOut.Length);
...
This should now give you graded, non-binary output values.
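If you need each output to be an independent continuous value that you can rescale to 0..255 or 0..2*pi, another option (my own sketch, not from the original article) is a per-neuron sigmoid on the output layer; note that changing the output activation also means the output-layer gradient used during training has to change to match:

// Sketch: per-neuron sigmoid output activation; every output is a
// continuous value in (0, 1), independent of the other outputs.
private static double[] SigmoidOutputs(double[] oSums)
{
    double[] result = new double[oSums.Length];
    for (int i = 0; i < oSums.Length; ++i)
        result[i] = 1.0 / (1.0 + Math.Exp(-oSums[i]));
    return result;
}

// Rescale afterwards, e.g.:
// double gray  = result[0] * 255.0;          // 0..255
// double angle = result[1] * 2.0 * Math.PI;  // 0..2*pi radians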
I have 2 Emgu.CV.Image images:
Image<Gray, byte> img1 = new Image<Gray, byte>(@"xyz.gif");
Image<Gray, byte> img2 = new Image<Gray, byte>(@"abc.gif");
I want to perform operations on the images, like adding them pixel by pixel (without using the built-in functions):
for (int i = 0; i < width1; i++)
{
    for (int j = 0; j < height1; j++)
    {
        img2[i][j] = img1[i][j] + img2[i][j];
    }
}
How can I do so?
If you need to alter the image pixel by pixel, then it's back to using the Image.Data property.
If you are using colour images, it is important to note that Data is a three-dimensional array containing the colour channels in layers 0, 1 and 2. (For Emgu's default Image<Bgr, byte> those layers are Blue, Green and Red respectively.) The following code will allow you to access data from the image and adjust it.
// Data is indexed [row, column, channel], so loop over Height then Width:
for (int i = 0; i < img1.Height; i++)
{
    for (int j = 0; j < img1.Width; j++)
    {
        img1.Data[i, j, 0] = img1.Data[i, j, 0] + img2.Data[i, j, 0];
        img1.Data[i, j, 1] = img1.Data[i, j, 1] + img2.Data[i, j, 1];
        img1.Data[i, j, 2] = img1.Data[i, j, 2] + img2.Data[i, j, 2];
    }
}
For the int<>byte conversion error, you need to cast the result back to a (byte) (constant ints that are in range can be assigned directly), i.e.

img1.Data[i, j, 0] = (byte)(img1.Data[i, j, 0] + img2.Data[i, j, 0]);

This is done to tell .NET that you are willing to accept data loss. Note, however, that you are adding two bytes of image data; their values are 0-255, so you could end up with a value of 0-510. To account for this you must normalise your result back to the 0-255 range required in images, e.g. by averaging the two values:

img1.Data[i, j, 0] = (byte)((img1.Data[i, j, 0] + img2.Data[i, j, 0]) / 2);
As you are using greyscale images, your image data will only have one layer. The code is similar in format; however, you only use the first layer, 0:
for (int i = 0; i < gray_image1.Height; i++)
{
    for (int j = 0; j < gray_image1.Width; j++)
    {
        // Average and cast back to byte, as discussed above:
        gray_image1.Data[i, j, 0] = (byte)((gray_image1.Data[i, j, 0] + gray_image2.Data[i, j, 0]) / 2);
    }
}
The TDepth type parameter allows you to alter the type of data held in the .Data construct. While some conversions are not supported, doubles are, mainly because the way these are stored allows code to execute more efficiently with them. It is good practice to use this; however, it's not really essential.
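If you would rather clamp (saturate) at 255 than average, here is a minimal self-contained sketch; it assumes both images are the same size and uses only the Image<Gray, byte> members shown above:

using System;
using Emgu.CV;
using Emgu.CV.Structure;

static class PixelMath
{
    // Adds two equally-sized grayscale images pixel by pixel,
    // clamping the sum at 255 instead of averaging.
    public static Image<Gray, byte> AddClamped(Image<Gray, byte> a, Image<Gray, byte> b)
    {
        var result = new Image<Gray, byte>(a.Width, a.Height);
        for (int row = 0; row < a.Height; row++)    // Data is [row, column, channel]
        {
            for (int col = 0; col < a.Width; col++)
            {
                int sum = a.Data[row, col, 0] + b.Data[row, col, 0];
                result.Data[row, col, 0] = (byte)Math.Min(sum, 255);
            }
        }
        return result;
    }
}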
I am new to C#. I have been thinking of adding a ButtonControlArray where I can store each button control. Here is part of my code. I am creating a 6*6 array of button controls.
ButtonControl buttonControl;
ButtonControl[,] arrayButtons = new ButtonControl[6, 6];

public void createGrid()
{
    int l = 0;  // x offset (l and j were fields in my code; shown here for completeness)
    int j = 10; // y offset, reset to 10 after each row below
    for (int i = 0; i < numberOfButtons; i++)
    {
        for (int k = 0; k < numberOfButtons; k++)
        {
            buttonControl = new ButtonControl();
            buttonControl.Location = new Point(l, j);
            j += 55;
            arrayButtons[i, k] = buttonControl;
            // After the above statement, if I print Console.WriteLine("" + arrayButtons[i, k]),
            // I am getting only "ProjectName.ButtonControl".
            myGridControl.Controls.Add(buttonControl);
        }
        l += 55;
        j = 10;
    }
}
I want to access each element in arrayButtons, like in a 3*3 matrix: if I want the element at the 2nd row, 1st column, I would write something like arrayName[2][1]. In the same way, if I want the 2nd button in the 2nd row, how can I get it? I tried one way but couldn't figure it out. Can you help me out with this?
What are you having difficulty with?
If you're running into bounds-checking problems, you should know that C# arrays start at zero, not one. So the button in the second row, first column is a[1,0], not a[2,1].
If you Google Control Arrays in C# you will get several good matches, including this one.
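For example, with the 6*6 array from the question (a minimal sketch, assuming ButtonControl exposes the usual Control members):

// Rectangular arrays use [row, column] with a comma;
// there is no arrayButtons[1][1] syntax for a ButtonControl[6,6].
// Second button in the second row: both zero-based indices are 1.
ButtonControl secondRowSecondButton = arrayButtons[1, 1];
secondRowSecondButton.Text = "row 2, button 2";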