How to perform Change Point Analysis using R.NET - c#

How can I perform change point analysis using R.NET? I am using the code below:
REngine.SetEnvironmentVariables();
REngine engine = REngine.GetInstance();
double[] data = new double[] { 1, 2, 3, 4, 5, 6 };
NumericVector vector = engine.CreateNumericVector(data);
engine.SetSymbol("mydatapoints", vector);
engine.Evaluate("library(changepoint)");
engine.Evaluate("chpoints = cpt.mean(mydatapoints, method="BinSeg")");
DynamicVector result = engine.Evaluate("x<-cpts(chpoints)").AsVector(); ;
engine.Dispose();
I am receiving the error below on engine.Evaluate("library(changepoint)"):
Error in library(changepoint) : there is no package called
'changepoint'
Edit # 1
The changepoint package has to be installed explicitly; it is not there by default. I installed it using RGui -> Packages -> Load package.
Now the error has changed to
Status Error for chpoints = cpt.mean(mydatapoints, method=”BinSeg”) :
unexpected input
Edit # 2
After fixing the first two errors, the following one appears on the second Evaluate statement.
Error in BINSEG(sumstat, pen = pen.value, cost_func = costfunc,
minseglen = minseglen, : Q is larger than the maximum number of
segments 4
The same error appears on R as well using these commands
value.ts <- c(29.89, 29.93, 29.72, 29.98)
chpoints = cpt.mean(value.ts, method="BinSeg")

The error is not in your calling code but rather in your use of R (as you apparently now realize), so labeling this as something to do with rdotnet or c-sharp seems misleading:
mydatapoints <- c(1, 2, 3, 4, 5, 6 )
library(changepoint);
chpoints = cpt.mean(mydatapoints, method="BinSeg");
#Error in BINSEG(sumstat, pen = pen.value, cost_func = costfunc, minseglen = minseglen, :
# Q is larger than the maximum number of segments 4
I'm not sure what you intended. Change-point analysis generally requires paired datapoints ... x-y and all that jazz. And giving R regression functions perfectly linear data is also unwise. It often causes non-invertible matrices.
I suggest you search with https://stackoverflow.com/search?q=%5Br%5D+changepoint to find a simple bit of code to build into your REngine calling scheme.

The data points are supposed to be converted into a time series.
REngine.SetEnvironmentVariables();
REngine engine = REngine.GetInstance();
double[] data = new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
NumericVector vector = engine.CreateNumericVector(data);
engine.Evaluate("library(changepoint)");
engine.SetSymbol("values", vector);
engine.Evaluate("values.ts = ts(values, frequency = 12, start = c(2017, 1))");
engine.Evaluate("chpoints = cpt.mean(values.ts, method=\"BinSeg\")");
var result = engine.GetSymbol("chpoints");
engine.Dispose();
Now I'm looking for how to get the results back into C#: chpoints, or the result of plot(chpoints).
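A minimal sketch (untested) of one way to pull those results back with R.NET, run before the engine.Dispose() call above:
// cpts() returns the detected change-point locations as a numeric vector.
double[] changePoints = engine.Evaluate("cpts(chpoints)").AsNumeric().ToArray();
// For cpt.mean fits, param.est() should expose the estimated segment means.
double[] segmentMeans = engine.Evaluate("param.est(chpoints)$mean").AsNumeric().ToArray();
// plot(chpoints) draws to an R graphics device; to use it from C#, write it to a file.
engine.Evaluate("png('chpoints.png'); plot(chpoints); dev.off()");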


I'm getting the following error: java.lang.RuntimeException: setDataSource failed: status = 0xFFFFFFEA, and I'd like to know what this status is. I'm using the function MediaMetadataRetriever.setDataSource(String filePath).
I got this error (java.lang.RuntimeException: setDataSource failed: status = 0xFFFFFFEA) when I tried to call void setDataSource(String path) on an empty file (0 bytes).
You need to be 100% sure that the path to the file is not null or empty, and that the file itself exists and is valid.
I didn't have an empty file or any of the other bugs mentioned here in my code; the files I tried to use were fine. I don't know exactly why, but it worked for me when I simply used another overload of setDataSource.
The ones I used that threw this exception were MediaMetadataRetriever.setDataSource(String) and MediaMetadataRetriever.setDataSource(String, HashMap).
The one that simply worked was MediaMetadataRetriever.setDataSource(Context, URI).
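For the record, a rough C# (Xamarin.Android) sketch of that approach; the file check, the method name GetDurationSafely, and the context/filePath parameters are placeholders for this example only:
string GetDurationSafely(Android.Content.Context context, string filePath)
{
    // Guard against null/empty paths and empty files before calling setDataSource.
    var file = new Java.IO.File(filePath);
    if (!file.Exists() || file.Length() == 0)
        throw new System.IO.FileNotFoundException("Missing or empty media file", filePath);

    var retriever = new Android.Media.MediaMetadataRetriever();
    // Use the (Context, Uri) overload instead of the plain string overload.
    retriever.SetDataSource(context, Android.Net.Uri.FromFile(file));
    return retriever.ExtractMetadata(Android.Media.MetadataKey.Duration);
}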
It was very well buried, but I found the source. Here is a link to the error codes.
It's from an ICS build and I'm not sure where it lives in the current build.
My error was an 'unsupported' error because I used a MIDI file.
Source:
/*
* Copyright (C) 2009 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef MEDIA_ERRORS_H_
#define MEDIA_ERRORS_H_
#include <utils/Errors.h>
namespace android {
enum {
    MEDIA_ERROR_BASE        = -1000,

    ERROR_ALREADY_CONNECTED = MEDIA_ERROR_BASE,
    ERROR_NOT_CONNECTED     = MEDIA_ERROR_BASE - 1,
    ERROR_UNKNOWN_HOST      = MEDIA_ERROR_BASE - 2,
    ERROR_CANNOT_CONNECT    = MEDIA_ERROR_BASE - 3,
    ERROR_IO                = MEDIA_ERROR_BASE - 4,
    ERROR_CONNECTION_LOST   = MEDIA_ERROR_BASE - 5,
    ERROR_MALFORMED         = MEDIA_ERROR_BASE - 7,
    ERROR_OUT_OF_RANGE      = MEDIA_ERROR_BASE - 8,
    ERROR_BUFFER_TOO_SMALL  = MEDIA_ERROR_BASE - 9,
    ERROR_UNSUPPORTED       = MEDIA_ERROR_BASE - 10,
    ERROR_END_OF_STREAM     = MEDIA_ERROR_BASE - 11,

    // Not technically an error.
    INFO_FORMAT_CHANGED     = MEDIA_ERROR_BASE - 12,
    INFO_DISCONTINUITY      = MEDIA_ERROR_BASE - 13,

    // The following constant values should be in sync with
    // drm/drm_framework_common.h
    DRM_ERROR_BASE = -2000,

    ERROR_DRM_UNKNOWN                      = DRM_ERROR_BASE,
    ERROR_DRM_NO_LICENSE                   = DRM_ERROR_BASE - 1,
    ERROR_DRM_LICENSE_EXPIRED              = DRM_ERROR_BASE - 2,
    ERROR_DRM_SESSION_NOT_OPENED           = DRM_ERROR_BASE - 3,
    ERROR_DRM_DECRYPT_UNIT_NOT_INITIALIZED = DRM_ERROR_BASE - 4,
    ERROR_DRM_DECRYPT                      = DRM_ERROR_BASE - 5,
    ERROR_DRM_CANNOT_HANDLE                = DRM_ERROR_BASE - 6,
    ERROR_DRM_TAMPER_DETECTED              = DRM_ERROR_BASE - 7,

    // Heartbeat Error Codes
    HEARTBEAT_ERROR_BASE = -3000,
    ERROR_HEARTBEAT_AUTHENTICATION_FAILURE       = HEARTBEAT_ERROR_BASE,
    ERROR_HEARTBEAT_NO_ACTIVE_PURCHASE_AGREEMENT = HEARTBEAT_ERROR_BASE - 1,
    ERROR_HEARTBEAT_CONCURRENT_PLAYBACK          = HEARTBEAT_ERROR_BASE - 2,
    ERROR_HEARTBEAT_UNUSUAL_ACTIVITY             = HEARTBEAT_ERROR_BASE - 3,
    ERROR_HEARTBEAT_STREAMING_UNAVAILABLE        = HEARTBEAT_ERROR_BASE - 4,
    ERROR_HEARTBEAT_CANNOT_ACTIVATE_RENTAL       = HEARTBEAT_ERROR_BASE - 5,
    ERROR_HEARTBEAT_TERMINATE_REQUESTED          = HEARTBEAT_ERROR_BASE - 6,
};
} // namespace android
#endif // MEDIA_ERRORS_H_
I got this error (setDataSource failed: status = 0xFFFFFFEA) because the file did not exist on my device.
Just adding this link to know more about error codes: https://kdave.github.io/errno.h/
Searching for FFEA there shows that 0xFFFFFFEA is the signed 32-bit value -22; as @AndrewOrobator said, errno 22 is EINVAL (invalid argument).
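A quick way to verify that in C# is to reinterpret the hex status as a signed 32-bit integer:
// Reinterpret the reported status as a signed 32-bit integer.
int status = unchecked((int)0xFFFFFFEA);
System.Console.WriteLine(status); // prints -22, i.e. the negated errno EINVAL (invalid argument)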
I caught this error when my audio file was broken; it was solved when I changed the file. You can try manually setting different source paths.
I ran into this when I tried to use a 0 KB MP3 file; I solved it after deleting that file. Maybe you can catch this exception.

Configuration of network incorrect

I'm a novice with Keras and TensorFlow. I am unsuccessfully trying to adapt this tutorial from Python (a language I'm not familiar with at all); I have come up with the following code fragment.
var Functions = new int[] { 1, 2, 3, 4 };
var BatchSize = 64;
var InputDim = Functions.Count();
var OutputDim = 256;
var RnnUnits = 1024;

var iLayer1 = new Embedding(InputDim,
                            OutputDim,
                            input_shape: new Shape(new int[] { BatchSize, 0 }));

var iLayer2 = new GRU(RnnUnits,
                      return_sequences: true,
                      stateful: true,
                      recurrent_initializer: "glorot_uniform");

var iLayer3 = new Dense(InputDim);

var iSequential = new Sequential();
iSequential.Add(iLayer1);
iSequential.Add(iLayer2);
iSequential.Add(iLayer3);
While this compiles, I'm getting the error message
Python.Runtime.PythonException:
"ValueError : Input 0 is incompatible with layer gru_1: expected ndim=3, found ndim=4"
when
iSequential.Add(iLayer2);
is executed. To my superficial understanding, this means that iLayer1 is configured in a way that makes it impossible to operate it together with iLayer2, but I have no idea what to do.
Edit: After some messing around, I got the error message
ValueError : slice index 0 of dimension 0 out of
bounds. for 'gru_1/strided_slice_10' (op: 'StridedSlice') with
input shapes: [0,64,256], [1], [1], [1] and with
computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.
Any ideas?
If C# Keras uses the same conventions as Python Keras, your input shape for the embedding should not include the batch size.
Since you are forced to use the batch size due to stateful: true, you need to use the batch_input_shape argument instead of input_shape.
I'm not sure about 0 there. Is this the C# convention for variable length?
The error is saying that the second layer got a 4D tensor from the previous layer, while that tensor should have been 3D.
Options:
batch_input_shape: new Shape(new int[] { BatchSize, 0 } )
batch_shape: new Shape(new int[] { BatchSize, 0 } )
input_shape: new Shape(new int[] { 0 } ), batch_size: BatchSize
If none of these work on C#, you will have to try the functional API model instead of the sequential model.
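For reference, a sketch of how the first option might slot into the original fragment, assuming the C# wrapper exposes batch_input_shape exactly as Python Keras does (untested here):
// Assumption: the C# binding accepts batch_input_shape like Python Keras.
var iLayer1 = new Embedding(InputDim,
                            OutputDim,
                            batch_input_shape: new Shape(new int[] { BatchSize, 0 }));
// iLayer2 and iLayer3 stay as in the question; the GRU should then receive a
// 3D (batch, timesteps, features) tensor instead of a 4D one.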

UndistortPoints in Android OpenCV (C#) and properly forming the input matrices

In an attempt to speed up some processing in a real time application of computer vision I'm developing for Android platforms, I'd like to undistort some key points, instead of entire frames. I've followed the documentation to the best of my abilities, but I'm receiving the following error:
OpenCV Error: Assertion failed (CV_IS_MAT(_cameraMatrix) && _cameraMatrix->rows == 3 && _cameraMatrix->cols == 3) in void cvUndistortPoints(const CvMat*, CvMat*, const CvMat*, const CvMat*, const CvMat*, const CvMat*), file /Volumes/Linux/builds/master_pack-android/opencv/modules/imgproc/src/undistort.cpp, line 301
The appropriate documentation is here: http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistortpoints
In my code I define and populate the matrix elements individually since they're small. I was unable to find a way to populate MatOfPoint2f gracefully, but searching led me to convert from a list, as you'll see below.
Mat camMtx = new Mat(1, 5, CvType.Cv64fc1, new Scalar(0));
Mat distCoef = new Mat(3, 3, CvType.Cv64fc1, new Scalar(0));
MatOfPoint2f uncalibpoints = new MatOfPoint2f();
MatOfPoint2f calibpoints = new MatOfPoint2f();
List<Point> points = new List<Point>();
points.Add(center); //Some previously stored point
points.Add(apex1); //Some previously stored point
points.Add(apex2); //Some previously stored point
points.Add(apex3); //Some previously stored point
uncalibpoints.FromList(points); //Convert list of points to MatofPoint2f
Console.WriteLine(uncalibpoints.Channels());
Console.WriteLine(uncalibpoints.Size());
Console.WriteLine(uncalibpoints.GetType());
//Manually setting the matrix values
distCoef.Put(0, 0, 0.51165764);
distCoef.Put(0, 1, -1.96134156);
distCoef.Put(0, 2, 0.00600294);
distCoef.Put(0, 3, 0.00643735);
distCoef.Put(0, 4, 2.59503145);
camMtx.Put(0, 0, 1551.700);
camMtx.Put(0, 1, 0.0);
camMtx.Put(0, 2, 962.237163);
camMtx.Put(1, 0, 0.0);
camMtx.Put(1, 1, 1536.170);
camMtx.Put(1, 2, 589.418432);
camMtx.Put(2, 0, 0.0);
camMtx.Put(2, 1, 0.0);
camMtx.Put(2, 2, 1.0);
Imgproc.UndistortPoints(uncalibpoints, calibpoints, camMtx, distCoef);
Two issues with the above code:
A simple error in the allocation of camMtx and distCoef: their sizes were reversed in a copy-paste type error (the camera matrix must be 3x3 and the distortion coefficients 1x5, which is exactly what the assertion complains about).
The UndistortPoints call should look like this:
Imgproc.UndistortPoints(uncalibpoints, calibpoints, camMtx, distCoef, new Mat(), camMtx);
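Putting both fixes together (only the two allocations change; every Put call from the question stays the same):
// Camera matrix is 3x3, distortion coefficients are 1x5.
Mat camMtx = new Mat(3, 3, CvType.Cv64fc1, new Scalar(0));
Mat distCoef = new Mat(1, 5, CvType.Cv64fc1, new Scalar(0));
// ...populate camMtx and distCoef exactly as in the question, then:
Imgproc.UndistortPoints(uncalibpoints, calibpoints, camMtx, distCoef, new Mat(), camMtx);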

C# HMM Gesture Recognition using Kinect

I'm working on a solution to do gesture recognition using the Kinect sensor.
Now I'm using Accord .NET to train the HMM.
I have a dataset with saved gestures. This dataset has 11 gestures and each one has 32 frames with 18 points saved.
So I have a double[12][32,18] input dataset and an int[12] output dataset, but when I do
double error = teacher.Run(inputSequences, output), it gives me this: "Specified argument was out of the range of valid values."
Does anyone know how to solve this? Should I treat the dataset before passing it to the HMM teacher, or is the dataset OK like this?
I have used Accord.NET in the past and it's really one of the best implementations of an HMM engine. However, when I trained my HMM, I passed the HMM parameters (namely PI, A and B) to the Baum-Welch teacher with the input data set supplied from an organized Excel sheet (similar to what Accord's author himself used in his project). I suspect that since you are storing your data set as a multi-dimensional array and supplying it directly to the teacher, it is unable to process it properly. Maybe you could supply one gesture record at a time, or change the storage structure of your data set altogether. I advise going through Accord's entire example if you haven't already, because it worked just fine for me.
The problem might have been that the teaching algorithm expects the training sequences to be in the form double[12][32][18] rather than double[12][32,18]. The training data should be a collection of sequences of multivariate points. It is also necessary to note that, if you have 11 possible classes of gestures, the integer labels given in the int[12] array should be comprised of values between 0 and 10 only.
Thus if you have 12 gesture samples, each containing 32 frames, and each frame is a vector of 18 points, you should be feeding the teacher with a double[12][32][18] array containing the observations and an int[12] array containing the expected class labels.
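As an illustration, a minimal sketch of reshaping one gesture from the rectangular double[32,18] form into the jagged form the teacher expects (ToJagged is just a name used for this sketch):
static double[][] ToJagged(double[,] gesture)
{
    int frames = gesture.GetLength(0);   // 32 frames
    int points = gesture.GetLength(1);   // 18 values per frame
    var result = new double[frames][];
    for (int f = 0; f < frames; f++)
    {
        result[f] = new double[points];
        for (int p = 0; p < points; p++)
            result[f][p] = gesture[f, p];
    }
    return result;
}
// Usage (assuming the dataset is a double[][,] of 12 gestures):
// double[][][] sequences = dataset.Select(ToJagged).ToArray();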
The example below, extracted from the HiddenMarkovClassifierLearning documentation page, should help give an idea of how the vectors should be organized.
// Create a Continuous density Hidden Markov Model Sequence Classifier
// to detect a multivariate sequence and the same sequence backwards.
double[][][] sequences = new double[][][]
{
    new double[][]
    {
        // This is the first sequence with label = 0
        new double[] { 0, 1 },
        new double[] { 1, 2 },
        new double[] { 2, 3 },
        new double[] { 3, 4 },
        new double[] { 4, 5 },
    },

    new double[][]
    {
        // This is the second sequence with label = 1
        new double[] { 4, 3 },
        new double[] { 3, 2 },
        new double[] { 2, 1 },
        new double[] { 1, 0 },
        new double[] { 0, -1 },
    }
};

// Labels for the sequences
int[] labels = { 0, 1 };
In the above code, we have set up the problem for 2 sequences of observations, where each sequence contains 5 observations and each observation is comprised of 2 values. As you can see, this is a double[2][5][2] array. The array of class labels is given by an int[2], containing only values ranging from 0 to 1.
Now, to make the example more complete, we can continue creating and training the model using the following code:
var initialDensity = new MultivariateNormalDistribution(2);

// Creates a sequence classifier containing 2 hidden Markov Models with 2 states
// and an underlying multivariate mixture of Normal distributions as density.
var classifier = new HiddenMarkovClassifier<MultivariateNormalDistribution>(
    classes: 2, topology: new Forward(2), initial: initialDensity);

// Configure the learning algorithms to train the sequence classifier
var teacher = new HiddenMarkovClassifierLearning<MultivariateNormalDistribution>(
    classifier,

    // Train each model until the log-likelihood changes less than 0.0001
    modelIndex => new BaumWelchLearning<MultivariateNormalDistribution>(
        classifier.Models[modelIndex])
    {
        Tolerance = 0.0001,
        Iterations = 0,

        FittingOptions = new NormalOptions()
        {
            Diagonal = true,      // only diagonal covariance matrices
            Regularization = 1e-5 // avoid non-positive definite errors
        }
    }
);

// Train the sequence classifier using the algorithm
double logLikelihood = teacher.Run(sequences, labels);
And now we can test the model, asserting that the output class label indeed matches what we are expecting:
// Calculate the probability that the given
// sequences originated from the model
double likelihood, likelihood2;
// Try to classify the 1st sequence (output should be 0)
int c1 = classifier.Compute(sequences[0], out likelihood);
// Try to classify the 2nd sequence (output should be 1)
int c2 = classifier.Compute(sequences[1], out likelihood2);

C# Vectorized Array Addition

Is there any way to "vectorize" the addition of elements across arrays in a SIMD fashion?
For example, I would like to turn:
var a = new[] { 1, 2, 3, 4 };
var b = new[] { 1, 2, 3, 4 };
var c = new[] { 1, 2, 3, 4 };
var d = new[] { 1, 2, 3, 4 };
var e = new int[4];
for (int i = 0; i < a.Length; i++)
{
    e[i] = a[i] + b[i] + c[i] + d[i];
}
// e should equal { 4, 8, 12, 16 }
Into something like:
var e = VectorAdd(a,b,c,d);
I know something may exist in the C++ / XNA libraries, but I didn't know if we have it in the standard .Net libraries.
Thanks!
You will want to look at Mono.Simd:
http://tirania.org/blog/archive/2008/Nov-03.html
It supports SIMD in C#
using Mono.Simd;
//...
var a = new Vector4f( 1, 2, 3, 4 );
var b = new Vector4f( 1, 2, 3, 4 );
var c = new Vector4f( 1, 2, 3, 4 );
var d = new Vector4f( 1, 2, 3, 4 );
var e = a+b+c+d;
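On newer .NET runtimes, System.Numerics.Vector<T> offers a similar JIT-accelerated path without Mono; a minimal sketch, assuming System.Numerics.Vectors support is available:
using System.Numerics;

static int[] VectorAdd(int[] a, int[] b, int[] c, int[] d)
{
    var e = new int[a.Length];
    int width = Vector<int>.Count;   // SIMD lanes per register on this machine
    int i = 0;
    for (; i <= a.Length - width; i += width)
    {
        var sum = new Vector<int>(a, i) + new Vector<int>(b, i)
                + new Vector<int>(c, i) + new Vector<int>(d, i);
        sum.CopyTo(e, i);
    }
    for (; i < a.Length; i++)        // scalar tail for leftover elements
        e[i] = a[i] + b[i] + c[i] + d[i];
    return e;
}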
Mono provides a relatively decent SIMD API (as sehe mentions) but if Mono isn't an option I would probably write a C++/CLI interface library to do the heavy lifting. C# works pretty well for most problem sets but if you start getting into high performance code it's best to go to a language that gives you the control to really get dirty with performance.
Here at work we use P/Invoke to call image processing routines written in C++ from C#. P/Invoke has some overhead but if you make very few calls and do a lot of processing on the native side it can be worth it.
I guess it all depends on what you are doing, but if you are worried about vectorizing vector sums, you might want to take a look at a library such as Math.NET, which provides optimized numerical computations.
From their website:
It targets Microsoft .Net 4.0, Mono and Silverlight 4, and in addition to a purely managed implementation will also support native hardware optimization (MKL, ATLAS).
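For the element-wise sum itself, a minimal sketch with Math.NET's vector types, assuming the MathNet.Numerics package is referenced:
using MathNet.Numerics.LinearAlgebra;

var a = Vector<double>.Build.DenseOfArray(new double[] { 1, 2, 3, 4 });
var b = Vector<double>.Build.DenseOfArray(new double[] { 1, 2, 3, 4 });
var c = Vector<double>.Build.DenseOfArray(new double[] { 1, 2, 3, 4 });
var d = Vector<double>.Build.DenseOfArray(new double[] { 1, 2, 3, 4 });

// Element-wise addition; Math.NET dispatches to its managed or native provider.
var e = a + b + c + d;   // { 4, 8, 12, 16 }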
