Import addition function from F# to C#

I recently learned that you can import functions from the F# core library into C#, such as:
using Microsoft.FSharp.Core;
...
var max = new List<int>() { 1, 2, 3 }.Aggregate(int.MinValue, Operators.Max);
Is there an equivalent function available for addition? I can't seem to find one. It would be cool to be able to write
var sum = new List<int>() { 1, 2, 3 }.Aggregate(0, Operators.Add);

There is an addition operator in the Operators module: Operators.(+)
You can consume it from C# like this:
var sum = new List<int>() { 1, 2, 3 }.Aggregate(0, Operators.op_Addition<int, int, int>);
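Putting the two together, a minimal sketch (assuming the FSharp.Core assembly is referenced) might look like this:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.FSharp.Core; // requires a reference to FSharp.Core

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // F# `max` and `+` consumed as ordinary .NET methods
        var max = numbers.Aggregate(int.MinValue, Operators.Max);
        var sum = numbers.Aggregate(0, Operators.op_Addition<int, int, int>);

        Console.WriteLine("max = {0}, sum = {1}", max, sum); // max = 3, sum = 6
    }
}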

Related

How the first array entry differs from the second

I cannot understand the difference between the array declaration with initialization in the first case and in the second:
int[] array = new int[3] { 1, 2, 3 };
int[] secondArray = { 1, 2, 3 };
They seem to do the same thing, maybe they work differently?
There is no difference in the result between the two lines shown:
int[] array = new int[3] { 1, 2, 3 };
int[] secondArray = { 1, 2, 3 };
However, there are practical differences between the new int[n] { ... } syntax and the bare { ... } initializer:
Implicit typing is not available with the bare array initializer:
var a1 = new int[3] { 1, 2, 3 }; // OK
var a2 = { 1, 2, 3 }; // Error: Cannot initialize an implicitly-typed variable with an array initializer
// BTW. You can omit the size
var a3 = new int[] { 1, 2, 3 }; // OK
With the bare initializer syntax you cannot specify the size; it is always inferred. With new int[n] you can specify a size without any initializer:
var a1 = new int[100]; // Array with 100 elements (all 0)
int[] a2 = { }; // Array with no elements
There is no difference in the compiled code between the two lines.
The second one is just a shortcut. Both statements have the same result. The shorter variant just wasn't available in early versions of C#.
The first one uses 3 as the array size explicitly; in the second one the size is inferred.
The explicit-size form also works if you don't want to initialize the values.
There is no difference between these two array initialization syntaxes in terms of how the compiler translates them into IL (you can play with it at sharplab.io), and both are the same as the following one:
int[] thirdArray = new int[] { 1, 2, 3 };
The only difference appears when you use them with an already declared variable: you can use the first and third forms to assign a new value to an existing array variable, but not the second one:
int[] arr;
arr = new int[3] { 1, 2, 3 }; // works
// arr = { 1, 2, 3 }; // won't compile
arr = new int[] { 1, 2, 3 }; // works
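One practical consequence (a small sketch, not from the original answers): the bare { ... } initializer is only valid in a declaration, so anywhere an expression is required, such as a method argument, you need one of the new forms:
static int Sum(int[] values)
{
    int total = 0;
    foreach (var v in values) total += v;
    return total;
}

static void Demo()
{
    // Sum({ 1, 2, 3 });          // won't compile: the bare initializer is not an expression
    Sum(new int[3] { 1, 2, 3 });  // works
    Sum(new int[] { 1, 2, 3 });   // works
    Sum(new[] { 1, 2, 3 });       // works: element type inferred
}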

How to perform Change Point Analysis using R.NET

I am trying to perform change point analysis using R.NET with the code below:
REngine.SetEnvironmentVariables();
REngine engine = REngine.GetInstance();
double[] data = new double[] { 1, 2, 3, 4, 5, 6 };
NumericVector vector = engine.CreateNumericVector(data);
engine.SetSymbol("mydatapoints", vector);
engine.Evaluate("library(changepoint)");
engine.Evaluate("chpoints = cpt.mean(mydatapoints, method="BinSeg")");
DynamicVector result = engine.Evaluate("x<-cpts(chpoints)").AsVector();
engine.Dispose();
I am receiving the error below on engine.Evaluate("library(changepoint)"):
Error in library(changepoint) : there is no package called
'changepoint'
Edit #1
The changepoint package has to be installed explicitly; it is not there by default. I installed it using RGui -> Packages -> Load package.
Now the error has changed to:
Status Error for chpoints = cpt.mean(mydatapoints, method=”BinSeg”) :
unexpected input
Edit #2
After fixing the first two errors, the following one appears on the second Evaluate statement:
Error in BINSEG(sumstat, pen = pen.value, cost_func = costfunc,
minseglen = minseglen, : Q is larger than the maximum number of
segments 4
The same error appears on R as well using these commands
value.ts <- c(29.89, 29.93, 29.72, 29.98)
chpoints = cpt.mean(value.ts, method="BinSeg")
The error is not in your calling code but rather in your use of R (as you apparently now realize), so labeling this as something to do with rdotnet or c# seems misleading:
mydatapoints <- c(1, 2, 3, 4, 5, 6 )
library(changepoint);
chpoints = cpt.mean(mydatapoints, method="BinSeg");
#Error in BINSEG(sumstat, pen = pen.value, cost_func = costfunc, minseglen = minseglen, :
# Q is larger than the maximum number of segments 4
I'm not sure what you intended. Change-point analysis generally requires paired datapoints ... x-y and all that jazz. And giving R regression functions perfectly linear data is also unwise. It often causes non-invertible matrices.
I suggest you search with https://stackoverflow.com/search?q=%5Br%5D+changepoint to find a simple bit of code to build into your REngine calling scheme.
The data points are supposed to be converted into a time series.
REngine.SetEnvironmentVariables();
REngine engine = REngine.GetInstance();
double[] data = new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
NumericVector vector = engine.CreateNumericVector(data);
engine.Evaluate("library(changepoint)");
engine.SetSymbol("values", vector);
engine.Evaluate("values.ts = ts(values, frequency = 12, start = c(2017, 1))");
engine.Evaluate("chpoints = cpt.mean(values.ts, method=\"BinSeg\")");
var result = engine.GetSymbol("chpoints");
engine.Dispose();
Now looking for how to get the results back into C#: either chpoints itself or the result of plot(chpoints).
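For what it's worth, one way to pull the detected change points back into C# could be a call along these lines (a sketch that assumes R.NET's AsNumeric()/ToArray() helpers behave as documented; untested here):
// Evaluate cpts() on the fitted object and marshal the result back to managed doubles.
NumericVector cpts = engine.Evaluate("cpts(chpoints)").AsNumeric();
double[] changePoints = cpts.ToArray();

foreach (var cp in changePoints)
{
    Console.WriteLine(cp); // index of each detected change point
}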

How can I make my procedure for finding the Nth most frequent element in an array more efficient and compact?

Here's an example of a solution I came up with
using System;
using System.Linq;
using System.Collections.Generic;
public class Program
{
    public static void Main()
    {
        int[] arr = new int[] { 1, 2, 2, 3, 3, 3, 4, 4, 4, 4 };
        var countlist = arr.Aggregate(new Dictionary<int, int>(), (D, i) => {
                D[i] = D.ContainsKey(i) ? (D[i] + 1) : 1;
                return D;
            })
            .AsQueryable()
            .OrderByDescending(x => x.Value)
            .Select(x => x.Key)
            .ToList();

        // print the element which appears with the second
        // highest frequency in arr
        Console.WriteLine(countlist[1]); // should print 3
    }
}
At the very least, I would like to figure out how to
Cut down the query clauses by at least one. While I don't see any redundancy, this is the type of LINQ query where I fret about the overhead of all the intermediate structures created.
Figure out how to not return an entire list at the end. I just want the 2nd element in the enumerated sequence; I shouldn't need to return the entire list for the purpose of getting a single element out of it.
int[] arr = new int[] { 1, 2, 2, 3, 3, 3, 4, 4, 4, 4 };
var lookup = arr.ToLookup(t => t);
var result = lookup.OrderByDescending(t => t.Count());
Console.WriteLine(result.ElementAt(1).Key);
I would do this.
int[] arr = new int[] { 1, 2, 2, 3, 3, 3, 4, 4, 4, 4 };
int rank = 2;
var item = arr.GroupBy(x => x)                    // Group them
              .OrderByDescending(x => x.Count())  // Sort based on number of occurrences
              .Skip(rank - 1)                     // Traverse to the position
              .FirstOrDefault();                  // Take the element
if (item != null)
{
    Console.WriteLine(item.Key);
    // output - 3
}
I started to answer, saw the above answers and thought I'd compare them instead.
Here is the Fiddle.
I put a stopwatch on each and took the number of ticks for each one. The results were:
Original: 50600
Berkser: 15970
Tommy: 3413
Hari: 1601
user3185569: 1571
It appears user3185569 has a slightly faster algorithm than Hari and is about 30-40 times quicker than the OP's original version. Note that in user3185569's answer above, his appears to be faster when scaled.
Update: the numbers I posted above were run on my PC. Using .NET Fiddle to execute produces different results:
Original: 46842
Berkser: 44620
Tommy: 11922
Hari: 13095
user3185569: 16491
Putting the Berkser algorithm slightly faster. I'm not entirely clear why this is the case, as I'm targeting the same .NET version.
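For context, the timing harness was along these lines (a rough sketch, not the exact Fiddle code; RunHariSolution and the other wrappers are hypothetical names for each answer's code):
using System;
using System.Diagnostics;

static long Measure(Action candidate)
{
    var sw = Stopwatch.StartNew();
    candidate();            // run the solution being timed
    sw.Stop();
    return sw.ElapsedTicks; // ticks, as reported in the tables above
}

// e.g. Console.WriteLine("Hari: " + Measure(RunHariSolution));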
I came up with the following mash of LINQ and a dictionary, as what you're looking for is essentially an ordered dictionary:
void Run()
{
    int[] arr = new int[] { 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4 };
    int[] unique = arr.Distinct().ToArray();
    Dictionary<int, int> dictionary = unique.ToDictionary(k => k, v => 0);
    for (int i = 0; i < arr.Length; i++)
    {
        if (dictionary.ContainsKey(arr[i]))
        {
            dictionary[arr[i]]++;
        }
    }
    List<KeyValuePair<int, int>> solution = dictionary.ToList();
    solution.Sort((x, y) => -1 * x.Value.CompareTo(y.Value));
    System.Console.WriteLine(solution[2].Key);
}

C# Vectorized Array Addition

Is there any way to "vectorize" the addition of elements across arrays in a SIMD fashion?
For example, I would like to turn:
var a = new[] { 1, 2, 3, 4 };
var b = new[] { 1, 2, 3, 4 };
var c = new[] { 1, 2, 3, 4 };
var d = new[] { 1, 2, 3, 4 };
var e = new int[4];
for (int i = 0; i < a.Length; i++)
{
    e[i] = a[i] + b[i] + c[i] + d[i];
}
// e should equal { 4, 8, 12, 16 }
Into something like:
var e = VectorAdd(a,b,c,d);
I know something may exist in the C++ / XNA libraries, but I didn't know if we have it in the standard .Net libraries.
Thanks!
You will want to look at Mono.Simd:
http://tirania.org/blog/archive/2008/Nov-03.html
It supports SIMD in C#
using Mono.Simd;
//...
var a = new Vector4f( 1, 2, 3, 4 );
var b = new Vector4f( 1, 2, 3, 4 );
var c = new Vector4f( 1, 2, 3, 4 );
var d = new Vector4f( 1, 2, 3, 4 );
var e = a+b+c+d;
Mono provides a relatively decent SIMD API (as sehe mentions) but if Mono isn't an option I would probably write a C++/CLI interface library to do the heavy lifting. C# works pretty well for most problem sets but if you start getting into high performance code it's best to go to a language that gives you the control to really get dirty with performance.
Here at work we use P/Invoke to call image processing routines written in C++ from C#. P/Invoke has some overhead but if you make very few calls and do a lot of processing on the native side it can be worth it.
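As a rough illustration of that approach (NativeMath.dll and vector_add_4 are hypothetical names for a library you would write yourself in C/C++), the C# side could look like this:
using System.Runtime.InteropServices;

static class NativeVector
{
    // Hypothetical native routine that adds four int arrays element-wise into `result`.
    [DllImport("NativeMath.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern void vector_add_4(int[] a, int[] b, int[] c, int[] d, int[] result, int length);

    public static int[] Add(int[] a, int[] b, int[] c, int[] d)
    {
        var e = new int[a.Length];
        vector_add_4(a, b, c, d, e, a.Length); // one P/Invoke call, all the arithmetic on the native side
        return e;
    }
}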
I guess it all depends on what you are doing, but if you are worried about vectorizing vector sums, you might want to take a look at a library such as Math.NET, which provides optimized numerical computations.
From their website:
It targets Microsoft .Net 4.0, Mono and Silverlight 4, and in addition to a purely managed implementation will also support native hardware optimization (MKL, ATLAS).

C# in VS2005: is there set style notation for integers?

For C# in VS2005, can you do something like this:
if number in [1,2..10,12] { ... }
which would check if number is contained in the set defined in the square brackets?
.NET 2.0 (which is what VS 2005 targets) doesn't have the notion of a Set.
.NET 3.5 introduced HashSet<T>, and .NET 4 introduced SortedSet<T>.
There isn't a literal form for them though - although collection initializers provide something slightly similar:
new HashSet<int> { 1, 2, 4, 12 }
Of course, you could just use an array:
int[] values = { 1, 2, 5, 12 };
but the range part of your sample - 2..10 - doesn't exist in any version of C#.
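If you do want something close to the sample set [1, 2..10, 12] on .NET 3.5 or later, one option (a sketch, not part of the original answer) is to build the set once and test membership against it:
// 1, then 2..10 (Enumerable.Range(2, 9) yields nine values starting at 2), then 12
var allowed = new HashSet<int>(Enumerable.Range(2, 9)) { 1, 12 };

if (allowed.Contains(number)) // `number` is the value being tested, as in the question
{
    // ...
}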
Unfortunately not.
However, you can use the Contains() method of a List<int>:
List<int> numbers = ...
if (numbers.Contains(2)) { ... }
if numbers is an array, you can either initialize a new List<int> with the array values:
int[] numbers = { 1, 2, 3, 4 };
List<int> newList = new List<int>(numbers);
if (newList.Contains(2)) { ... }
or use Array.Exists():
Array.Exists(numbers, delegate(int i) { return i == 2; });
You can "kind of" do what you want using the Enumerable.Range method:
if (Enumerable.Range(2, 9).Concat(new[] { 1, 12 }).Contains(number)) {
    ....
}
Of course, that's not nearly as readable as what you find in a basic functional language...
