In C#, let's say that I have an IEnumerable<Tuple<int, string>>
It is sorted by its int element. I want to do O(log n) lookups on it, and sure, I can do it like this, using System.Linq:
var lookups = alreadySorted.ToDictionary(x => x.Item1, x => x.Item2);
foreach (var i in someArray) someMethod(lookups[i]);
But that has one shortcoming: it has to sort the already-sorted IEnumerable again when making the Dictionary, because it can't know I have already sorted it. Is there any reasonable way to work around that?
EDIT: I've been told Dictionary is not for traditional binary search but for hash mapping, so this example should be using System.Collections.Generic.SortedSet instead.
If your goal is to obtain O(log n) lookups and to avoid any further sorting, you should use ToList() rather than ToDictionary().
You would then use BinarySearch with a custom IComparer that only compares on Item1 of the Tuple.
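For illustration, here's a minimal sketch of that idea; the comparer type name is a placeholder, and alreadySorted/someArray/someMethod are reused from the question:
using System;
using System.Collections.Generic;
using System.Linq;

// Comparer that orders tuples by Item1 only, so BinarySearch ignores the string part.
class Item1Comparer : IComparer<Tuple<int, string>>
{
    public int Compare(Tuple<int, string> x, Tuple<int, string> y)
        => x.Item1.CompareTo(y.Item1);
}

var lookups = alreadySorted.ToList();  // keeps the existing order, no re-sort
var comparer = new Item1Comparer();
foreach (var i in someArray)
{
    // search with a dummy tuple carrying the key we're looking for
    int index = lookups.BinarySearch(Tuple.Create(i, default(string)), comparer);
    if (index >= 0) someMethod(lookups[index].Item2);
}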
As a follow-up to Damien_The_Unbeliever's answer, I defined the following method inside a utility class, so I don't have to make new classes for the IComparer:
public static Tuple<int, int> IndexesOf<Et, Ct>(this IList<Et> haystack, Ct needle, Func<Et, Ct, int> func)
{
    int minStart = 0;
    int maxStart = haystack.Count;
    int minEnd = 0;
    int maxEnd = haystack.Count;
    // first binary search: narrow down the index of the first match (lower bound)
    while (minStart != maxStart)
    {
        int i = (minStart + maxStart) / 2;
        var direction = func(haystack[i], needle);
        if (direction < 0)
        {
            minStart = i + 1;
            minEnd = i + 1 < minEnd ? minEnd : i + 1;
        }
        else if (direction == 0)
        {
            maxStart = i;
            minEnd = i + 1 < minEnd ? minEnd : i + 1;
        }
        else
        {
            maxStart = i;
            maxEnd = i;
        }
    }
    // second binary search: narrow down the index one past the last match (upper bound)
    while (minEnd != maxEnd)
    {
        int i = (minEnd + maxEnd) / 2;
        var direction = func(haystack[i], needle);
        if (direction <= 0) minEnd = i + 1;
        else maxEnd = i;
    }
    return Tuple.Create(minStart, minEnd);
}
which is then used like
foreach (var i in someArray)
{
    someMethod(lookups[lookups.IndexesOf(i, (a, b) => a.Item1 - b).Item1]);
}
Of course it's somewhat wasteful to look for the index range of all matches instead of just the index of one match.
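If you only need a single index, a hypothetical lower-bound-only variant (same delegate idea, one binary search, placed in the same static utility class) could look like this:
public static int LowerBoundIndexOf<Et, Ct>(this IList<Et> haystack, Ct needle, Func<Et, Ct, int> func)
{
    int lo = 0, hi = haystack.Count;
    while (lo != hi)
    {
        int mid = (lo + hi) / 2;
        if (func(haystack[mid], needle) < 0) lo = mid + 1;
        else hi = mid;
    }
    return lo; // index of the first element that is not less than needle
}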
How can I write a function that accepts an array of integers and an integer x, and returns the index of the integer in the array that is closest to x without being larger than x? If all values are above x, the function should return negative one. Assume that no two numbers in the array are the same.
Assuming the numbers are unique, you could use:
public static int GetClosestIndex(int[] arr, int value)
{
var result = arr.Where(x => x < value).OrderByDescending(x => x);
return result.Any() ? Array.IndexOf(arr, result.FirstOrDefault()) : -1;
}
Update (for Zastai): here is a better-performing approach:
public static int GetClosestIndex(int[] arr, int value)
{
int result = -1;
for (int i = 0; i < arr.Length; i++)
{
if (arr[i] < value)
{
if (result == -1 || arr[i] > arr[result])
{
result = i;
}
}
}
return result;
}
I'd go with
public static class StackOverflow {
public static int IndexOfValueClosestTo<T>(this IList<T> list, T x) where T : IComparable<T> {
var closestValue = default(T);
var closestIndex = -1;
var idx = -1;
foreach (var v in list) {
++idx;
if (v.CompareTo(x) < 0 && (closestIndex == -1 || closestValue.CompareTo(v) < 0)) {
closestIndex = idx;
closestValue = v;
}
}
return closestIndex;
}
}
It is still very readable, works on arrays, lists, ... and handles integers, doubles, ...
It's also an extension method, so you can use both StackOverflow.IndexOfValueClosestTo(mylist, myvalue) and mylist.IndexOfValueClosestTo(myvalue).
Some basic benchmarks in LINQPad with an array of a million elements also show this to be 30 times faster than the LINQ-based answer (with 100 iterations this takes ~1.05s where the LINQ version takes ~37s).
For the absolute best performance, however, use a version specialized to int arrays:
public static int IndexOfValueClosestTo(this int[] list, int x) {
var closestValue = 0;
var closestIndex = -1;
var idx = -1;
foreach (var v in list) {
++idx;
if (v < x && (closestIndex == -1 || closestValue < v)) {
closestIndex = idx;
closestValue = v;
}
}
return closestIndex;
}
That runs the 100 iterations in ~0.33s, so it's about 3 times as fast as the generic version above. Note that you can define both as overloads, so you still have a working implementation for, say, a list of doubles.
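As a quick illustration (the arrays and values here are made up), with both overloads in scope the compiler picks the specialized one for int[] and the generic one for everything else:
int[] ints = { 3, 9, 14, 27 };
var doubles = new List<double> { 3.5, 9.25, 14.0 };
int a = ints.IndexOfValueClosestTo(10);      // int[] overload -> 1 (value 9)
int b = doubles.IndexOfValueClosestTo(10.0); // generic overload -> 1 (value 9.25)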
I need to "Find the minimal positive integer not occurring in a given sequence. "
A[0] = 1
A[1] = 3
A[2] = 6
A[3] = 4
A[4] = 1
A[5] = 2, the function should return 5.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [−2,147,483,648..2,147,483,647].
I wrote the code in Codility, but for many cases it did not work and the performance test gives 0%. Please help me out; where am I wrong?
class Solution {
public int solution(int[] A) {
if(A.Length ==0) return -1;
int value = A[0];
int min = A.Min();
int max = A.Max();
for (int j = min+1; j < max; j++)
{
if (!A.Contains(j))
{
value = j;
if(value > 0)
{
break;
}
}
}
if(value > 0)
{
return value;
}
else return 1;
}
}
Codility gives errors on all tests except the example, positive-only, and negative-only cases.
Edit: Added detail to answer your actual question more directly.
"Please help me out, where I am wrong."
In terms of correctness: Consider A = {7,2,5,6,3}. The correct output, given the contents of A, is 1, but our algorithm would fail to detect this, since A.Min() would return 2 and we would start looping from 3 onward. In this case, we would return 4 instead, since it's the next missing value.
Same goes for something like A = {14,15,13}. The minimal missing positive integer here is again 1, and since all the values from 13 to 15 are present, the value variable will retain its initial value of A[0], which would be 14.
In terms of performance: Consider what A.Min(), A.Max() and A.Contains() are doing behind the scenes; each one of these is looping through A in its entirety and in the case of Contains, we are calling it repeatedly for every value between the Min() and the lowest positive integer we can find. This will take us far beyond the specified O(N) performance that Codility is looking for.
By contrast, here's the simplest version I can think of that should score 100% on Codility. Notice that we only loop through A once and that we take advantage of a Dictionary which lets us use ContainsKey; a much faster method that does not require looping through the whole collection to find a value.
using System;
using System.Collections.Generic;
class Solution {
public int solution(int[] A) {
// the minimum possible answer is 1
int result = 1;
// let's keep track of what we find
Dictionary<int,bool> found = new Dictionary<int,bool>();
// loop through the given array
for(int i=0;i<A.Length;i++) {
// if we have a positive integer that we haven't found before
if(A[i] > 0 && !found.ContainsKey(A[i])) {
// record the fact that we found it
found.Add(A[i], true);
}
}
// crawl through what we found starting at 1
while(found.ContainsKey(result)) {
// look for the next number
result++;
}
// return the smallest positive number that we couldn't find.
return result;
}
}
The simplest solution that scored a perfect score was:
public int solution(int[] A)
{
int flag = 1;
A = A.OrderBy(x => x).ToArray();
for (int i = 0; i < A.Length; i++)
{
if (A[i] <= 0)
continue;
else if (A[i] == flag)
{
flag++;
}
}
return flag;
}
Fastest C# solution so far for [-1,000,000...1,000,000].
public int solution(int[] array)
{
HashSet<int> found = new HashSet<int>();
for (int i = 0; i < array.Length; i++)
{
if (array[i] > 0)
{
found.Add(array[i]);
}
}
int result = 1;
while (found.Contains(result))
{
result++;
}
return result;
}
A tiny version of another 100% solution, in C#:
using System.Linq;
class Solution
{
public int solution(int[] A)
{
// write your code in C# 6.0 with .NET 4.5 (Mono)
var i = 0;
return A.Where(a => a > 0).Distinct().OrderBy(a => a).Any(a => a != (i = i + 1)) ? i : i + 1;
}
}
A simple solution that scored 100% with C#
int Solution(int[] A)
{
var A2 = Enumerable.Range(1, A.Length + 1);
return A2.Except(A).First();
}
public class Solution {
public int solution( int[] A ) {
return Arrays.stream( A )
.filter( n -> n > 0 )
.sorted()
.reduce( 0, ( a, b ) -> ( ( b - a ) > 1 ) ? a : b ) + 1;
}
}
It seemed easiest to just filter out the negative numbers. Then sort the stream. And then reduce it to come to an answer. It's a bit of a functional approach, but it got a 100/100 test score.
Got a 100% score with this solution:
https://app.codility.com/demo/results/trainingUFKJSB-T8P/
public int MissingInteger(int[] A)
{
A = A.Where(a => a > 0).Distinct().OrderBy(c => c).ToArray();
if (A.Length== 0)
{
return 1;
}
for (int i = 0; i < A.Length; i++)
{
//Console.WriteLine(i + "=>" + A[i]);
if (i + 1 != A[i])
{
return i + 1;
}
}
return A.Max() + 1;
}
JavaScript solution using Hash Table with O(n) time complexity.
function solution(A) {
let hashTable = {}
for (let item of A) {
hashTable[item] = true
}
let answer = 1
while(true) {
if(!hashTable[answer]) {
return answer
}
answer++
}
}
The simplest solution for C# would be:
int value = 1;
if (A.Length == 0) return value;   // check for an empty array before calling Min()/Max()
int min = A.Min();
int max = A.Max();
if (min < 0 && max < 0) return value;
List<int> range = Enumerable.Range(1, max).ToList();
List<int> current = A.ToList();
List<int> valid = range.Except(current).ToList();
if (valid.Count == 0)
{
    max++;
    return max;
}
else
{
    return valid.Min();
}
This assumes the sequence should start from 1; if it instead needs to start from the minimum value, then the Enumerable.Range should start from Min.
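A hypothetical sketch of that variant, finding the first gap above the array's own minimum instead of above 1 (assumes A is non-empty):
int min = A.Min();
int max = A.Max();
// the range covers min..max+1, so there is always at least one missing candidate
int missing = Enumerable.Range(min, max - min + 2).Except(A).First();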
MissingInteger solution in C
#include <string.h>   /* for memset */

int solution(int A[], int N) {
    int i = 0, r[N];                  /* r is a C99 variable-length array */
    memset(r, 0, sizeof(r));
    for (i = 0; i < N; i++)
    {
        /* mark values in the range 1..N in their slot */
        if ((A[i] > 0) && (A[i] <= N)) r[A[i] - 1] = A[i];
    }
    for (i = 0; i < N; i++)
    {
        /* the first unmarked slot is the answer */
        if (r[i] != (i + 1)) return (i + 1);
    }
    return (N + 1);
}
My solution for it:
public static int solution()
{
var A = new[] { -1000000, 1000000 }; // You can try with different integers
A = A.OrderBy(i => i).ToArray(); // We sort the array first
if (A.Length == 1) // if there is only one item in the array
{
if (A[0]<0 || A[0] > 1)
return 1;
if (A[0] == 1)
return 2;
}
else // if there are more than one item in the array
{
for (var i = 0; i < A.Length - 1; i++)
{
if (A[i] >= 1000000) continue; // if it's bigger than 1M
if (A[i] < 0 || (A[i] + 1) >= (A[i + 1])) continue; //if it's smaller than 0, if the next integer is bigger or equal to next integer in the sequence continue searching.
if (1 < A[0]) return 1;
return A[i] + 1;
}
}
if (1 < A[0] || A[A.Length - 1] + 1 == 0 || A[A.Length - 1] + 1 > 1000000)
return 1;
return A[A.Length-1] +1;
}
class Solution {
public int solution(int[] A) {
int size=A.length;
int small,big,temp;
for (int i=0;i<size;i++){
for(int j=0;j<size;j++){
if(A[i]<A[j]){
temp=A[j];
A[j]=A[i];
A[i]=temp;
}
}
}
int z=1;
for(int i=0;i<size;i++){
if(z==A[i]){
z++;
}
//System.out.println(a[i]);
}
return z;
}
}
In C# you can solve the problem by making use of built-in library functions. However, the performance is low for very large integers.
public int solution(int[] A)
{
var numbers = Enumerable.Range(1, Math.Abs(A.Max())+1).ToArray();
return numbers.Except(A).ToArray()[0];
}
Let me know if you find a better solution performance-wise.
C# - MissingInteger
Find the smallest missing integer between 1 and 1,000,000.
The assumptions of the OP apply.
Task score/correctness/performance: 100%
using System;
using System.Linq;
namespace TestConsole
{
class Program
{
static void Main(string[] args)
{
var A = new int[] { -122, -5, 1, 2, 3, 4, 5, 6, 7 }; // 8
var B = new int[] { 1, 3, 6, 4, 1, 2 }; // 5
var C = new int[] { -1, -3 }; // 1
var D = new int[] { -3 }; // 1
var E = new int[] { 1 }; // 2
var F = new int[] { 1000000 }; // 1
var x = new int[][] { A, B, C, D, E, F };
x.ToList().ForEach((arr) =>
{
var s = new Solution();
Console.WriteLine(s.solution(arr));
});
Console.ReadLine();
}
}
// ANSWER/SOLUTION
class Solution
{
public int solution(int[] A)
{
// clean up array for negatives and duplicates, do sort
A = A.Where(entry => entry > 0).Distinct().OrderBy(it => it).ToArray();
int lowest = 1, aLength = A.Length, highestIndex = aLength - 1;
for (int i = 0; i < aLength; i++)
{
var currInt = A[i];
if (currInt > lowest) return lowest;
if (i == highestIndex) return ++lowest;
lowest++;
}
return 1;
}
}
}
Got 100% - C# Efficient Solution
public int solution (int [] A){
int len = A.Length;
HashSet<int> realSet = new HashSet<int>();
HashSet<int> perfectSet = new HashSet<int>();
int i = 0;
while ( i < len)
{
realSet.Add(A[i]); //convert array to set to get rid of duplicates, order int's
perfectSet.Add(i + 1); //create perfect set so can find missing int
i++;
}
perfectSet.Add(i + 1);
if (realSet.All(item => item < 0))
return 1;
int notContains =
perfectSet.Except(realSet).Where(item=>item!=0).FirstOrDefault();
return notContains;
}
class Solution {
public int solution(int[] a) {
int smallestPositive = 1;
while(a.Contains(smallestPositive)) {
smallestPositive++;
}
return smallestPositive;
}
}
Well, this is a new winner now. At least on C# and my laptop. It's 1.5-2 times faster than the previous champion and 3-10 times faster, than most of the other solutions. The feature (or a bug?) of this solution is that it uses only basic data types. Also 100/100 on Codility.
public int Solution(int[] A)
{
bool[] B = new bool[(A.Length + 1)];
for (int i = 0; i < A.Length; i++)
{
if ((A[i] > 0) && (A[i] <= A.Length))
B[A[i]] = true;
}
for (int i = 1; i < B.Length; i++)
{
if (!B[i])
return i;
}
return A.Length + 1;
}
Simple C++ solution. No additional memory needed; execution time is O(N*log(N)):
int solution(vector<int> &A) {
sort (A.begin(), A.end());
int prev = 0; // the biggest integer greater than 0 found until now
for( auto it = std::begin(A); it != std::end(A); it++ ) {
if( *it > prev+1 ) break;// gap found
if( *it > 0 ) prev = *it; // ignore integers smaller than 1
}
return prev+1;
}
int[] A = {1, 3, 6, 4, 1, 2};
Set<Integer> integers = new TreeSet<>();
for (int i = 0; i < A.length; i++) {
if (A[i] > 0) {
integers.add(A[i]);
}
}
Integer[] arr = integers.toArray(new Integer[0]);
final int[] result = {Integer.MAX_VALUE};
final int[] prev = {0};
final int[] curr2 = {1};
integers.stream().forEach(integer -> {
if (prev[0] + curr2[0] == integer) {
prev[0] = integer;
} else {
result[0] = prev[0] + curr2[0];
}
});
if (Integer.MAX_VALUE == result[0]) result[0] = arr[arr.length-1] + 1;
System.out.println(result[0]);
I was surprised, but this was a good lesson: LINQ IS SLOW. My answer below got me 11%.
public int solution (int [] A){
if (Array.FindAll(A, x => x >= 0).Length == 0) {
return 1;
} else {
var lowestValue = A.Where(x => Array.IndexOf(A, (x+1)) == -1).Min();
return lowestValue + 1;
}
}
I think I look at this a bit differently, but it gets a 100% evaluation. Also, I used no library:
public static int Solution(int[] A)
{
var arrPos = new int[1_000_001];
for (int i = 0; i < A.Length; i++)
{
if (A[i] >= 0)
arrPos[A[i]] = 1;
}
for (int i = 1; i < arrPos.Length; i++)
{
if (arrPos[i] == 0)
return i;
}
return 1;
}
public int solution(int[] A) {
// write your code in Java SE 8
Set<Integer> elements = new TreeSet<Integer>();
long lookFor = 1;
for (int i = 0; i < A.length; i++) {
elements.add(A[i]);
}
for (Integer integer : elements) {
if (integer == lookFor)
lookFor += 1;
}
return (int) lookFor;
}
I tried to use recursion in C# instead of sorting, because I thought it would show more coding skill to do it that way, but it didn't perform well on the large scaling/performance tests. I suppose it's best to just do it the easy way.
class Solution {
public int lowest=1;
public int solution(int[] A) {
// write your code in C# 6.0 with .NET 4.5 (Mono)
if (A.Length < 1)
return 1;
for (int i=0; i < A.Length; i++){
if (A[i]==lowest){
lowest++;
solution(A);
}
}
return lowest;
}
}
Here is my solution in JavaScript:
function solution(A) {
// write your code in JavaScript (Node.js 8.9.4)
let result = 1;
let haveFound = {}
let len = A.length
for (let i=0;i<len;i++) {
haveFound[`${A[i]}`] = true
}
while(haveFound[`${result}`]) {
result++
}
return result
}
class Solution {
public int solution(int[] A) {
var sortedList = A.Where(x => x > 0).Distinct().OrderBy(x => x).ToArray();
var output = 1;
for (int i = 0; i < sortedList.Length; i++)
{
if (sortedList[i] != output)
{
return output;
}
output++;
}
return output;
}
}
You should just use a HashSet instead of a Dictionary, as its lookup time is also constant. The code is shorter and cleaner.
public int solution (int [] A){
int answer = 1;
var set = new HashSet<int>(A);
while (set.Contains(answer)){
answer++;
}
return answer;
}
This snippet should work correctly.
using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        int result = 1;
        List<int> lst = new List<int>();
        lst.Add(1);
        lst.Add(2);
        lst.Add(3);
        lst.Add(18);
        lst.Add(4);
        lst.Add(1000);
        lst.Add(-1);
        lst.Add(-1000);
        lst.Sort();
        if (lst.Contains(1))
        {
            // walk the sorted values and stop at the first gap in the positive run
            foreach (int curVal in lst)
            {
                if (curVal <= 0)
                    continue;
                if (!lst.Contains(curVal + 1))
                {
                    result = curVal + 1;
                    break;
                }
            }
        }
        Console.WriteLine(result); // prints 5 for this list
    }
}
I have a list of Offers, from which I want to create "chains" (e.g. permutations) with limited chain lengths.
I've gotten as far as creating the permutations using the Kw.Combinatorics project.
However, the default behavior creates permutations of the full list length. I'm not sure how to limit the chain lengths to 'n'.
Here's my current code:
private static List<List<Offers>> GetPerms(List<Offers> list, int chainLength)
{
List<List<Offers>> response = new List<List<Offers>>();
foreach (var row in new Permutation(list.Count).GetRows())
{
List<Offers> innerList = new List<Offers>();
foreach (var mix in Permutation.Permute(row, list))
{
innerList.Add(mix);
}
response.Add(innerList);
innerList = new List<Offers>();
}
return response;
}
Implemented by:
List<List<AdServer.Offers>> lst = GetPerms(offers, 2);
I'm not locked into Kw.Combinatorics if someone has a better solution to offer.
Here's another implementation which I think should be faster than the accepted answer (and it's definitely less code).
public static IEnumerable<IEnumerable<T>> GetVariationsWithoutDuplicates<T>(IList<T> items, int length)
{
if (length == 0 || !items.Any()) return new List<List<T>> { new List<T>() };
return from item in items.Distinct()
from permutation in GetVariationsWithoutDuplicates(items.Where(i => !EqualityComparer<T>.Default.Equals(i, item)).ToList(), length - 1)
select Prepend(item, permutation);
}
public static IEnumerable<IEnumerable<T>> GetVariations<T>(IList<T> items, int length)
{
if (length == 0 || !items.Any()) return new List<List<T>> { new List<T>() };
return from item in items
from permutation in GetVariations(Remove(item, items).ToList(), length - 1)
select Prepend(item, permutation);
}
public static IEnumerable<T> Prepend<T>(T first, IEnumerable<T> rest)
{
yield return first;
foreach (var item in rest) yield return item;
}
public static IEnumerable<T> Remove<T>(T item, IEnumerable<T> from)
{
var isRemoved = false;
foreach (var i in from)
{
if (!EqualityComparer<T>.Default.Equals(item, i) || isRemoved) yield return i;
else isRemoved = true;
}
}
On my 3.1 GHz Core 2 Duo, I tested with this:
public static void Test(Func<IList<int>, int, IEnumerable<IEnumerable<int>>> getVariations)
{
var max = 11;
var timer = System.Diagnostics.Stopwatch.StartNew();
for (int i = 1; i < max; ++i)
for (int j = 1; j < i; ++j)
getVariations(MakeList(i), j).Count();
timer.Stop();
Console.WriteLine("{0,40}{1} ms", getVariations.Method.Name, timer.ElapsedMilliseconds);
}
// Make a list that repeats to guarantee we have duplicates
public static IList<int> MakeList(int size)
{
return Enumerable.Range(0, size/2).Concat(Enumerable.Range(0, size - size/2)).ToList();
}
Unoptimized
GetVariations 11894 ms
GetVariationsWithoutDuplicates 9 ms
OtherAnswerGetVariations 22485 ms
OtherAnswerGetVariationsWithDuplicates 243415 ms
With compiler optimizations
GetVariations 9667 ms
GetVariationsWithoutDuplicates 8 ms
OtherAnswerGetVariations 19739 ms
OtherAnswerGetVariationsWithDuplicates 228802 ms
You're not looking for a permutation, but for a variation. Here is a possible algorithm. I prefer iterator methods for functions that can potentially return very many elements; this way, the caller can decide whether they really need all the elements:
IEnumerable<IList<T>> GetVariations<T>(IList<T> offers, int length)
{
var startIndices = new int[length];
var variationElements = new HashSet<T>(); //for duplicate detection
while (startIndices[0] < offers.Count)
{
var variation = new List<T>(length);
var valid = true;
for (int i = 0; i < length; ++i)
{
var element = offers[startIndices[i]];
if (variationElements.Contains(element))
{
valid = false;
break;
}
variation.Add(element);
variationElements.Add(element);
}
if (valid)
yield return variation;
//Count up the indices
startIndices[length - 1]++;
for (int i = length - 1; i > 0; --i)
{
if (startIndices[i] >= offers.Count)
{
startIndices[i] = 0;
startIndices[i - 1]++;
}
else
break;
}
variationElements.Clear();
}
}
The idea for this algorithm is to use a number in base offers.Count. For three offers, all digits are in the range 0-2. We then basically increment this number step by step and return the offers that reside at the specified indices. If you want to allow duplicates, you can remove the check and the HashSet<T>.
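As a small hypothetical usage example (offer values made up), with three offers and length 2 the index "number" runs 00, 01, 02, 10, ..., 22, and combinations with a repeated index are skipped:
var offers = new List<string> { "A", "B", "C" };
foreach (var variation in GetVariations(offers, 2))
    Console.WriteLine(string.Join(", ", variation));
// output, one variation per line: A, B / A, C / B, A / B, C / C, A / C, B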
Update
Here is an optimized variant that does the duplicate check on the index level. In my tests it is a lot faster than the previous variant:
IEnumerable<IList<T>> GetVariations<T>(IList<T> offers, int length)
{
var startIndices = new int[length];
for (int i = 0; i < length; ++i)
startIndices[i] = i;
var indices = new HashSet<int>(); // for duplicate check
while (startIndices[0] < offers.Count)
{
var variation = new List<T>(length);
for (int i = 0; i < length; ++i)
{
variation.Add(offers[startIndices[i]]);
}
yield return variation;
//Count up the indices
AddOne(startIndices, length - 1, offers.Count - 1);
//duplicate check
var check = true;
while (check)
{
indices.Clear();
for (int i = 0; i <= length; ++i)
{
if (i == length)
{
check = false;
break;
}
if (indices.Contains(startIndices[i]))
{
var unchangedUpTo = AddOne(startIndices, i, offers.Count - 1);
indices.Clear();
for (int j = 0; j <= unchangedUpTo; ++j )
{
indices.Add(startIndices[j]);
}
int nextIndex = 0;
for(int j = unchangedUpTo + 1; j < length; ++j)
{
while (indices.Contains(nextIndex))
nextIndex++;
startIndices[j] = nextIndex++;
}
break;
}
indices.Add(startIndices[i]);
}
}
}
}
int AddOne(int[] indices, int position, int maxElement)
{
//returns the index of the last element that has not been changed
indices[position]++;
for (int i = position; i > 0; --i)
{
if (indices[i] > maxElement)
{
indices[i] = 0;
indices[i - 1]++;
}
else
return i;
}
return 0;
}
If I got you correctly, here is what you need. This will create permutations based on the specified chain limit:
public static List<List<T>> GetPerms<T>(List<T> list, int chainLimit)
{
if (list.Count() == 1)
return new List<List<T>> { list };
return list
.Select((outer, outerIndex) =>
GetPerms(list.Where((inner, innerIndex) => innerIndex != outerIndex).ToList(), chainLimit)
.Select(perms => (new List<T> { outer }).Union(perms).Take(chainLimit)))
.SelectMany<IEnumerable<IEnumerable<T>>, List<T>>(sub => sub.Select<IEnumerable<T>, List<T>>(s => s.ToList()))
.Distinct(new PermComparer<T>()).ToList();
}
class PermComparer<T> : IEqualityComparer<List<T>>
{
public bool Equals(List<T> x, List<T> y)
{
return x.SequenceEqual(y);
}
public int GetHashCode(List<T> obj)
{
return (int)obj.Average(o => o.GetHashCode());
}
}
and you'll call it like this:
List<List<AdServer.Offers>> lst = GetPerms<AdServer.Offers>(offers, 2);
I made this function pretty generic, so you may use it for other purposes too, e.g.
List<string> list = new List<string>(new[] { "apple", "banana", "orange", "cherry" });
List<List<string>> perms = GetPerms<string>(list, 2);
The result is the 12 distinct two-element chains: [apple, banana], [apple, orange], [apple, cherry], [banana, apple], and so on.
How do I get the most common value in an int array using C#?
E.g., the array has the values 1, 1, 1, 2.
The answer should be 1.
var query = (from item in array
group item by item into g
orderby g.Count() descending
select new { Item = g.Key, Count = g.Count() }).First();
For just the value and not the count, you can do
var query = (from item in array
group item by item into g
orderby g.Count() descending
select g.Key).First();
Lambda version of the second:
var query = array.GroupBy(item => item).OrderByDescending(g => g.Count()).Select(g => g.Key).First();
Some old-fashioned efficient looping:
var cnt = new Dictionary<int, int>();
foreach (int value in theArray) {
if (cnt.ContainsKey(value)) {
cnt[value]++;
} else {
cnt.Add(value, 1);
}
}
int mostCommonValue = 0;
int highestCount = 0;
foreach (KeyValuePair<int, int> pair in cnt) {
if (pair.Value > highestCount) {
mostCommonValue = pair.Key;
highestCount = pair.Value;
}
}
Now mostCommonValue contains the most common value, and highestCount contains how many times it occurred.
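For example, with the array from the question, after the two loops above have run:
// theArray = new[] { 1, 1, 1, 2 }  ->  mostCommonValue == 1, highestCount == 3
Console.WriteLine($"{mostCommonValue} occurs {highestCount} times");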
I know this post is old, but someone asked me the inverse of this question today.
LINQ Grouping
sourceArray.GroupBy(value => value).OrderByDescending(group => group.Count()).First().First();
Temp Collection, similar to Guffa's:
var counts = new Dictionary<int, int>();
foreach (var i in sourceArray)
{
if (!counts.ContainsKey(i)) { counts.Add(i, 0); }
counts[i]++;
}
return counts.OrderByDescending(kv => kv.Value).First().Key;
public static int get_occure(int[] a)
{
    int[] arr = a;
    int maxcount = 0, result = 0;
    for (int i = 0; i < arr.Length; i++)
    {
        // count how many times arr[i] appears in the whole array
        int c = 1;
        for (int j = 0; j < arr.Length; j++)
        {
            if (arr[i] == arr[j] && j != i)
            {
                c++;
            }
        }
        // keep the value with the highest count seen so far
        if (c > maxcount)
        {
            maxcount = c;
            result = arr[i];
        }
    }
    return result;
}
Maybe O(n log n), but fast:
// assuming a is the int[] array and a.Length > 0
Array.Sort(a);
int n = a.Length;
int iBest = -1; // index of the first number in the most popular run
int nBest = -1; // how many times the most popular number occurs
// walk each run of equal values in the sorted array
for (int i = 0; i < n; )
{
    int ii = i; // ii = index of the first number in the run
    int nn = 0; // nn = count of numbers in the run
    // count the numbers in this run
    for (; i < n && a[i] == a[ii]; i++, nn++) { }
    // if this run is longer than the best so far, remember it as the new best
    if (nBest < nn) { nBest = nn; iBest = ii; }
}
// print the most popular value and how popular it is
Console.WriteLine($"{a[iBest]} ({nBest} times)");
Yet another solution with LINQ:
static int[] GetMostCommonIntegers(int[] nums)
{
return nums
.ToLookup(n => n)
.ToLookup(l => l.Count(), l => l.Key)
.OrderBy(l => l.Key)
.Last()
.ToArray();
}
This solution can handle the case when several numbers have the same number of occurrences:
[1,4,5,7,1] => [1]
[1,1,2,2,3,4,5] => [1,2]
[6,6,6,2,2,1] => [6]