I need ideas for a simple, lightweight way to get the max and min values of an array of doubles. The trick is I only need the max and min between two indexes in the array.
The built-in arrayOfDoubles.Max() and arrayOfDoubles.Min() will not work, because they check the entire array.
This code will be running constantly and needs to be somewhat efficient. That said, simplicity and readability are more important than speed alone.
Here's one way to get the max and min between two indexes:
double[] array = new double[8] { 3, 1, 15, 5, 11, 9, 13, 7 };
int startIndex = 3;
int endIndex = 6;
// Get max and min between these two indexes in the array
double max = GetMax(array, startIndex, endIndex);
double min = GetMin(array, startIndex, endIndex);
Console.WriteLine("Max = " + max + ", Min = " + min);
Here is the code for GetMax, and GetMin would be very similar:
private static double GetMax(double[] array, int startIndex, int endIndex)
{
    double max = double.MinValue;
    for (int i = startIndex; i <= endIndex; i++)
    {
        // Increase max with each bigger value
        max = Math.Max(max, array[i]);
    }
    return max;
}
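And GetMin is the mirror image (a sketch of the "very similar" method mentioned above):
private static double GetMin(double[] array, int startIndex, int endIndex)
{
    double min = double.MaxValue;
    for (int i = startIndex; i <= endIndex; i++)
    {
        // Decrease min with each smaller value
        min = Math.Min(min, array[i]);
    }
    return min;
}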
And the output: Max = 13, Min = 5
The Question: What other ways could I get the same result that might be simpler and wouldn't require too much overhead?
var list = array.Skip(startIndex).Take(endIndex - startIndex + 1).ToList();
var min = list.Min();
var max = list.Max();
You can also do:
var segment = new ArraySegment<double>(array, startIndex, endIndex - startIndex + 1);
var min = segment.Min();
var max = segment.Max();
It should be obvious that finding the minimum and maximum elements in an unsorted array requires an O(n) algorithm, because no matter how you do it you have to examine each element at least once. So the only optimizations you can hope for are implementation specific and can only bring you gains in terms of the constants.
There is however a clever trick you can employ for finding both the minimum and the maximum in the array using only 3*[n/2] comparisons. While this is not an asymptotic improvement, it is an improvement of n/2 compared to the 2*n comparisons needed by the straightforward algorithm. So, if you expect to be performing lots of these min/max computations, it may be worth considering as an option.
The way you do it is like this:
maintain both minimum and maximum elements so far in two variables
at each iteration:
compare the next two elements between each other to determine which one is larger
compare the larger element with the maximum (and swap if necessary)
compare the smaller element with the minimum (and swap if necessary)
You will be executing n/2 iterations of that loop, with 3 comparisons per iteration, for a total of 3*[n/2] comparisons.
Here's an implementation:
void GetMinMax(double[] array, int start, int end, out double min, out double max)
{
    min = array[start];
    max = array[start];
    if ((end - start) % 2 == 0) // if there's an odd number of elements
        start = start + 1;      // skip the first one
    for (int i = start + 1; i <= end; i += 2)
    {
        if (array[i] > array[i - 1])
        {
            if (array[i] > max) max = array[i];
            if (array[i - 1] < min) min = array[i - 1];
        }
        else
        {
            if (array[i - 1] > max) max = array[i - 1];
            if (array[i] < min) min = array[i];
        }
    }
}
You can do this by scanning the array once. This is the standard min/max tracking algorithm that is asked in a bazillion interview questions, so readability shouldn't be an issue.
void GetMinMax(double[] array, int start, int end, out double min, out double max)
{
    min = Double.MaxValue;
    max = Double.MinValue;
    // TODO: error checks for out-of-bounds, null, etc...
    for (int i = start; i <= end; ++i)
    {
        if (array[i] > max) max = array[i];
        if (array[i] < min) min = array[i];
    }
}
Edit:
Using Usman Zafar's answer to simplify and remove the need for several error checks:
void GetMinMax(IList<double> list, out double min, out double max)
{
    // TODO: null/empty check for list.
    min = Double.MaxValue;
    max = Double.MinValue;
    for (int i = 0; i < list.Count; ++i)
    {
        if (list[i] > max) max = list[i];
        if (list[i] < min) min = list[i];
    }
}
Now call with any IList<double>, or in your case: new ArraySegment<double>(array, 3, 4).
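For illustration, a minimal usage sketch with the sample data from the question (this assumes .NET 4.5 or later, where ArraySegment<T> implements IList<T>):
double[] array = new double[8] { 3, 1, 15, 5, 11, 9, 13, 7 };
double min, max;
// indexes 3..6 => offset 3, count 4
GetMinMax(new ArraySegment<double>(array, 3, 4), out min, out max);
Console.WriteLine("Max = " + max + ", Min = " + min); // Max = 13, Min = 5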
using the sweet drug that is LINQ:
var res = array.Skip(startIndex).Take(1 + endIndex - startIndex);
var min = res.Min();
var max = res.Max();
I've been trying to do the following: I have a max value and an int, and I want to split the int like this:
Max = 10
Int = 45
Result = [10, 10, 10, 10, 5]
I already searched a lot and I didn't find anything like what I want to do, and my head hurts from thinking about it and trying to do it.
Thank you for any help!
You just need to repeat the max the number of times it divides into your value. Then, if it does not divide evenly, add the remainder.
int value = 45;
int max = 10;
var results = Enumerable.Repeat(max, value / max).ToList();
if (value % max != 0)
    results.Add(value % max);
Console.WriteLine(string.Join(",", results));
I think LINQ is the nicest way to do this, but you could also take a straightforward loop approach: find how many times the size fits into your max value, add that many entries to a List<int>, and add the leftover (if any) to the list at the end.
var size = 10;
var max = 45;
// Find how many times the size fits and leftover
var goesInto = max / size;
var leftover = max % size;
var result = new List<int>();
// Add the sizes that fit in first
for (var i = 0; i < goesInto; i++)
{
    result.Add(size);
}
// Add leftover size at the end.
if (leftover > 0)
{
    result.Add(leftover);
}
That looks a lot like pseudocode. We, however, will write C# code, as that was the language tag.
// Need a list, or we'd have to calculate the expected length. List is easier.
List<int> ResultList = new List<int>();
// Make a copy to work with
int temp = value;
// Now let us math down towards 0
while (temp > 0)
{
    // All those multiples of Max are added first
    if (temp >= Max)
    {
        ResultList.Add(Max);
        temp -= Max;
    }
    // We are down to the rest here
    else
    {
        // If the rest is not 0, you can add it too
        if (temp > 0)
        {
            ResultList.Add(temp);
            temp = 0;
        }
    }
}
Here's how you do it in plain code without LINQ, List, etc. This will also take care of negatives:
int val = -45; // negative
int max = 10;
int count = Math.Abs(val / max);
int rem = Math.Abs(val % max);
var output = new int[count + (rem == 0 ? 0 : 1)];
for (int i = 0; i < output.Length; i++)
{
    // Only the last slot gets the remainder, and only when there is one;
    // otherwise (val divides evenly) every slot, including the last, gets max.
    output[i] = (i == output.Length - 1 && rem != 0) ? rem : max;
    Console.WriteLine(output[i]);
}
return output;
10
10
10
10
5
I would like to find distinct random numbers within a range that sum up to a given number.
Note: I found similar questions on Stack Overflow; however, they do not address exactly this problem (i.e. they do not consider a negative lowerLimit for the range).
If I wanted the sum of my random numbers to be equal to 1, I would just generate the required random numbers, compute their sum, and divide each of them by the sum; here, however, I need something a bit different: my random numbers must add up to something other than 1 and must still be within a given range.
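For reference, a minimal sketch of that normalize-and-rescale idea (a hypothetical helper of my own naming; note it ignores the range constraint, which is exactly the difficulty described below):
static double[] RandomNumbersSummingTo(double targetSum, int n, Random rand)
{
    var xs = new double[n];
    double total = 0;
    for (int i = 0; i < n; i++) { xs[i] = rand.NextDouble(); total += xs[i]; }
    // Rescale so the values add up to targetSum (1 in the description above)
    for (int i = 0; i < n; i++) xs[i] *= targetSum / total;
    return xs;
}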
Example: I need 30 distinct random numbers (non-integers) between -50 and 50 where the sum of the 30 generated numbers must be equal to 300. I wrote the code below; however, it will not work when n is much larger than the range (upperLimit - lowerLimit), and the function can return numbers outside the range [lowerLimit, upperLimit]. Any help to improve the current solution?
static void Main(string[] args)
{
    var listWeights = GetRandomNumbersWithConstraints(30, 50, -50, 300);
}

private static List<double> GetRandomNumbersWithConstraints(int n, int upperLimit, int lowerLimit, int sum)
{
    if (upperLimit <= lowerLimit || n < 1)
        throw new ArgumentOutOfRangeException();

    Random rand = new Random(Guid.NewGuid().GetHashCode());
    List<double> weight = new List<double>();
    for (int k = 0; k < n; k++)
    {
        // multiply by rand.NextDouble() to avoid duplicates
        double temp = (double)rand.Next(lowerLimit, upperLimit) * rand.NextDouble();
        if (weight.Contains(temp))
            k--;
        else
            weight.Add(temp);
    }
    // divide each element by the sum
    weight = weight.ConvertAll<double>(x => x / weight.Sum()); // here the sum of my weights will be 1
    return weight.ConvertAll<double>(x => x * sum);
}
EDIT - to clarify
Running the current code will generate the following 30 numbers, which add up to 300. However, those numbers are not all within -50 and 50:
-4.425315699
67.70219958
82.08592061
46.54014109
71.20352208
-9.554070146
37.65032717
-75.77280868
24.68786878
30.89874589
142.0796933
-1.964407284
9.831226893
-15.21652248
6.479463312
49.61283063
118.1853036
-28.35462683
49.82661159
-65.82706541
-29.6865969
-54.5134262
-56.04708803
-84.63783048
-3.18402453
-13.97935982
-44.54265204
112.774348
-2.911427266
-58.94098071
OK, here's how it could be done.
We will use the Dirichlet distribution, which is a distribution of random numbers x_i in the range [0...1] such that
Sum_i x_i = 1
So, after linear rescaling, the condition on the sum will be satisfied automatically. The Dirichlet distribution is parametrized by α_i, but we assume all random numbers to be from the same marginal distribution, so there is only one parameter α for each and every index.
For reasonably large values of α, the mean value of the sampled random numbers will be 1/n and the variance ~1/(n*α), so larger α leads to random values closer to the mean.
OK, now back to rescaling:
v_i = A + B*x_i
And we have to get A and B. As #HansKesting rightfully noted, with only two free parameters we can satisfy only two constraints, but you have three. So we strictly satisfy the lower-bound constraint and the sum constraint, but occasionally violate the upper-bound constraint. In such a case we just throw the whole sample away and draw another one.
Again, we have a knob to turn: making α larger means we stay closer to the mean values and are less likely to hit the upper bound. With α = 1 I rarely got any good sample, but with α = 10 I got close to 40% good samples, and with α = 16 close to 80%.
Dirichlet sampling is done via Gamma distribution, using code from MathDotNet.
Code, tested with .NET Core 2.1
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;
class Program
{
    static void SampleDirichlet(double alpha, double[] rn)
    {
        if (rn == null)
            throw new ArgumentException("SampleDirichlet:: Results placeholder is null");
        if (alpha <= 0.0)
            throw new ArgumentException($"SampleDirichlet:: alpha {alpha} is non-positive");
        int n = rn.Length;
        if (n == 0)
            throw new ArgumentException("SampleDirichlet:: Results placeholder is of zero size");

        var gamma = new Gamma(alpha, 1.0);

        double sum = 0.0;
        for (int k = 0; k != n; ++k)
        {
            double v = gamma.Sample();
            sum += v;
            rn[k] = v;
        }

        if (sum <= 0.0)
            throw new ApplicationException($"SampleDirichlet:: sum {sum} is non-positive");

        // normalize
        sum = 1.0 / sum;
        for (int k = 0; k != n; ++k)
        {
            rn[k] *= sum;
        }
    }

    static bool SampleBoundedDirichlet(double alpha, double sum, double lo, double hi, double[] rn)
    {
        if (rn == null)
            throw new ArgumentException("SampleDirichlet:: Results placeholder is null");
        if (alpha <= 0.0)
            throw new ArgumentException($"SampleDirichlet:: alpha {alpha} is non-positive");
        if (lo >= hi)
            throw new ArgumentException($"SampleDirichlet:: low {lo} is larger than high {hi}");
        int n = rn.Length;
        if (n == 0)
            throw new ArgumentException("SampleDirichlet:: Results placeholder is of zero size");

        double mean = sum / (double)n;
        if (mean < lo || mean > hi)
            throw new ArgumentException($"SampleDirichlet:: mean value {mean} is not within [{lo}...{hi}] range");

        SampleDirichlet(alpha, rn);

        bool rc = true;
        for (int k = 0; k != n; ++k)
        {
            double v = lo + (mean - lo) * (double)n * rn[k];
            if (v > hi)
                rc = false;
            rn[k] = v;
        }
        return rc;
    }

    static void Main(string[] args)
    {
        double[] rn = new double[30];
        double lo = -50.0;
        double hi = 50.0;
        double alpha = 10.0;
        double sum = 300.0;

        for (int k = 0; k != 1_000; ++k)
        {
            var q = SampleBoundedDirichlet(alpha, sum, lo, hi, rn);
            Console.WriteLine($"Rng(BD), v = {q}");
            double s = 0.0;
            foreach (var r in rn)
            {
                Console.WriteLine($"Rng(BD), r = {r}");
                s += r;
            }
            Console.WriteLine($"Rng(BD), summa = {s}");
        }
    }
}
UPDATE
Usually, when people ask such a question, there is an implicit assumption/requirement: all random numbers shall be distributed in the same way. It means that if I draw the marginal probability density function (PDF) for the item indexed 0 in the sampled array, I shall get the same distribution as when I draw the marginal PDF for the last item in the array. People usually sample random arrays to pass them down to other routines to do some interesting stuff. If the marginal PDF for item 0 is different from the marginal PDF for the last indexed item, then just reversing the array will produce a wildly different result in the code that uses such random values.
Here I plotted the distributions of random numbers for item 0 and the last item (#29) for the original conditions ([-50...50], sum = 300), using my sampling routine. They look similar, don't they?
OK, here is a picture from your sampling routine, same original conditions ([-50...50], sum = 300), same number of samples:
UPDATE II
The user is supposed to check the return value of the sampling routine, and accept and use the sampled array if (and only if) the return value is true. This is the acceptance/rejection method. As an illustration, below is the code used to histogram the samples:
int[] hh = new int[100]; // histogram allocated
var s = 1.0;             // step size
int k = 0;               // good samples counter
for ( ;; )
{
    var q = SampleBoundedDirichlet(alpha, sum, lo, hi, rn);
    if (q) // good sample, accept it
    {
        var v = rn[0]; // any index, 0 or 29 or ....
        var i = (int)((v - lo) / s);
        i = System.Math.Max(i, 0);
        i = System.Math.Min(i, hh.Length - 1);
        hh[i] += 1;

        ++k;
        if (k == 100000) // required number of good samples reached
            break;
    }
}

for (k = 0; k != hh.Length; ++k)
{
    var x = lo + (double)k * s + 0.5 * s;
    var v = hh[k];
    Console.WriteLine($"{x} {v}");
}
Here you go. It'll probably run for centuries before actually returning the list, but it'll comply :)
public List<double> TheThing(int qty, double lowest, double highest, double sumto)
{
    if (highest * qty < sumto)
    {
        throw new Exception("Impossibru!");
        // heresy
        highest = sumto / 1 + (qty * 2);
        lowest = -highest;
    }
    double rangesize = (highest - lowest);
    Random r = new Random();
    List<double> ret = new List<double>();
    while (ret.Sum() != sumto)
    {
        if (ret.Count > 0)
            ret.RemoveAt(0);
        while (ret.Count < qty)
            ret.Add((r.NextDouble() * rangesize) + lowest);
    }
    return ret;
}
I came up with this solution, which is fast. I am sure it could be improved, but for the moment it does the job.
n = the number of random numbers that I will need to find
Constraints:
the n random numbers must add up to finalSum
the n random numbers must be within lowerLimit and upperLimit
The idea is to remove from the initial list of random numbers (which sums up to finalSum) the numbers outside the range [lowerLimit, upperLimit].
Then count the numbers left in the list (called nValid) and their sum (called sumOfValid).
Now, iteratively search for (n - nValid) random numbers within the range [lowerLimit, upperLimit] whose sum is (finalSum - sumOfValid).
I tested it with several combinations of the input variables (including a negative sum) and the results look good.
static void Main(string[] args)
{
    int n = 100;
    int max = 5000;
    int min = -500000;
    double finalSum = -1000;
    for (int i = 0; i < 5000; i++)
    {
        var listWeights = GetRandomNumbersWithConstraints(n, max, min, finalSum);
        Console.WriteLine("=============");
        Console.WriteLine("sum = " + listWeights.Sum());
        Console.WriteLine("max = " + listWeights.Max());
        Console.WriteLine("min = " + listWeights.Min());
        Console.WriteLine("count = " + listWeights.Count());
    }
}
private static List<double> GetRandomNumbersWithConstraints(int n, int upperLimit, int lowerLimit, double finalSum, int precision = 6)
{
    if (upperLimit <= lowerLimit || n < 1) // todo improve here
        throw new ArgumentOutOfRangeException();

    Random rand = new Random(Guid.NewGuid().GetHashCode());
    List<double> randomNumbers = new List<double>();
    int adj = (int)Math.Pow(10, precision);
    bool flag = true;
    List<double> weights = new List<double>();
    while (flag)
    {
        foreach (var d in randomNumbers.Where(x => x <= upperLimit && x >= lowerLimit).ToList())
        {
            if (!weights.Contains(d)) // only distinct
                weights.Add(d);
        }

        if (weights.Count() == n && weights.Max() <= upperLimit && weights.Min() >= lowerLimit && Math.Round(weights.Sum(), precision) == finalSum)
            return weights;

        /* worst case - if the largest sum of the missing elements (ie we still need to find 3 elements,
         * then the largest sum is 3*upperLimit) is smaller than (finalSum - sumOfValid)
         */
        if (((n - weights.Count()) * upperLimit < (finalSum - weights.Sum())) ||
            ((n - weights.Count()) * lowerLimit > (finalSum - weights.Sum())))
        {
            weights = weights.Where(x => x != weights.Max()).ToList();
            weights = weights.Where(x => x != weights.Min()).ToList();
        }

        int nValid = weights.Count();
        double sumOfValid = weights.Sum();
        int numberToSearch = n - nValid;
        double sum = finalSum - sumOfValid;

        double j = finalSum - weights.Sum();
        // the last remaining number must lie within BOTH limits, hence &&
        if (numberToSearch == 1 && (j <= upperLimit && j >= lowerLimit))
        {
            weights.Add(finalSum - weights.Sum());
        }
        else
        {
            randomNumbers.Clear();
            int min = lowerLimit;
            int max = upperLimit;
            for (int k = 0; k < numberToSearch; k++)
            {
                randomNumbers.Add((double)rand.Next(min * adj, max * adj) / adj);
            }
            if (sum != 0 && randomNumbers.Sum() != 0)
                randomNumbers = randomNumbers.ConvertAll<double>(x => x * sum / randomNumbers.Sum());
        }
    }
    return randomNumbers;
}
I learnt about quicksort and how it can be implemented in both recursive and iterative methods.
In Iterative method:
Push the range (0...n) into the stack
Partition the given array with a pivot
Pop the top element.
Push the partitions (index range) onto a stack if the range has more than one element
Do the above three steps (partition, pop, push) until the stack is empty
And the recursive version is the normal one defined in the wiki.
I learnt that recursive algorithms are always slower than their iterative counterparts.
So, which method is preferred in terms of time complexity (memory is not a concern)?
Which one is fast enough to use in a programming contest?
Is the C++ STL sort() using a recursive approach?
In terms of (asymptotic) time complexity - they are both the same.
"Recursive is slower then iterative" - the rational behind this statement is because of the overhead of the recursive stack (saving and restoring the environment between calls).
However -these are constant number of ops, while not changing the number of "iterations".
Both recursive and iterative quicksort are O(nlogn) average case and O(n^2) worst case.
EDIT:
just for the fun of it I ran a benchmark with the (Java) code attached to the post, and then I ran a Wilcoxon statistical test to check what the probability is that the running times are indeed distinct.
The results may be conclusive (P_VALUE = 2.6e-34, https://en.wikipedia.org/wiki/P-value. Remember that the P_VALUE is P(T >= t | H), where T is the test statistic and H is the null hypothesis). But the answer is not what you expected.
The average of the iterative solution was 408.86 ms, while that of the recursive one was 236.81 ms.
(Note - I used Integer and not int as the argument to recursiveQsort() - otherwise the recursive version would have achieved much better results, because it wouldn't have to box a lot of integers, which is also time consuming. I did it because the iterative solution has no choice but to do so.)
Thus - your assumption is not true; the recursive solution is faster (on my machine and in Java, at the very least) than the iterative one, with P_VALUE = 2.6e-34.
public static void recursiveQsort(int[] arr, Integer start, Integer end) {
    if (end - start < 2) return; // stop clause
    int p = start + ((end - start) / 2);
    p = partition(arr, p, start, end);
    recursiveQsort(arr, start, p);
    recursiveQsort(arr, p + 1, end);
}

public static void iterativeQsort(int[] arr) {
    Stack<Integer> stack = new Stack<Integer>();
    stack.push(0);
    stack.push(arr.length);
    while (!stack.isEmpty()) {
        int end = stack.pop();
        int start = stack.pop();
        if (end - start < 2) continue;
        int p = start + ((end - start) / 2);
        p = partition(arr, p, start, end);

        stack.push(p + 1);
        stack.push(end);

        stack.push(start);
        stack.push(p);
    }
}

private static int partition(int[] arr, int p, int start, int end) {
    int l = start;
    int h = end - 2;
    int piv = arr[p];
    swap(arr, p, end - 1);

    while (l < h) {
        if (arr[l] < piv) {
            l++;
        } else if (arr[h] >= piv) {
            h--;
        } else {
            swap(arr, l, h);
        }
    }
    int idx = h;
    if (arr[h] < piv) idx++;
    swap(arr, end - 1, idx);
    return idx;
}

private static void swap(int[] arr, int i, int j) {
    int temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}
public static void main(String... args) throws Exception {
    Random r = new Random(1);
    int SIZE = 1000000;
    int N = 100;
    int[] arr = new int[SIZE];
    int[] millisRecursive = new int[N];
    int[] millisIterative = new int[N];
    for (int t = 0; t < N; t++) {
        for (int i = 0; i < SIZE; i++) {
            arr[i] = r.nextInt(SIZE);
        }
        int[] tempArr = Arrays.copyOf(arr, arr.length);
        long start = System.currentTimeMillis();
        iterativeQsort(tempArr);
        millisIterative[t] = (int)(System.currentTimeMillis() - start);

        tempArr = Arrays.copyOf(arr, arr.length);
        start = System.currentTimeMillis();
        recursiveQsort(tempArr, 0, arr.length);
        millisRecursive[t] = (int)(System.currentTimeMillis() - start);
    }
    int sum = 0;
    for (int x : millisRecursive) {
        System.out.println(x);
        sum += x;
    }
    System.out.println("end of recursive. AVG = " + ((double)sum) / millisRecursive.length);
    sum = 0;
    for (int x : millisIterative) {
        System.out.println(x);
        sum += x;
    }
    System.out.println("end of iterative. AVG = " + ((double)sum) / millisIterative.length);
}
Recursion is NOT always slower than iteration. Quicksort is a perfect example of this. The only way to do it iteratively is to create your own stack structure - in other words, to do by hand the same thing the compiler does when we use recursion, and you will probably do it worse than the compiler. Also, there will be more jumps if you don't use recursion (to pop and push values to the stack).
That's the solution I came up with in JavaScript. I think it works.
const myArr = [33, 103, 3, 726, 200, 984, 198, 764, 9]

document.write('initial order :', JSON.stringify(myArr), '<br><br>')
qs_iter(myArr)
document.write('_Final order :', JSON.stringify(myArr))

function qs_iter(items) {
  if (!items || items.length <= 1) {
    return items
  }
  var stack = []
  var low = 0
  var high = items.length - 1
  stack.push([low, high])
  while (stack.length) {
    var range = stack.pop()
    low = range[0]
    high = range[1]
    if (low < high) {
      var pivot = Math.floor((low + high) / 2)
      stack.push([low, pivot])
      stack.push([pivot + 1, high])
      while (low < high) {
        while (low < pivot && items[low] <= items[pivot]) low++
        while (high > pivot && items[high] > items[pivot]) high--
        if (low < high) {
          var tmp = items[low]
          items[low] = items[high]
          items[high] = tmp
        }
      }
    }
  }
  return items
}
Let me know if you find a mistake :)
Mister Jojo UPDATE:
this code just mixes the values around; only in rare cases would that happen to produce a sorted array - in other words, practically never.
For those who have a doubt, I put it in a snippet.
Suppose you have a set of values (1,1,1,12,12,16) how would you generate all possible combinations (without repetition) whose sum is within a predefined range [min,max]. For example, here are all the combinations (of all depths) that have a range between 13 and 17:
1 12
1 1 12
1 1 1 12
16
1 16
This assumes that each item of the same value is indistinguishable, so you don't get three copies of 1 12 in the final output. Brute force is possible, but in situations where the number of items is large, the number of combinations at all depths is astronomical. In the example above, there are (3 + 1) * (2 + 1) * (1 + 1) = 24 combinations at all depths; in general, the total number of combinations is the product, over each distinct value, of (the number of items with that value + 1). Of course we can logically throw out a huge number of combinations whose partial sum is greater than the max value (e.g. the set 16 12 is already bigger than the max value of 17, so skip any combinations that have a 16 and a 12 in them).
I originally thought I could convert the input array into two arrays and increment them kind of like an odometer. But I am getting completely stuck on this recursive algorithm that breaks early. Any suggestions?
{
    int uniqueValues = 3;
    int[] maxCounts = new int[uniqueValues];
    int[] values = new int[uniqueValues];

    // easy code to bin the data, just hardcoding for example
    maxCounts[0] = 3;
    values[0] = 1;
    maxCounts[1] = 2;
    values[1] = 12;
    maxCounts[2] = 1;
    values[2] = 16;

    GenerateCombinationsHelper(new List<int[]>(), 13, 17, 0, 0, maxCounts, new int[3], values);
}

private void GenerateCombinationsHelper(List<int[]> results, int min, int max, int currentValue, int index, int[] maxValues, int[] currentCombo, int[] values)
{
    if (index >= maxValues.Length)
    {
        return;
    }
    while (currentCombo[index] < maxValues[index])
    {
        currentValue += values[index];
        if (currentValue > max)
        {
            return;
        }
        currentCombo[index]++;
        if (currentValue < min)
        {
            GenerateCombinationsHelper(results, min, max, currentValue, index + 1, maxValues, currentCombo, values);
        }
        else
        {
            results.Add((int[])currentCombo.Clone());
        }
    }
}
Edit
The integer values are just for demonstration. It can be any object that has some sort of numerical value (int, double, float, etc.).
Typically there will only be a handful of unique values (~10 or so), but there can be several thousand total items.
Switch the main call to:
GenerateCombinationsHelper2(new List<int[]>(), 13, 17, 0, maxCounts, new int[3], values);
and then add this code:
private void GenerateCombinationsHelper2(List<int[]> results, int min, int max, int index, int[] maxValues, int[] currentCombo, int[] values)
{
    int max_count = Math.Min((int)Math.Ceiling((double)max / values[index]), maxValues[index]);
    for (int count = 0; count <= max_count; count++)
    {
        currentCombo[index] = count;
        if (index < currentCombo.Length - 1)
        {
            GenerateCombinationsHelper2(results, min, max, index + 1, maxValues, currentCombo, values);
        }
        else
        {
            int sum = Sum(currentCombo, values);
            if (sum >= min && sum <= max)
            {
                int[] copy = new int[currentCombo.Length];
                Array.Copy(currentCombo, copy, copy.Length);
                results.Add(copy);
            }
        }
    }
}

private static int Sum(int[] combo, int[] values)
{
    int sum = 0;
    for (int i = 0; i < combo.Length; i++)
    {
        sum += combo[i] * values[i];
    }
    return sum;
}
It returns the 5 valid answers.
The general tendency with this kind of problem is that there are relatively few values that will show up, but each value shows up many, many times. Therefore you first want to create a data structure that efficiently describes the combinations that will add up to the desired values, and only then figure out all of the combinations that do so. (If you know the term "dynamic programming", that's exactly the approach I'm describing.)
The obvious data structure in C# terms would be a Hashtable whose keys are the totals that the combination adds up to, and whose values are arrays listing the positions of the last elements that can be used in a combination that could add up to that particular total.
How do you build that data structure?
First you start with a Hashtable which contains the total 0 as a key, and an empty array as a value. Then for each element of your array you create a list of the new totals you can reach from the previous totals, and append your element's position to each one of their values (inserting a new one if needed). When you've gone through all of your elements, you have your data structure.
Now you can search that data structure just for the totals that are in the range you want. And for each such total, you can write a recursive program that will go through your data structure to produce the combinations. This step can indeed have a combinatorial explosion, but the nice thing is that EVERY combination produced is actually a combination in your final answer. So if this phase takes a long time, it is because you have a lot of final answers!
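To make the table-building step concrete, here is a rough C# sketch (my own illustration, assuming integer values; a Dictionary<int, List<int>> stands in for the Hashtable, and the combination-enumeration phase is left out):
using System.Collections.Generic;
using System.Linq;

static Dictionary<int, List<int>> BuildReachableTotals(int[] values)
{
    // Key: a reachable total. Value: positions of elements that can serve as
    // the *last* element of a combination adding up to that total.
    var reachable = new Dictionary<int, List<int>> { [0] = new List<int>() };
    for (int pos = 0; pos < values.Length; pos++)
    {
        // Snapshot the totals known so far; each can be extended by values[pos].
        foreach (int total in reachable.Keys.ToList())
        {
            int newTotal = total + values[pos];
            if (!reachable.TryGetValue(newTotal, out var ends))
                reachable[newTotal] = ends = new List<int>();
            ends.Add(pos);
        }
    }
    return reachable;
}
Totals that land in [min, max] can then be walked backwards through this table (each stored position points at the smaller total it was reached from) to enumerate the actual combinations.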
Try this algo
int[] arr = { 1, 1, 1, 12, 12, 16 };
// 1 << arr.Length gives 2^arr.Length subsets (note: ^ is XOR in C#, not power)
for (int i = 0; i < (1 << arr.Length); i++)
{
    // the binary format of i selects which elements belong to this subset
    for (int j = 0; j < arr.Length; j++)
        if ((i & (1 << j)) != 0)
            Console.Write("{0} ", arr[j]);
    Console.WriteLine();
}
This is quite similar to the subset sum problem which just happens to be NP-complete.
Wikipedia says the following about NP-complete problems:
Although any given solution to such a problem can be verified quickly,
there is no known efficient way to locate a solution in the first
place; indeed, the most notable characteristic of NP-complete problems
is that no fast solution to them is known. That is, the time required
to solve the problem using any currently known algorithm increases
very quickly as the size of the problem grows. This means that the
time required to solve even moderately sized versions of many of these
problems can easily reach into the billions or trillions of years,
using any amount of computing power available today. As a consequence,
determining whether or not it is possible to solve these problems
quickly, called the P versus NP problem, is one of the principal
unsolved problems in computer science today.
If indeed there is a way to solve this besides brute-forcing through the powerset and finding all subsets which sum up to a value within the given range, then I would be very interested in hearing it.
An idea for another implementation:
Create from the list of numbers a list of stacks; each stack represents a number that appears in the list, and that number is pushed onto its stack as many times as it appears in the numbers list. Moreover, this list is sorted.
The idea is that you iterate through the stack list; in each stack you pop one number at a time, as long as it doesn't exceed the max value, and recurse, and then you perform an additional call that skips the current stack.
This algorithm avoids many redundant computations, such as trying to add different elements that have the same value when adding that value would exceed the maximum.
I was able to solve pretty large problems with this algorithm (50 numbers and more), depending on the min and max values; obviously, when the interval is very big, the number of combinations may be huge.
Here's the code:
static void GenerateLimitedCombinations(List<int> intList, int minValue, int maxValue)
{
    intList.Sort();
    List<Stack<int>> StackList = new List<Stack<int>>();
    Stack<int> NewStack = new Stack<int>();
    NewStack.Push(intList[0]);
    StackList.Add(NewStack);
    for (int i = 1; i < intList.Count; i++)
    {
        if (intList[i - 1] == intList[i])
            StackList[StackList.Count - 1].Push(intList[i]);
        else
        {
            NewStack = new Stack<int>();
            NewStack.Push(intList[i]);
            StackList.Add(NewStack);
        }
    }
    GenerateLimitedCombinations(StackList, minValue, maxValue, 0, new List<int>(), 0);
}

static void GenerateLimitedCombinations(List<Stack<int>> stackList, int minValue, int maxValue, int currentStack, List<int> currentCombination, int currentSum)
{
    if (currentStack == stackList.Count)
    {
        if (currentSum >= minValue)
        {
            foreach (int tempInt in currentCombination)
            {
                Console.Write(tempInt + " ");
            }
            Console.WriteLine();
        }
    }
    else
    {
        int TempSum = currentSum;
        List<int> NewCombination = new List<int>(currentCombination);
        Stack<int> UndoStack = new Stack<int>();
        while (stackList[currentStack].Count != 0 && stackList[currentStack].Peek() + TempSum <= maxValue)
        {
            int AddedValue = stackList[currentStack].Pop();
            UndoStack.Push(AddedValue);
            NewCombination.Add(AddedValue);
            TempSum += AddedValue;
            GenerateLimitedCombinations(stackList, minValue, maxValue, currentStack + 1, new List<int>(NewCombination), TempSum);
        }
        while (UndoStack.Count != 0)
        {
            stackList[currentStack].Push(UndoStack.Pop());
        }
        GenerateLimitedCombinations(stackList, minValue, maxValue, currentStack + 1, currentCombination, currentSum);
    }
}
Here's a test program:
static void Main(string[] args)
{
    Random Rnd = new Random();
    List<int> IntList = new List<int>();
    int NumberOfInts = 10, MinValue = 19, MaxValue = 21;
    for (int i = 0; i < NumberOfInts; i++) { IntList.Add(Rnd.Next(1, 10)); }
    for (int i = 0; i < NumberOfInts; i++) { Console.Write(IntList[i] + " "); } Console.WriteLine(); Console.WriteLine();
    GenerateLimitedCombinations(IntList, MinValue, MaxValue);
    Console.ReadKey();
}
This is my code:
SortedDictionary<int,int> Numbers = new SortedDictionary<int,int>();
List<int> onlyP = new List<int>(Numbers.Keys);
int Inferior = int.Parse(toks[0]);
int Superior = int.Parse(toks[1]);
int count = 0;
int inferiorindex = Array.BinarySearch(Numbers.Keys.ToArray(), Inferior);
if (inferiorindex < 0) inferiorindex = (inferiorindex * -1) - 1;
int superiorindex = Array.BinarySearch(Numbers.Keys.ToArray(), Superior);
if (superiorindex < 0) superiorindex = (superiorindex * -1) - 1;
count = Numbers[onlyP[superiorindex]] - Numbers[onlyP[inferiorindex]];
So what I'm trying to do is this: I've got a sorted dictionary with powers as keys and a running index as values. I have to print how many of the keys fit within a specified range.
Example:
Some entries of the dict: [1,1],[4,2],[8,3],[9,4],[16,5],[25,6],[27,7],[32,8]
Limits: 2 and 10
Numbers within 2 - 10 : 4, 8, 9 = 3 numbers.
With BinarySearch I'm trying to quickly find the numbers I want and then subtract Numbers[onlyP[superiorindex]] - Numbers[onlyP[inferiorindex]] to find how many numbers are within the range. Unfortunately it's not working for all the cases, and it sometimes gives fewer numbers than the actual amount. How can this be fixed? Thanks in advance.
[EDIT] Examples of the problems: If I select limits: 4 and 4... it returns 0, but the answer is 1.
limits: 1 and 10^9 (the whole range) returns 32669... But the answer is 32670.
The algorithm is ignoring powers.
Finally, having read the documentation: notice the -1 on the upperIndex conversion and the +1 on the return value; these are important.
var numbers = new[] { 1, 4, 8, 9, 16, 25, 27, 32 };
var lowerBound = 4;
var upperBound = 17;
int lowerIndex = Array.BinarySearch(numbers, lowerBound);
if (lowerIndex < 0) lowerIndex = ~lowerIndex;
// - 1 here because we want the index of the item that is <= upper bound.
int upperIndex = Array.BinarySearch(numbers, upperBound);
if (upperIndex < 0) upperIndex = ~upperIndex - 1;
return (upperIndex - lowerIndex) + 1;
Explanation:
For the lower index we just take the complement because the BinarySearch returns the index of the first item >= lowerBound.
For the upper index we additionally minus one from the complement because we want the first item <= upperBound (not >= upperBound which is what BinarySearch returns).
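A worked example with the sample data above (my own illustration): searching for 17 in { 1, 4, 8, 9, 16, 25, 27, 32 } finds no exact match, so BinarySearch returns the bitwise complement of the index of the next larger element (25, at index 5), i.e. ~5 = -6. Then ~(-6) = 5, and subtracting 1 gives index 4, which holds 16 - the largest value <= 17.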
It seems that you're not post-processing the binary search return value the right way:
http://msdn.microsoft.com/en-us/library/5kwds4b1.aspx
Should be :
if (inferiorindex < 0) inferiorindex = ~inferiorindex;
(untested)
Moreover, List supports a binary search, so you don't have to do the Array.BinarySearch thing, just work on onlyP.
int inferiorindex = Array.BinarySearch<int>(keys, Inferior);
if (inferiorindex < 0)
{
    inferiorindex = ~inferiorindex;
}

int superiorindex = Array.BinarySearch<int>(keys, Superior);
if (superiorindex < 0)
{
    // superiorindex is the binary complement of the next higher.
    // -1 because we want the highest.
    superiorindex = ~superiorindex - 1;
}

int count = superiorindex - inferiorindex + 1;
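For completeness, a sketch of the List<int>.BinarySearch variant suggested above (untested; same complement handling, working directly on onlyP):
int inferiorindex = onlyP.BinarySearch(Inferior);
if (inferiorindex < 0) inferiorindex = ~inferiorindex;

int superiorindex = onlyP.BinarySearch(Superior);
if (superiorindex < 0) superiorindex = ~superiorindex - 1;

int count = superiorindex - inferiorindex + 1;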