Randomly selecting numbers from n consecutive numbers - c#

Given an integer array of n consecutive numbers starting from 0, i.e.
0, 1, 2, ..., n
I wish to select n/2 numbers randomly.
Say n = 5.
Then a possible set would be 0, 3, 5.
How can I achieve that easily?

You can loop through the numbers and determine the probability that each number should be in the result:
int n = 5;
int left = (n + 1) / 2;
int[] result = new int[left];
Random rnd = new Random();
for (int i = 0; left > 0; i++) {
    if (rnd.Next(n + 1 - i) < left) {
        result[result.Length - left] = i;
        left--;
    }
}
Note: This will always produce a sorted result.
Edit:
Here is a test run creating 200,000,000 results, counting the combinations generated (where the binary number represents the combination, e.g. 100110 is 0,3,4):
010011 : 9999164
110001 : 10003346
010101 : 9990975
100101 : 9998154
101001 : 10006305
100110 : 10003350
101010 : 10000583
101100 : 9995335
011001 : 10000007
001011 : 10001492
001110 : 10001158
100011 : 9994680
110100 : 9998226
110010 : 9999954
011010 : 10002269
000111 : 10004752
010110 : 9996886
011100 : 9999196
111000 : 10001094
001101 : 10003074

The simplest way I've found of doing this is an incomplete Fisher-Yates shuffle. Stop after n/2 iterations.
The shuffle in effect works with two arrays, the randomly selected numbers and the pool of numbers that have not yet been used, and therefore are available for selection. As it happens the total size of the two arrays is the original array length, so they can be stored in place by partitioning.
After n/2 iterations, the partition representing the numbers that have been selected is a random choice from the original array.
Another way of looking at this is that the first n/2 numbers of the result of the full shuffle will not be changed by the n/2+1 or subsequent iterations of the shuffle.
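A minimal sketch of the partial shuffle, assuming the pool is 0..n and (n+1)/2 numbers are wanted (the names here are illustrative):
int n = 5;
int take = (n + 1) / 2;
int[] pool = Enumerable.Range(0, n + 1).ToArray(); // the numbers 0, 1, ..., n
Random rnd = new Random();
for (int i = 0; i < take; i++) {
    // swap a random element from the not-yet-used partition into position i
    int j = rnd.Next(i, pool.Length);
    int tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
}
// pool[0 .. take-1] now holds the random selection; the rest is the unused pool
The loop body is the standard Fisher-Yates swap, just stopped early; Enumerable.Range needs using System.Linq.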

Use a Fisher-Yates Shuffle, then pick the first n/2 items in the array.
As @Patricia Shanahan points out in her answer, it is only necessary to shuffle the first n/2 items in the array using Fisher-Yates.

Another approach is to generate indexes with a Random object and its Next method, adding each newly generated value to a structure such as an ArrayList or HashSet to remember it. Before accepting a newly generated index, you verify whether that value is already present in this structure, and retry if it is.
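A minimal sketch of that rejection approach (same 0..n setup as the answers above; names are illustrative):
int n = 5;
int take = (n + 1) / 2;
Random rnd = new Random();
HashSet<int> chosen = new HashSet<int>();
while (chosen.Count < take) {
    chosen.Add(rnd.Next(n + 1)); // Add returns false and does nothing on a duplicate
}
HashSet<int>.Add rejects duplicates for you, so the loop simply retries until enough distinct values have been collected. Bear in mind this slows down as the sample size approaches n, because more and more draws are rejected.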

A method that efficiently uses the properties of binary to encode the selection is:
Generate a random 32-bit unsigned int.
Do this n/32 times, and put these in an array called masks.
The bits of these random ints select the numbers selected. Specifically, if the word size is 32, then i is selected if ((masks[i/32] >> (i%32)) & 1) == 1.
Since random 32-bit ints will on average have 16 0s and 16 1s, this method will get close to your value of n/2.
Say the actual number of 1s you get is W and differs from n/2 by k, i.e. k = W - n/2.
If you have k too few numbers selected, generate a random integer in the range 0 to n, go to the bit indexed by that number and change the first 0 you encounter to a 1, searching forward.
Do this k times.
In the case where you have k too many, change the first 1 you encounter to a 0 instead.
The advantage of this is that it is also the most compact structure for storing your subset, with each bit selecting a single integer. You only need the range and this array of masks, and you have your selection.
Plus you have to generate 16 times fewer random numbers than the other shuffle methods, making it more efficient.
Finally, this increase in efficiency grows with the word size. For 64-bit unsigned longs you need 32 times fewer random numbers than the "n/2 swaps permutation method".
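A hedged sketch of these steps (the wrap-around in the correction search and all names are my own reading of the description; n is assumed to be a multiple of 32):
int n = 64;
int words = n / 32;
Random rnd = new Random();
uint[] masks = new uint[words];
int ones = 0;
for (int w = 0; w < words; w++) {
    // build 32 random bits from two 16-bit draws (Random.Next cannot return a full 32 bits)
    masks[w] = (uint)rnd.Next(1 << 16) | ((uint)rnd.Next(1 << 16) << 16);
    for (int b = 0; b < 32; b++) ones += (int)((masks[w] >> b) & 1);
}
int k = ones - n / 2; // surplus (k > 0) or deficit (k < 0) of selected numbers
while (k != 0) {
    int i = rnd.Next(n); // random starting bit; search forward from here
    for (; ; i = (i + 1) % n) {
        bool bit = ((masks[i / 32] >> (i % 32)) & 1) == 1;
        if (k > 0 && bit) { masks[i / 32] ^= 1u << (i % 32); k--; break; }  // flip a 1 to 0
        if (k < 0 && !bit) { masks[i / 32] ^= 1u << (i % 32); k++; break; } // flip a 0 to 1
    }
}
// number i is selected iff ((masks[i / 32] >> (i % 32)) & 1) == 1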
Good luck!

Is LINQ not allowed when writing an algorithm?
int[] arrInts = new int[] { 0, 1, 2, 3, 4, 5, 6 };
var r = new Random();
var randomInts = arrInts.OrderBy(i => r.Next(arrInts.Length))
                        .Take(arrInts.Length / 2)
                        .ToArray();
Edit: I am not 100% sure about performance, but it might be better to use .AsParallel() before .OrderBy().
I like using LINQ because it is very readable, and it took me around 5 seconds to write the algorithm instead of many minutes to write the sorting and looping "by hand".


Efficient algorithm for finding the largest overlapping range given a list of ranges

Consider the following interface that describes a continuous range of integer values.
public interface IRange {
    int Minimum { get; }
    int Maximum { get; }
    IRange LargestOverlapRange(IEnumerable<IRange> ranges);
}
I am looking for an efficient algorithm to find the largest overlap range given a list of IRange objects. The idea is briefly outlined in the following diagram, where the top numbers represent the integer values and the |-----| represent the IRange objects with a min and max value. I have stacked the IRange objects so that the solution is easy to visualize.
0123456789 ... N
|-------| |------------| |-----|
|---------| |---|
|---| |------------|
|--------| |---------------|
|----------|
Here, the LargestOverlapRange method would return:
|---|
Since that range has a total of 4 'overlaps'. If there are two separate IRange with the same number of overlaps, I want to return null.
Here is some brief code of what I tried.
public class Range : IRange
{
    public int Minimum { get; set; }
    public int Maximum { get; set; }

    public IRange LargestOverlapRange(IEnumerable<IRange> ranges) {
        int maxInt = 20000;
        // Create a histogram of the counts
        int[] histogram = new int[maxInt];
        foreach (IRange range in ranges) {
            for (int i = range.Minimum; i <= range.Maximum; i++) {
                histogram[i]++;
            }
        }
        // Find the mode of the histogram
        int mode = 0;
        int bin = 0;
        for (int i = 0; i < maxInt; i++) {
            if (histogram[i] > mode) {
                mode = histogram[i];
                bin = i;
            }
        }
        // Construct a new range of the mode values, if they are continuous
        Range range = null;
        for (int i = bin; i < maxInt; i++) {
            if (histogram[i] == mode) {
                if (range != null)
                    return null; // violates two ranges with the same mode
                range = new Range();
                range.Minimum = i;
                while (i < maxInt && histogram[i] == mode)
                    i++;
                range.Maximum = i - 1;
            }
        }
        return range;
    }
}
This involves four loops and is easily O(n^2) if not higher. Is there a more efficient algorithm (speed wise) to find the largest overlap range from a list of other ranges?
EDIT
Yes, the O(n^2) is not correct; I was thinking about it incorrectly. It should be O(N * M), as was pointed out in the comments.
EDIT 2
Let me stipulate a few things: the absolute min and max of the integer values will be from (0, 20000), and the average number of IRange objects will be on the order of 100. I don't know if this will change the way the algorithm is designed.
EDIT 3
I am implementing this algorithm on a scientific instrument (a mass spectrometer) in which the speed of the data processing is paramount to the quality of data (faster analysis time = more spectra collected in time T). The firmware language (proprietary) only has arrays[] and is not object oriented. I chose C# since I am decent at porting concepts between the two languages and thought that, in the interest of the SO community, a good answer would have a wider audience.
Convert your list of ranges to a list of start and stop points. Sort the list with an O(n log n) algorithm. Now you can iterate through the list and increment or decrement a counter depending on whether it's a start or stop point, which will give you the current overlap depth.
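A hedged sketch of that sweep (illustrative names; it finds the maximum overlap depth and where it first begins):
var events = new List<(int pos, int delta)>();
foreach (IRange r in ranges) {
    events.Add((r.Minimum, +1));     // entering a range: depth goes up
    events.Add((r.Maximum + 1, -1)); // one past an inclusive end: depth goes down
}
events.Sort(); // O(n log n); at equal positions the -1 events sort first, so ranges that merely touch end-to-end are not overcounted
int depth = 0, maxDepth = 0, maxStart = 0;
foreach (var (pos, delta) in events) {
    depth += delta;
    if (depth > maxDepth) { maxDepth = depth; maxStart = pos; }
}
// maxDepth is the deepest overlap; it first begins at position maxStart
To return the full deepest interval (and null on a tie, as the question requires), also record where the depth drops back below maxDepth and whether maxDepth is reached more than once.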
As I understood OP's question, the solution given the 3 ranges
A: 012
B: 123
C: 34
would be the range 12 (a common subset of A and B), not range 123 (because it isn't a common subset of any pair).
Think about the algorithm on paper before writing any code. How about a dynamic programming solution? (If you don't know dynamic programming, it's worth reading about it in a book). The idea of dynamic programming is to build up solutions of simpler subproblems.
Let f_i(n, k) be the size of the longest interval starting at n common to at least k of the first i given ranges.
You can work out f_1 from f_0, and f_2 from f_1 and so on. Updating the functions just depends on the one extra range considered.
Suppose there are M ranges. The values of f_M will tell us the answer to your problem.
The deepest depth you talked about is the greatest k such that f_M(n, k) is non zero for some n. Let's call that maximal depth K. Then we look for the maximum of f_M(n, K) over n. Its maximum is the size of your largest range, which begins at the maximising n.
The maximising n must be the lower bound of some range, so we only need to calculate f for these kinds of n. There are M ranges, so at most M lower bounds. Thus, this algorithm has complexity O(M*M*K).
Let the ith range be from a to b.
If n is outside a to b, then there is no change:
f_i(n,k) = f_{i-1}(n,k)
If n is within a to b, we test the k-deep solution made by combining the fresh interval with our old (k-1)-deep solution. We only use it if it's better than what we already had:
f_i(n,k) = max( f_{i-1}(n,k), min( f_{i-1}(n,k-1), b-n+1 ) )
(Take f_i(n,0) to be infinity, so that for k = 1 the new candidate is simply b-n+1.)
Example! For ranges 0 to 5, 2 to 6, 4 to 8, and 6 to 9.
n 0123456789
...... range 0 to 5
f_1(n,1) 6543210000
..... range 2 to 6
f_2(n,1) 6554321000
f_2(n,2) 0043210000
..... range 4 to 8
f_3(n,1) 6554543210
f_3(n,2) 0043321000
f_3(n,3) 0000210000
.... range 6 to 9
f_4(n,1) 6554544321
f_4(n,2) 0043323210
f_4(n,3) 0000211000
f_4(n,4) 0000000000
Thus the deepest depth K is 3, and the longest range at that depth is 4 to 5. We can also see that the longest range at depth 2 has size 4 and starts at 2.
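For what it's worth, a hedged C# sketch of this recurrence (array names are assumed; f[n,0] is seeded as "infinity" so the k = 1 case reduces to b-n+1):
var ranges = new (int a, int b)[] { (0, 5), (2, 6), (4, 8), (6, 9) };
int M = ranges.Length;
int maxN = ranges.Max(r => r.b);
var f = new int[maxN + 1, M + 1];
for (int n = 0; n <= maxN; n++) f[n, 0] = int.MaxValue; // f(n,0) = infinity by convention
foreach (var (a, b) in ranges)             // f_i is computed in place from f_{i-1}
    for (int n = a; n <= b; n++)
        for (int k = M; k >= 1; k--)       // k descending, so f[n, k-1] still holds f_{i-1}
            f[n, k] = Math.Max(f[n, k], Math.Min(f[n, k - 1], b - n + 1));
// the deepest depth K is the largest k with a non-zero entry; report the longest f[n, K]
for (int k = M; k >= 1; k--) {
    int bestN = -1;
    for (int n = 0; n <= maxN; n++)
        if (f[n, k] > 0 && (bestN < 0 || f[n, k] > f[bestN, k])) bestN = n;
    if (bestN >= 0) {
        Console.WriteLine($"depth {k}: range {bestN} to {bestN + f[bestN, k] - 1}");
        break; // prints "depth 3: range 4 to 5" for the example above
    }
}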

Determining how close an array is to the target array

I'm running a little experiment to increase my knowledge and I have reached a part where I feel I could really optimize it, but am not quite sure how to do this.
I have many arrays of numbers. (For simplicity, let's say each array has 4 numbers: 1, 2, 3, and 4.)
The target is to have all of the numbers in ascending order (i.e., 1-2-3-4), but the numbers are all scrambled in the different arrays.
A higher weight is placed upon larger numbers.
I need to sort all of these arrays in order of how close they are to
the target.
I.e., 4-3-2-1 would be the worst possible case.
Some example cases:
3-4-2-1 is better than 4-3-2-1
2-3-4-1 is better than 1-4-3-2 (even though two numbers (1 and 3) match in the latter, the biggest number is closer to its spot in the former)
So the big numbers always take precedence over the smaller numbers. Here is my attempt:
var tmp = from m in moves
          let mx = m.Max()
          let ranking = m.IndexOf(s => s == mx)
          orderby ranking descending
          select m;
return tmp.ToArray();
P.S. IndexOf in the above example is an extension I wrote that takes an array and an expression and returns the index of the element that satisfies the expression. It is needed because the situation is really a little more complicated; I'm simplifying it with my example.
The problem with my attempt here, though, is that it would only sort by the biggest number and forget all of the other numbers. It SHOULD rank by biggest number first, then by second largest, then by third.
Also, since it will be doing this operation over and over again for several minutes, it should be as efficient as possible.
You could implement a bubble sort, and count the number of times you have to move data around. The number of data moves will be large on arrays that are far away from the sorted ideal.
int GetUnorderedness<T>(T[] data) where T : IComparable<T>
{
    data = (T[])data.Clone(); // don't modify the input data,
                              // we weren't asked to actually sort.
    int swapCount = 0;
    bool isSorted;
    do
    {
        isSorted = true;
        for (int i = 1; i < data.Length; i++)
        {
            if (data[i-1].CompareTo(data[i]) > 0)
            {
                T temp = data[i];
                data[i] = data[i-1];
                data[i-1] = temp;
                swapCount++;
                isSorted = false;
            }
        }
    } while (!isSorted);
    return swapCount;
}
From your sample data, this will give slightly different results than you specified.
Some example cases:
3-4-2-1 is better than 4-3-2-1
2-3-4-1 is better than 1-4-3-2
3-4-2-1 will take 5 swaps to sort, 4-3-2-1 will take 6, so that works.
2-3-4-1 will take 3, 1-4-3-2 will also take 3, so this doesn't match up with your expected results.
This algorithm doesn't treat the largest number as the most important, which it seems you want; all numbers are treated equally. From your description, you'd consider 2-1-3-4 as much better than 1-2-4-3, because the first one has both the largest and second largest numbers in their proper place. This algorithm would consider those two equal, because each requires only 1 swap to sort the array.
This algorithm does have the advantage that it's not just a comparison algorithm, each input has a discrete output, so you only need to run the algorithm once for each input array.
I hope this helps
var i = 0;
var temp = (from m in moves select m).ToArray();
do
{
    temp = (from m in temp
            orderby m[i] descending
            select m).ToArray();
}
while (++i < moves[0].Length);

Optimizing this C# algorithm (K Difference)

This is the problem I'm solving (it's a sample problem, not a real problem):
Given N numbers, [N <= 10^5], we need to count the total pairs of
numbers that have a difference of K. [K > 0 and K < 1e9]
Input Format: 1st line contains N & K (integers). 2nd line contains N
numbers of the set. All the N numbers are assured to be distinct.
Output Format: One integer saying the no of pairs of numbers that have
a diff K.
Sample Input #00:
5 2
1 5 3 4 2
Sample Output #00:
3
Sample Input #01:
10 1
363374326 364147530 61825163 1073065718 1281246024 1399469912 428047635 491595254 879792181 1069262793
Sample Output #01:
0
I already have a solution (and I haven't been able to optimize it as well as I had hoped). Currently my solution gets a score of 12/15 when it is run, and I'm wondering why I can't get 15/15 (my solution to another problem wasn't nearly as efficient, but got all of the points). Apparently, the code is run using "Mono 2.10.1, C# 4".
So can anyone think of a better way to optimize this further? The VS profiler says to avoid calling String.Split and Int32.Parse. The calls to Int32.Parse can't be avoided, although I guess I could optimize tokenizing the array.
My current solution:
using System;
using System.Collections.Generic;
using System.Text;
using System.Linq;

namespace KDifference
{
    class Solution
    {
        static void Main(string[] args)
        {
            char[] space = { ' ' };
            string[] NK = Console.ReadLine().Split(space);
            int N = Int32.Parse(NK[0]), K = Int32.Parse(NK[1]);
            int[] nums = Console.ReadLine().Split(space, N).Select(x => Int32.Parse(x)).OrderBy(x => x).ToArray();
            int KHits = 0;
            for (int i = nums.Length - 1, j, k; i >= 1; i--)
            {
                for (j = 0; j < i; j++)
                {
                    k = nums[i] - nums[j];
                    if (k == K)
                    {
                        KHits++;
                    }
                    else if (k < K)
                    {
                        break;
                    }
                }
            }
            Console.Write(KHits);
        }
    }
}
Your algorithm is still O(n^2), even with the sorting and the early-out. And even if you eliminated the O(n^2) bit, the sort is still O(n lg n). You can use an O(n) algorithm to solve this problem. Here's one way to do it:
Suppose the set you have is S1 = { 1, 7, 4, 6, 3 } and the difference is 2.
Construct the set S2 = { 1 + 2, 7 + 2, 4 + 2, 6 + 2, 3 + 2 } = { 3, 9, 6, 8, 5 }.
The answer you seek is the cardinality of the intersection of S1 and S2. The intersection is {6, 3}, which has two elements, so the answer is 2.
You can implement this solution in a single line of code, provided that you have a sequence of integers, sequence, and an integer, difference:
int result = sequence.Intersect(from item in sequence select item + difference).Count();
The Intersect method will build an efficient hash table for you that is O(n) to determine the intersection.
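For example, with the sample set from above:
int[] s1 = { 1, 7, 4, 6, 3 };
int difference = 2;
int result = s1.Intersect(from item in s1 select item + difference).Count();
Console.WriteLine(result); // 2, because the intersection is { 6, 3 }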
Try this (note, untested):
Sort the array
Start two indexes at 0
If the difference between the numbers at those two positions is equal to K, increase count and increase one of the two indexes (if numbers aren't duplicated, increase both)
If difference is larger than K, increase index #1
If difference is less than K, increase index #2, if that would place it outside the array, you're done
Otherwise, go back to 3 and keep going
Basically, try to keep the two indexes apart by K value difference.
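A sketch of those steps under the problem's own guarantees (distinct numbers, K > 0); as the answer says, treat it as untested:
static int CountPairs(int[] nums, int K)
{
    Array.Sort(nums);                         // step 1
    int count = 0, i = 0, j = 1;              // step 2: two indexes, j running ahead
    while (j < nums.Length)
    {
        int diff = nums[j] - nums[i];
        if (diff == K) { count++; i++; j++; } // step 3: numbers are distinct, move both
        else if (diff > K) i++;               // step 4
        else j++;                             // step 5: j falling off the array ends the loop
        if (i == j) j++;                      // keep the two indexes apart
    }
    return count;
}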
You should write up a series of unit-tests for your algorithm, and try to come up with edge cases.
This would allow you to do it in a single pass. Using hash sets is beneficial if there are many values to parse/check. You might also want to use a bloom filter in combination with hash sets to reduce lookups.
Initialize. Let A and B be two empty hash sets. Let c be zero.
Parse loop. Parse the next value v. If there are no more values the algorithm is done and the result is in c.
Back check. If v exists in A then increment c and jump back to 2.
Low match. If v - K > 0 then:
insert v - K into A
if v - K exists in B then increment c (and optionally remove v - K from B).
High match. If v + K < 1e9 then:
insert v + K into A
if v + K exists in B then increment c (and optionally remove v + K from B).
Remember. Insert v into B.
Jump back to 2.
// PHP solution for this K-difference problem
// (note: this assumes every number and K are single digits, since each
// character of the input string is treated as one number)
function getEqualSumSubstring($l, $s) {
    $s = str_replace(' ', '', $s);
    $l = str_replace(' ', '', $l);
    for ($i = 0; $i < strlen($s); $i++) {
        $array1[] = $s[$i];          // the numbers themselves
    }
    for ($i = 0; $i < strlen($s); $i++) {
        $array2[] = $s[$i] + $l[1];  // each number shifted up by K ($l[1] is K)
    }
    return count(array_intersect($array1, $array2));
}
echo getEqualSumSubstring("5 2", "1 3 5 4 2"); // prints 3
Actually that's trivial to solve with a hashmap:
First put each number into a hashmap: dict((x, x) for x in numbers) in "pythony" pseudo code ;)
Now you just iterate through every number in the hashmap and check if number + K is in the hashmap. If yes, increase count by one.
The obvious improvement over the naive solution is to ONLY check for the higher (or lower) bound, otherwise you get double results and have to divide by 2 afterwards - useless.
This is O(N) for creating the hashmap when reading the values in and O(N) when iterating through, i.e. O(N) overall, and about 8 LOC in Python (and it is correct, I just solved it ;-) )
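The same idea in C# (a hedged sketch; a HashSet plays the role of the hashmap, and only the higher bound is checked to avoid the doubling):
int[] numbers = { 1, 5, 3, 4, 2 };
int K = 2;
var set = new HashSet<int>(numbers);                 // O(N) to build; values are distinct
int count = numbers.Count(x => set.Contains(x + K)); // O(N) to scan
Console.WriteLine(count); // 3, matching Sample Output #00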
Following Eric's answer, the implementation of the Intersect method is pasted below; it is O(n):
private static IEnumerable<TSource> IntersectIterator<TSource>(IEnumerable<TSource> first, IEnumerable<TSource> second, IEqualityComparer<TSource> comparer)
{
    Set<TSource> set = new Set<TSource>(comparer);
    foreach (TSource current in second)
    {
        set.Add(current);
    }
    foreach (TSource current2 in first)
    {
        if (set.Remove(current2))
        {
            yield return current2;
        }
    }
    yield break;
}

Series calculation

I have some random integers like
99 20 30 1 100 400 5 10
I have to find a sum from any combination of these integers that is closest (equal or more, but not less) to a given number like
183
what is the fastest and accurate way of doing this?
If your numbers are small, you can use a simple Dynamic Programming (DP) technique. Don't let this name scare you. The technique is fairly understandable. Basically you break the larger problem into subproblems.
Here we define the problem to be can[number]. If the number can be constructed from the integers in your file, then can[number] is true, otherwise it is false. It is obvious that 0 is constructable by not using any numbers at all, so can[0] is true. Now you try to use every number from the input file. We try to see if the sum j is achievable. If an already achieved sum + current number we try == j, then j is clearly achievable. If you want to keep track of what numbers made a particular sum, use an additional prev array, which stores the last used number to make the sum. See the code below for an implementation of this idea:
int[] numbers = { 99, 20, 30, 1, 100, 400, 5, 10 }; // the numbers from the input
int SUM = 183;                                      // the given target
int UPPER_BOUND = numbers.Sum();                    // the largest number you can construct
bool[] can = new bool[UPPER_BOUND + 1];             // can[number] is true if number can be constructed
can[0] = true;                                      // 0 is always achievable by not using any number
int[] prev = new int[UPPER_BOUND + 1];              // prev[number] is the last number used to achieve sum "number"
for (int i = 0; i < numbers.Length; i++)            // try to use every number from the input
{
    for (int j = UPPER_BOUND; j >= 1; j--)          // try to see if j is an achievable sum
    {
        if (can[j]) continue;                       // it is already an achieved sum, so go to the next j
        if (j - numbers[i] >= 0 && can[j - numbers[i]]) // if (an already achievable sum) + numbers[i] == j, then j is achievable
        {
            can[j] = true;
            prev[j] = numbers[i];                   // to achieve j we used numbers[i]
        }
    }
}
int CLOSEST_SUM = -1;
for (int i = SUM; i <= UPPER_BOUND; i++)
    if (can[i])
    {
        CLOSEST_SUM = i;                            // the closest achievable number >= SUM
        break;
    }
int currentSum = CLOSEST_SUM;
do
{
    int usedNumber = prev[currentSum];
    Console.WriteLine(usedNumber);
    currentSum -= usedNumber;
} while (currentSum > 0);
This seems to be a Knapsack-like problem, where the value of your integers would be the "weight" of each item, the "profit" of each item is 1, and you are looking for the least number of items to exactly sum to the maximum allowable weight of the knapsack.
This is a variant of the SUBSET-SUM problem, and is also NP-Hard like SUBSET-SUM.
But if the numbers involved are small, pseudo-polynomial time algorithms exist. Check out:
http://en.wikipedia.org/wiki/Subset_sum_problem
OK, more details.
The following problem is NP-Hard: given an array of integers and integers a, b, is there some subset whose sum lies in the interval [a, b]?
This is so because we can solve subset-sum by choosing a=b=0.
Now this problem easily reduces to your problem and so your problem is NP-Hard too.
Now you can use the polynomial time approximation algorithm mentioned in the wiki link above.
Given an array of N integers, a target S and an approximation threshold c,
there is a polynomial time approximation algorithm (involving 1/c) which tells if there is a subset sum in the interval [(1-c)S, S].
You can use this repeatedly (by some form of binary search) to find the best approximation to S you need. Note you can also use this on intervals of the form [S, (1+c)S], while the knapsack will only give you a solution <= S.
Of course there might be better algorithms, in fact I can bet on it. There should be plenty of literature on the web. Some search terms you can use: approximation algorithms for subset-sum, pseudo-polynomial time algorithms, dynamic programming algorithm etc.
A simple-brute-force-method would be to read the text in, parse it into numbers, and then go through all combinations until you find the required sum.
A quicker solution would be to sort the numbers, then...
Add the largest number to your sum. Is it too big? If so, take it off and try the next smallest.
If the sum is too small, add the next largest number and repeat.
Continue adding numbers, not letting the sum exceed the target. Finish when you hit the target.
Note that when you backtrack, you may need to backtrack more than one level. Sounds like a good case for recursion...
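A hedged sketch of that backtracking idea (the pruning relies on all numbers being positive; the names are illustrative):
static int ClosestAtLeast(int[] nums, int target)
{
    Array.Sort(nums); // ascending; we take from the largest end first
    int best = int.MaxValue;
    void Search(int index, int sum)
    {
        if (sum >= target) { best = Math.Min(best, sum); return; } // adding more only overshoots further
        if (index < 0) return;                                     // ran out of numbers below the target
        Search(index - 1, sum + nums[index]);                      // take nums[index]
        Search(index - 1, sum);                                    // backtrack: skip it
    }
    Search(nums.Length - 1, 0);
    return best; // int.MaxValue means even the full sum falls short
}
With the sample numbers above and target 183 this should return 199 (= 100 + 99).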
If the numbers are large you can turn this into an Integer Programme. Using Mathematica's solver, it might look something like this:
nums = {99, 20, 30, 1, 100, 400, 5, 10};
vars = a /@ Range@Length@nums;
Minimize[(vars.nums - 183)^2, vars, Integers]
You can sort the list of values, find the first value that's greater than the target, and start concentrating on the values that are less than the target. Find the sum that's closest to the target without going over, then compare that to the first value greater than the target. If the difference between the closest sum and the target is less than the difference between the first value greater than the target and the target, then you have the sum that's closest.
Kinda hokey, but I think the logic hangs together.

Bubble sort worst case example is O(n*n), how?

I am trying bubble sort. There are 5 elements and the array is unsorted. The worst case for bubble sort should be O(n^2).
As an example I am using
A = {5, 4, 3, 2, 1}
In this case the comparison count should be 5^2 = 25.
Using manual verification and code, I am getting a comparison count of 20.
Following is the bubble sort implementation code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SortingAlgo
{
    class Program
    {
        public static int[] bubbleSort(int[] A)
        {
            bool sorted = false;
            int temp;
            int count = 0;
            int j = 0;
            while (!sorted)
            {
                j++;
                sorted = true;
                for (int i = 0; i < (A.Length - 1); i++)
                {
                    count++;
                    if (A[i] > A[i+1])
                    {
                        temp = A[i];
                        A[i] = A[i+1];
                        A[i+1] = temp;
                        sorted = false;
                    }
                    Console.Write(count + ". -> ");
                    for (int k = 0; k < A.Length; k++)
                    {
                        Console.Write(A[k]);
                    }
                    Console.Write("\n");
                }
            }
            return A;
        }

        static void Main(string[] args)
        {
            int[] A = {5, 4, 3, 2, 1};
            int[] B = bubbleSort(A);
            Console.ReadKey();
        }
    }
}
The output is the following:
1. -> 45321
2. -> 43521
3. -> 43251
4. -> 43215
5. -> 34215
6. -> 32415
7. -> 32145
8. -> 32145
9. -> 23145
10. -> 21345
11. -> 21345
12. -> 21345
13. -> 12345
14. -> 12345
15. -> 12345
16. -> 12345
17. -> 12345
18. -> 12345
19. -> 12345
20. -> 12345
Any idea why the maths isn't coming out to be 25?
Big-O notation doesn't tell you anything about how many iterations (or how long) an algorithm will take. It is an indication of the growth rate of a function as the number of elements increases (usually towards infinity).
So, in your case, O(n^2) simply means that the bubble sort's computational resource usage grows as the square of the number of elements. So, if you have twice as many elements, you can expect it to take (worst case) 4 times as long (as an upper bound). If you have 4 times as many elements, the complexity increases by a factor of 16. Etc.
For an algorithm with O(n^2) complexity, five elements could take 25 iterations, or 25,000 iterations. There's no way to tell without analyzing the algorithm. In the same vein, a function with O(1) complexity (constant time) could take 0.000001 seconds to execute or two weeks to execute.
If an algorithm takes n^2 - n operations, that's still simplified to O(n^2). Big-O notation is only an approximation of how the algorithm scales, not an exact measurement of how many operations it will need for a specific input.
Consider: Your example, bubble-sorting 5 elements, takes 5x4 = 20 comparisons. That generalizes to bubble-sorting N elements takes N x (N-1) = N^2 - N comparisons, and N^2 very quickly gets a LOT bigger than N. That's where O(N^2) comes from. (For example, for 20 elements, you are looking at 380 comparisons.)
Bubble sort is a specific case, and its full complexity is (n*(n-1)) - which gives you the correct number: 5 elements leads to 5*(5-1) operations, which is 20, and is what you found in the worst case.
The simplified Big O notation, however, removes the constants and the least significantly growing terms, and just gives O(n^2). This makes it easy to compare it to other implementations and algorithms which may not have exactly (n*(n-1)), but when simplified show how the work increases with greater input.
It's much easier to compare the Big O notation, and for large datasets the constants and lesser terms are negligible.
Remember that O(N^2) is simplified from the actual expression C * N^2; that is, there is a bounded constant. For bubble sort, for example, C would be roughly 1/2 (not exactly, but close).
Your comparison count is off too, I think; it should be 10 pairwise comparisons. But I guess you could consider swapping of elements to be another. Either way, all that does is change the constant, not the more important part.
for (int i = 4; i > 0; i--) {
    for (int j = 0; j < i; j++) {
        if (A[j] > A[j+1]) {
            swapValues(A[j], A[j+1]);
        }
    }
}
Comparison count for 5 (0:4) elements should be 10.
i=4 - {(j[0] j[1]) (j[1] j[2]) (j[2] j[3]) (j[3] j[4])} - 4 comparisons
i=3 - {(j[0] j[1]) (j[1] j[2]) (j[2] j[3])} - 3 comparisons
i=2 - {(j[0] j[1]) (j[1] j[2])} - 2 comparisons
i=1 - {(j[0] j[1])} - 1 comparison
