I want to calculate a rolling maximum and minimum value efficiently, meaning anything better than recalculating the maximum/minimum from all the values in the window every time it moves.
There was a post on here that asked the same thing and someone posted a solution involving some kind of stack approach that supposedly worked based on its rating. However I can't find it again for the life of me.
Any help would be appreciated in finding a solution or the post. Thank you all!
The algorithm you want to use is called the ascending minima (C++ implementation).
To do this in C#, you will want to get a double-ended queue (deque) class, and a good one exists on NuGet under the name Nito.Deque.
I have written a quick C# implementation using Nito.Deque, but I have only briefly checked it and wrote it from memory, so it may be wrong!
public static class AscendingMinima
{
private struct MinimaValue
{
public int RemoveIndex { get; set; }
public double Value { get; set; }
}
public static double[] GetMin(this double[] input, int window)
{
var queue = new Deque<MinimaValue>();
var result = new double[input.Length];
for (int i = 0; i < input.Length; i++)
{
var val = input[i];
// Note: in Nito.Deque, queue[0] is the front
while (queue.Count > 0 && i >= queue[0].RemoveIndex)
queue.RemoveFromFront();
while (queue.Count > 0 && queue[queue.Count - 1].Value >= val)
queue.RemoveFromBack();
queue.AddToBack(new MinimaValue{RemoveIndex = i + window, Value = val });
result[i] = queue[0].Value;
}
return result;
}
}
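For example, a quick usage sketch (the sample numbers are mine, not from the original post; it assumes the Nito.Deque package is referenced):
double[] data = { 5, 2, 8, 1, 9 };
double[] minima = data.GetMin(3);
// minima is { 5, 2, 2, 1, 1 }: each entry is the minimum of the window
// of (at most) 3 values ending at that position.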
Here's one way to do it more efficiently. You still have to calculate the value occasionally but, other than certain degenerate data (ever decreasing values), that's minimised in this solution.
We'll limit ourselves to the maximum to simplify things but it's simple to extend to a minimum as well.
All you need is the following:
The window itself, initially empty.
The current maximum (max), initially any value.
The count of the current maximum (maxcount), initially zero.
The idea is to use max and maxcount as a cache for holding the current maximum. Where the cache is valid, you only need to return the value in it, a very fast constant-time operation.
If the cache is invalid when you ask for the maximum, it populates the cache and then returns that value. This is slower than the method in the previous paragraph but subsequent requests for the maximum once the cache is valid again use that faster method.
Here's what you do for maintaining the window and associated data:
Get the next value N.
If the window is full, remove the earliest entry M. If maxcount is greater than 0 and M is equal to max, decrement maxcount. Once maxcount reaches 0, the cache is invalid but we don't need to worry about that until such time the user requests the maximum value (there's no point repopulating the cache until then).
Add N to the rolling window.
If the window size is now 1 (i.e., N is the only current entry), set max to N and maxcount to 1, then go back to step 1.
If maxcount is greater than 0 and N is greater than max, set max to N and maxcount to 1, then go back to step 1.
If maxcount is greater than 0 and N is equal to max, increment maxcount.
Go back to step 1.
Now, at any point while that window management is going on, you may request the maximum value. This is a separate operation, distinct from the window management itself. This can be done using the following rules in sequence.
If the window is empty, there is no maximum: raise an exception or return some sensible sentinel value.
If maxcount is greater than 0, then the cache is valid: simply return max.
Otherwise, the cache needs to be repopulated. Go through the entire list, setting up max and maxcount as per the code snippet below.
set max to window[0], maxcount to 0
for each x in window[]:
    if x > max:
        set max to x, maxcount to 1
    else:
        if x == max:
            increment maxcount
The fact that you mostly maintain a cache of the maximum value and only recalculate when needed makes this a much more efficient solution than simply recalculating blindly whenever an entry is added.
For some definite statistics, I created the following Python program. It uses a sliding window of size 25 and uses random numbers from 0 to 999 inclusive (you can play with these properties to see how they affect the outcome).
First some initialisation code. Note the stat variables, they'll be used to count cache hits and misses:
import random
window = []
max = 0
maxcount = 0
maxwin = 25
statCache = 0
statNonCache = 0
Then the function to add a number to the window, as per my description above:
def addNum(n):
    global window
    global max
    global maxcount
    if len(window) == maxwin:
        m = window[0]
        window = window[1:]
        if maxcount > 0 and m == max:
            maxcount = maxcount - 1
    window.append(n)
    if len(window) == 1:
        max = n
        maxcount = 1
        return
    if maxcount > 0 and n > max:
        max = n
        maxcount = 1
        return
    if maxcount > 0 and n == max:
        maxcount = maxcount + 1
Next, the code which returns the maximum value from the window:
def getMax():
    global max
    global maxcount
    global statCache
    global statNonCache
    if len(window) == 0:
        return None
    if maxcount > 0:
        statCache = statCache + 1
        return max
    max = window[0]
    maxcount = 0
    for val in window:
        if val > max:
            max = val
            maxcount = 1
        else:
            if val == max:
                maxcount = maxcount + 1
    statNonCache = statNonCache + 1
    return max
And, finally, the test harness:
random.seed()
for i in range(1000000):
    val = int(1000 * random.random())
    addNum(val)
    newmax = getMax()
print("%d cached, %d non-cached" % (statCache, statNonCache))
Note that the test harness attempts to get the maximum for every time you add a number to the window. In practice, this may not be needed. In other words, this is the worst-case scenario for the random data generated.
Running that program a few times for pseudo-statistical purposes, we get (formatted and analysed for reporting purposes):
960579 cached, 39421 non-cached
960373 cached, 39627 non-cached
960395 cached, 39605 non-cached
960348 cached, 39652 non-cached
960441 cached, 39559 non-cached
960602 cached, 39398 non-cached
960561 cached, 39439 non-cached
960463 cached, 39537 non-cached
960409 cached, 39591 non-cached
960798 cached, 39202 non-cached
======= ======
9604969 395031
So you can see that, on average for random data, only about 3.95% of the requests resulted in a full recalculation (a cache miss). The vast majority used the cached value. That should be substantially better than having to recalculate the maximum on every insertion into the window.
Some things that will affect that percentage will be:
The window size. Larger sizes mean there's more likelihood of a cache hit, improving the percentage. For example, doubling the window size pretty much halved the cache misses (to 1.95%).
The range of possible values. Less choice here means cache hits in the window are more likely. For example, reducing the range from 0..999 to 0..9 gave a big improvement in reducing cache misses (0.85%).
I'm assuming that by "window" you mean a range a[start] to a[start + len], where start moves along. Consider the minimum value (the maximum is similar) and the move to the window a[start + 1] to a[start + len + 1]. The minimum of the window will change only if (a) a[start + len + 1] < min (a smaller value came in), or (b) a[start] == min (one of the smallest values just left; recompute the minimum).
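A minimal sketch of that update rule (hypothetical names; assumes a circular buffer that is already full, and System.Linq for the recompute):
static double SlideMin(double[] window, ref int start, double newValue, double currentMin)
{
    double leaving = window[start];
    window[start] = newValue;              // overwrite the oldest entry
    start = (start + 1) % window.Length;   // advance the window
    if (newValue < currentMin)
        return newValue;                   // case (a): a smaller value came in
    if (leaving == currentMin)
        return window.Min();               // case (b): one of the smallest values left; recompute
    return currentMin;                     // otherwise the minimum is unchanged
}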
Another, possibly more efficient way of doing this is to fill a priority queue with the first window and update it with each value entering/leaving, but I don't think that is much better: priority queues aren't suited to "pick out a random element from the middle", which is what you need to do when advancing the window, and the code will be much more complex. Better to stick to the simple solution until it is proven that the performance isn't acceptable and that this code is responsible for (much of) the resource consumption.
After writing my own algo yesterday, and asking for improvements, I was referred here. Indeed this algo is more elegant.
I'm not sure it offers constant-time calculation regardless of the window size, but either way, I tested its performance against my own caching algorithm (fairly simple, and probably using the same idea others have proposed).
The caching is 8-15 times faster (tested with rolling windows of 5, 50, 300 and 1000; I don't need more).
Below are both alternatives, with stopwatches and result validation.
static class Program
{
static Random r = new Random();
static int Window = 50; //small, to facilitate the visual functional test; eventually could be 100 or 1000, but not more than 5000.
const int FullDataSize =1000;
static double[] InputArr = new double[FullDataSize]; //array prefilled with the random input data.
//====================== Caching algo variables
static double Low = 0;
static int LowLocation = 0;
static int CurrentLocation = 0;
static double[] Result1 = new double[FullDataSize]; //contains the caching minimum result
static int i1; //incrementor, just to store the result back to the array. In real life, the result is not even stored back to array.
//====================== Ascending Minima algo variables
static double[] Result2 = new double[FullDataSize]; //contains the ascending minima result.
static double[] RollWinArray = new double[Window]; //array for the caching algo
static Deque<MinimaValue> RollWinDeque = new Deque<MinimaValue>(); //Nito.Deque NuGet.
static int i2; //used by the struct of the Deque (not just for result storage)
//====================================== my initially proposed caching algo
static void CalcCachingMin(double currentNum)
{
RollWinArray[CurrentLocation] = currentNum;
if (currentNum <= Low)
{
LowLocation = CurrentLocation;
Low = currentNum;
}
else if (CurrentLocation == LowLocation)
ReFindHighest();
CurrentLocation++;
if (CurrentLocation == Window) CurrentLocation = 0; //this is faster
//CurrentLocation = CurrentLocation % Window; //this is slower, still over 10 fold faster than ascending minima
Result1[i1++] = Low;
}
//full iteration run each time lowest is overwritten.
static void ReFindHighest()
{
Low = RollWinArray[0];
LowLocation = 0; //bug fix. missing from initial version.
for (int i = 1; i < Window; i++)
if (RollWinArray[i] < Low)
{
Low = RollWinArray[i];
LowLocation = i;
}
}
//======================================= Ascending Minima algo based on http://stackoverflow.com/a/14823809/2381899
private struct MinimaValue
{
public int RemoveIndex { get; set; }
public double Value { get; set; }
}
public static void CalcAscendingMinima (double newNum)
{ //same algo as the extension method below, but used on external arrays, and fed with 1 data point at a time like in the projected real time app.
while (RollWinDeque.Count > 0 && i2 >= RollWinDeque[0].RemoveIndex)
RollWinDeque.RemoveFromFront();
while (RollWinDeque.Count > 0 && RollWinDeque[RollWinDeque.Count - 1].Value >= newNum)
RollWinDeque.RemoveFromBack();
RollWinDeque.AddToBack(new MinimaValue { RemoveIndex = i2 + Window, Value = newNum });
Result2[i2++] = RollWinDeque[0].Value;
}
public static double[] GetMin(this double[] input, int window)
{ //this is the initial extension method for ascending minima
//taken from http://stackoverflow.com/a/14823809/2381899
var queue = new Deque<MinimaValue>();
var result = new double[input.Length];
for (int i = 0; i < input.Length; i++)
{
var val = input[i];
// Note: in Nito.Deque, queue[0] is the front
while (queue.Count > 0 && i >= queue[0].RemoveIndex)
queue.RemoveFromFront();
while (queue.Count > 0 && queue[queue.Count - 1].Value >= val)
queue.RemoveFromBack();
queue.AddToBack(new MinimaValue { RemoveIndex = i + window, Value = val });
result[i] = queue[0].Value;
}
return result;
}
//============================================ Test program.
static void Main(string[] args)
{ //this is the test program.
//it runs several attempts of both algos on the same data.
for (int j = 0; j < 10; j++)
{
Low = 12000;
for (int i = 0; i < Window; i++)
RollWinArray[i] = 10000000;
//Fill the data + functional test - generate the numbers and check them in as you go:
InputArr[0] = 12000;
for (int i = 1; i < FullDataSize; i++) //fill the Input array with random data.
//InputArr[i] = r.Next(100) + 11000;//simple data.
InputArr[i] = InputArr[i - 1] + r.NextDouble() - 0.5; //brownian motion data.
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (int i = 0; i < FullDataSize; i++) //run the Caching algo.
CalcCachingMin(InputArr[i]);
stopwatch.Stop();
Console.WriteLine("Caching : " + stopwatch.ElapsedTicks + " ticks, ms: " + stopwatch.ElapsedMilliseconds);
stopwatch.Reset();
stopwatch.Start();
for (int i = 0; i < FullDataSize; i++) //run the Ascending Minima algo
CalcAscendingMinima(InputArr[i]);
stopwatch.Stop();
Console.WriteLine("AscMinima: " + stopwatch.ElapsedTicks + " ticks, ms: " + stopwatch.ElapsedMilliseconds);
stopwatch.Reset();
i1 = 0; i2 = 0; RollWinDeque.Clear();
}
for (int i = 0; i < FullDataSize; i++) //test the results.
if (Result2[i] != Result1[i]) //this is a test that algos are valid. Errors (mismatches) are printed.
Console.WriteLine("Current:" + InputArr[i].ToString("#.00") + "\tLowest of " + Window + "last is " + Result1[i].ToString("#.00") + " " + Result2[i].ToString("#.00") + "\t" + (Result1[i] == Result2[i])); //for validation purposes only.
Console.ReadLine();
}
}
I would suggest maintaining a stack which supports getMin() or getMax().
This can be done with two stacks and costs only constant time.
FYI: https://www.geeksforgeeks.org/design-a-stack-that-supports-getmin-in-o1-time-and-o1-extra-space/
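A rough sketch of such a min-stack (my illustration, not from the linked article; the second stack tracks the minimum of everything pushed so far):
public class MinStack
{
    private readonly Stack<double> _values = new Stack<double>();
    private readonly Stack<double> _minima = new Stack<double>();
    public void Push(double value)
    {
        _values.Push(value);
        // the companion stack always holds the minimum of the current stack contents
        _minima.Push(_minima.Count == 0 ? value : Math.Min(value, _minima.Peek()));
    }
    public double Pop()
    {
        _minima.Pop();
        return _values.Pop();
    }
    public double GetMin() => _minima.Peek();
}
Two of these (one for enqueuing, one for dequeuing) can be combined into a queue with amortized O(1) GetMin(), which is what a sliding window needs.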
I have an array of integers. The value of each element represents the time taken to process a file. The processing of files consists of merging two files at a time. What is the algorithm to find the minimum total time that can be taken for processing all the files? E.g. {3,5,9,12,14,18}.
The time of processing can be calculated as follows:
Case 1) -
a) [8],9,12,14,18
b) [17],12,14,18
c) [26],17,18
d) 26,[35]
e) 61
So total time for processing is 61 + 35 + 26 + 17 + 8 = 147
Case 2) -
a) [21],5,9,12,14
b) [17],[21],9,14
c) [21],[17],[23]
d) [40],[21]
e) 61
This time the total time is 61 + 40 + 23 + 17 + 21 = 162
It seems to me that continuously sorting the array and adding the two smallest elements is the best bet for the minimum, as in Case 1. Is my logic right? If not, what is the right and easiest way to achieve this with the best performance?
Once you have the sorted list, since you are only removing the two minimum items and replacing them with one, it makes more sense to do a sorted insert and place the new item in the correct place instead of re-sorting the entire list. However, this only saves a fractional amount of time - about 1% faster.
My method CostOfMerge doesn't assume the input is a List, but if it is, you can remove the ToList conversion step.
public static class IEnumerableExt {
public static long CostOfMerge(this IEnumerable<int> psrc) {
var src = psrc.ToList();
src.Sort();
long total = 0; //running total of all merge costs; long because it can exceed int.MaxValue
while (src.Count > 1) {
var sum = src[0]+src[1];
src.RemoveRange(0, 2);
var index = src.BinarySearch(sum);
if (index < 0)
index = ~index;
src.Insert(index, sum);
total += sum;
}
return total;
}
}
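For comparison, the same greedy strategy can also be written against a min-heap; a sketch assuming .NET 6+'s PriorityQueue<TElement, TPriority> (this is not part of the answer above):
public static long CostOfMergeHeap(IEnumerable<int> items)
{
    var heap = new PriorityQueue<long, long>();
    foreach (var item in items)
        heap.Enqueue(item, item);
    long total = 0;
    while (heap.Count > 1)
    {
        // repeatedly merge the two cheapest files
        long sum = heap.Dequeue() + heap.Dequeue();
        total += sum;
        heap.Enqueue(sum, sum);
    }
    return total;
}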
As already discussed in other answers, the best strategy will be to always work on the two items with minimal cost for each iteration. So the only remaining question is how to efficiently take the two smallest items each time.
Since you asked for best performance, I shamelessly took the algorithm from NetMage and modified it to speed it up roughly 40% for my test case (thanks and +1 to NetMage).
The idea is to work mostly in place on a single array.
Each iteration increases the starting index by 1 and moves the elements within the array to make space for the sum from the current iteration.
public static long CostOfMerge2(this IEnumerable<int> psrc)
{
long total = 0;
var src = psrc.ToArray();
Array.Sort(src);
var i = 1;
int length = src.Length;
while (i < length)
{
var sum = src[i - 1] + src[i];
total += sum;
// find insert position for sum
var index = Array.BinarySearch(src, i + 1, length - i - 1, sum);
if (index < 0)
index = ~index;
--index;
// shift items that come before insert position one place to the left
if (i < index)
Array.Copy(src, i + 1, src, i, index - i);
src[index] = sum;
++i;
}
return total;
}
I tested with the following calling code (switching between CostOfMerge and CostOfMerge2), with a few different values for random-seed, count of elements and max value of initial items.
static void Main(string[] args)
{
var r = new Random(10);
var testcase = Enumerable.Range(0, 400000).Select(x => r.Next(1000)).ToList();
var sw = Stopwatch.StartNew();
long resultCost = testcase.CostOfMerge();
sw.Stop();
Console.WriteLine($"Cost of Merge: {resultCost}");
Console.WriteLine($"Time of Merge: {sw.Elapsed}");
Console.ReadLine();
}
Result for shown configuration for NetMage CostOfMerge:
Cost of Merge: 3670570720
Time of Merge: 00:00:15.4472251
My CostOfMerge2:
Cost of Merge: 3670570720
Time of Merge: 00:00:08.7193612
Of course, the detailed numbers are hardware-dependent, and the difference might be bigger or smaller depending on a number of factors.
No, that's the minimum for a polyphase merge: where N is the bandwidth (number of files you can merge simultaneously), then you want to merge the smallest (N-1) files at each step. However, with this more general problem, you want to delay the larger files as long as possible -- you may want an early step or two to merge fewer than (N-1) files, somewhat like having a "bye" in an elimination tourney. You want all the latter steps to involve the full (N-1) files.
For instance, given N=4 and files 1, 6, 7, 8, 14, 22:
Early merge:
[22], 14, 22
[58]
total = 80
Late merge:
[14], 8, 14, 22
[58]
total = 72
Here, you can apply the following logic to get the desired output.
Get the first two minimum values from the list.
Remove the first two minimum values from the list.
Append the sum of those two values to the list.
Continue until the list becomes of size 1.
The minimum total time taken to process all items is the sum of all the intermediate sums produced along the way.
You can follow my Java code below, if you find it helpful. :)
import java.util.ArrayList;
import java.util.Arrays;
public class MinimumSums {
private static Integer getFirstMinimum(ArrayList<Integer> list) {
Integer min = Integer.MAX_VALUE;
for(int i=0; i<list.size(); i++) {
if(list.get(i) <= min)
min = list.get(i);
}
return min;
}
private static Integer getSecondMinimum(ArrayList<Integer> list, Integer firstItem) {
Integer min = Integer.MAX_VALUE;
boolean skippedFirst = false;
for(int i=0; i<list.size(); i++) {
// skip exactly one occurrence of the first minimum so duplicate values are handled correctly
if(!skippedFirst && list.get(i).equals(firstItem)) {
skippedFirst = true;
continue;
}
if(list.get(i) <= min)
min = list.get(i);
}
return min;
}
public static void main(String[] args) {
Integer[] processes = {5, 9, 3, 14, 12, 18};
ArrayList<Integer> list = new ArrayList<Integer>();
ArrayList<Integer> temp = new ArrayList<Integer>();
list.addAll(Arrays.asList(processes));
while(list.size()!= 1) {
Integer firstMin = getFirstMinimum(list); // getting first min value
Integer secondMin = getSecondMinimum(list, firstMin); // getting second min
list.remove(firstMin);
list.remove(secondMin);
list.add(firstMin+secondMin);
temp.add(firstMin + secondMin);
}
System.out.println(temp); // prints all the intermediate merge sums
int total = 0;
for(Integer sum : temp)
total += sum;
System.out.println(total); // prints the minimum total processing time (the sum of all merges)
}
}
The code:
using System;
using System.Diagnostics;
namespace ConsoleApp1
{
class Program
{
const int maxResult = 120; //this can change but hardcoded for this code
static int poolPos;
static double[] pool = new double[maxResult * 4];
static int maxPos;
static double[] result = new double[maxResult];
static void Main(string[] args)
{
var sw = Stopwatch.StartNew();
for(int i = 0; i < 100_000; ++i)
Unlock();
Console.WriteLine(sw.ElapsedMilliseconds);
//Console.Read();
}
static void Unlock()
{
int total = maxResult;
//reset array
poolPos = 0;
maxPos = 0;
FindLock(4);
while (total-- > 0)
{
int i = 0;
double maxWeight = pool[0];
int pos = 0;
while (++i < poolPos) //O(n), can it be faster?
if (pool[i] >= maxWeight) //can have duplicate value, find latest max inserted
(maxWeight, pos) = (pool[i], i); //keep track
result[maxPos++] = maxWeight; //store the result
pool[pos] = pool[--poolPos]; //remove from array by swapping it with last item in the array
FindLock();
}
}
//simulate what feed the array
//don't look at this unless something should be done at insert time
static Random rnd = new Random(42);
static void FindLock(int add = -1)
{
if(add == -1)
{
add = rnd.Next(1, 4);
}
for(int i = 0;i<add;++i)
{
pool[poolPos++] = rnd.Next(-500, 500) / 100d;
}
}
}
}
profiling result:
Based on the profiling, I'm trying to find a way to speed it up. All the solutions I found online use a double stack or double queue, so they only use the head or tail value of the array. The code above can pick any item in the list that meets the requirement, so I don't think I can use a stack or queue.
With a "priority queue" or "max heap", the table is partially sorted, and many operations are O(log(N)):
max (or min, but not both)
insert one row
delete one row
Item 1 is known to be greater than items 2 and 3. Item 1 is always the max.
Item 2 is known to be greater than items 4 and 5.
Item 3 is known to be greater than items 6 and 7.
etc. In general:
Item [k] is known to be greater than items [2*k] and [2*k+1].
Inserts and deletes get a little tricky since you want to keep the table compact.
One of many references: https://www.techiedelight.com/introduction-priority-queues-using-binary-heaps/
The structure can be handy if items are coming and going a lot, but the important action is to grab the max value. Accessing the max value is O(1); deleting it is O(log(N)).
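A rough sketch of an insert using the 1-based parent/child indexing described above (a hypothetical helper; slot 0 is left unused so the root sits at index 1):
static void HeapInsert(List<double> heap, double value)
{
    if (heap.Count == 0) heap.Add(0);   // dummy slot 0 so the root lives at index 1
    heap.Add(value);
    int k = heap.Count - 1;
    // bubble the new item up while it is larger than its parent at k/2
    while (k > 1 && heap[k] > heap[k / 2])
    {
        (heap[k], heap[k / 2]) = (heap[k / 2], heap[k]);
        k /= 2;
    }
}
The max is then always heap[1], readable in O(1), and each insert does at most log2(N) swaps.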
By definition, if you're working with an unordered list, finding an item is always going to be O(1) in the best case, and O(n) in the worst case.
You can use a hash table to get better lookup speeds, as well as insert/delete. However the hash algorithm itself can be just as expensive as iterating through your list, so proceed with caution. Depending on the use-case, a hash table might be the way to go.
I'm playing around with Parallel.ForEach in a C# console application, but can't seem to get it right. I'm creating an array with random numbers, and I have a sequential foreach and a Parallel.ForEach that find the largest value in the array. With approximately the same code in C++, I started to see a payoff from using several threads at 3M values in the array. But the Parallel.ForEach is twice as slow even at 100M values. What am I doing wrong?
class Program
{
static void Main(string[] args)
{
dostuff();
}
static void dostuff() {
Console.WriteLine("How large do you want the array to be?");
int size = int.Parse(Console.ReadLine());
int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++)
{
arr[i] = rand.Next(0, int.MaxValue);
}
var watchSeq = System.Diagnostics.Stopwatch.StartNew();
var largestSeq = FindLargestSequentially(arr);
watchSeq.Stop();
var elapsedSeq = watchSeq.ElapsedMilliseconds;
Console.WriteLine("Finished sequential in: " + elapsedSeq + "ms. Largest = " + largestSeq);
var watchPar = System.Diagnostics.Stopwatch.StartNew();
var largestPar = FindLargestParallel(arr);
watchPar.Stop();
var elapsedPar = watchPar.ElapsedMilliseconds;
Console.WriteLine("Finished parallel in: " + elapsedPar + "ms Largest = " + largestPar);
dostuff();
}
static int FindLargestSequentially(int[] arr) {
int largest = arr[0];
foreach (int i in arr) {
if (largest < i) {
largest = i;
}
}
return largest;
}
static int FindLargestParallel(int[] arr) {
int largest = arr[0];
Parallel.ForEach<int, int>(arr, () => 0, (i, loop, subtotal) =>
{
if (i > subtotal)
subtotal = i;
return subtotal;
},
(finalResult) => {
Console.WriteLine("Thread finished with result: " + finalResult);
if (largest < finalResult) largest = finalResult;
}
);
return largest;
}
}
It's the performance ramifications of having a very small delegate body.
We can achieve better performance using partitioning. In this case the body delegate performs work over a large chunk of data at a time.
static int FindLargestParallelRange(int[] arr)
{
object locker = new object();
int largest = arr[0];
Parallel.ForEach(Partitioner.Create(0, arr.Length), () => arr[0], (range, loop, subtotal) =>
{
for (int i = range.Item1; i < range.Item2; i++)
if (arr[i] > subtotal)
subtotal = arr[i];
return subtotal;
},
(finalResult) =>
{
lock (locker)
if (largest < finalResult)
largest = finalResult;
});
return largest;
}
Pay attention to synchronizing the localFinally delegate. Also note the need for proper initialization of localInit: () => arr[0] instead of () => 0 (with () => 0, the result would be wrong whenever all of the array values are negative).
Partitioning with PLINQ:
static int FindLargestPlinqRange(int[] arr)
{
return Partitioner.Create(0, arr.Length)
.AsParallel()
.Select(range =>
{
int largest = arr[0];
for (int i = range.Item1; i < range.Item2; i++)
if (arr[i] > largest)
largest = arr[i];
return largest;
})
.Max();
}
I highly recommend the free book Patterns of Parallel Programming by Stephen Toub.
As the other answerers have mentioned, the action you're trying to perform against each item here is so insignificant that there are a variety of other factors which end up carrying more weight than the actual work you're doing. These may include:
JIT optimizations
CPU branch prediction
I/O (outputting thread results while the timer is running)
the cost of invoking delegates
the cost of task management
the system incorrectly guessing what thread strategy will be optimal
memory/cpu caching
memory pressure
environment (debugging)
etc.
Running each approach a single time is not an adequate way to test, because it enables a number of the above factors to weigh more heavily on one iteration than on another. You should start with a more robust benchmarking strategy.
Furthermore, your implementation is actually dangerously incorrect. The documentation specifically says:
The localFinally delegate is invoked once per task to perform a final action on each task’s local state. This delegate might be invoked concurrently on multiple tasks; therefore, you must synchronize access to any shared variables.
You have not synchronized your final delegate, so your function is prone to race conditions that would make it produce incorrect results.
As in most cases, the best approach to this one is to take advantage of work done by people smarter than we are. In my testing, the following approach appears to be the fastest overall:
return arr.AsParallel().Max();
The Parallel.ForEach loop should be expected to run slower, because the algorithm as used gains little from parallelism and a lot more work is being done to run it.
In the single thread, to find the max value, we can take the first number as our max value and compare it to every other number in the array. If one of the numbers is larger than our current max, we replace it and continue. This way we access each number in the array once, for a total of N comparisons.
In the Parallel loop above, the algorithm creates overhead because each operation is wrapped inside a function call with a return value. So in addition to doing the comparisons, it is paying the overhead of pushing and popping these calls on the call stack. In addition, since each call is dependent on the value of the function call before it, it needs to run in sequence.
In the Parallel For Loop below, the array is divided into an explicit number of threads determined by the variable threadNumber. This limits the overhead of function calls to a low number.
Note that for low values, the parallel loop performs slower. However, for 100M, there is a decrease in time elapsed.
static int FindLargestParallel(int[] arr)
{
var answers = new ConcurrentBag<int>();
int threadNumber = 4;
int partitionSize = arr.Length/threadNumber;
Parallel.For(0, /* starting number */
threadNumber+1, /* Adding 1 to threadNumber in case array.Length not evenly divisible by threadNumber */
i =>
{
if (i*partitionSize < arr.Length) /* check in case # in array is divisible by # threads */
{
var max = arr[i*partitionSize];
for (var x = i*partitionSize;
x < (i + 1)*partitionSize && x < arr.Length;
++x)
{
if (arr[x] > max)
max = arr[x];
}
answers.Add(max);
}
});
/* note the shortcut in finding max in the bag */
return answers.Max(i=>i);
}
Some thoughts here: in the parallel case, there is thread management logic involved that determines how many threads it wants to use. This thread management logic presumably runs on your main thread. Every time a thread returns with a new maximum value, the management logic kicks in and determines the next work item (the next number to process in your array). I'm pretty sure that this requires some kind of locking. In any case, determining the next item may even cost more than performing the comparison operation itself.
That sounds like an order of magnitude more work (overhead) to me than a single thread that processes one number after the other. In the single-threaded case there are a number of optimizations at play: no bounds checks, the CPU can load data into its first-level cache, etc. I'm not sure which of these optimizations apply in the parallel case.
Keep in mind that on a typical desktop machine there are only 2 to 4 physical CPU cores available, so you will never have more than that actually doing work. So if the parallel processing overhead is more than 2-4 times that of a single-threaded operation, the parallel version will inevitably be slower, which is what you are observing.
Have you attempted to run this on a 32 core machine? ;-)
A better solution would be determine non-overlapping ranges (start + stop index) covering the entire array and let each parallel task process one range. This way, each parallel task can internally do a tight single-threaded loop and only return once the entire range has been processed. You could probably even determine a near optimal number of ranges based on the number of logical cores of the machine. I haven't tried this but I'm pretty sure you will see an improvement over the single-threaded case.
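A rough sketch of that range-splitting idea (hypothetical names; the range count is taken from Environment.ProcessorCount). This is essentially what the Partitioner.Create answer above does for you:
static int FindLargestByRanges(int[] arr)
{
    int rangeCount = Environment.ProcessorCount;
    int chunk = (arr.Length + rangeCount - 1) / rangeCount;
    int largest = arr[0];
    object locker = new object();
    Parallel.For(0, rangeCount, r =>
    {
        int start = r * chunk;
        int end = Math.Min(start + chunk, arr.Length);
        if (start >= end) return;
        int localMax = arr[start];
        for (int i = start + 1; i < end; i++)   // tight single-threaded inner loop
            if (arr[i] > localMax) localMax = arr[i];
        lock (locker)
            if (localMax > largest) largest = localMax;
    });
    return largest;
}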
Try splitting the set into batches and running the batches in parallel, where the number of batches corresponds to your number of CPU cores.
I ran some equations 1K, 10K and 1M times using the following methods:
A "for" loop.
A "Parallel.For" from the System.Threading.Tasks lib, across the entire set.
A "Parallel.For" across 4 batches.
A "Parallel.ForEach" from the System.Threading.Tasks lib, across the entire set.
A "Parallel.ForEach" across 4 batches.
Results: (Measured in seconds)
Conclusion:
Processing batches in parallel using the "Parallel.ForEach" has the best outcome in cases above 10K records. I believe the batching helps because it utilizes all CPU cores (4 in this example), but also minimizes the amount of threading overhead associated with parallelization.
Here is my code:
public void ParallelSpeedTest()
{
var rnd = new Random(56);
int range = 1000000;
int numberOfCores = 4;
int batchSize = range / numberOfCores;
int[] rangeIndexes = Enumerable.Range(0, range).ToArray();
double[] inputs = rangeIndexes.Select(n => rnd.NextDouble()).ToArray();
double[] weights = rangeIndexes.Select(n => rnd.NextDouble()).ToArray();
double[] outputs = new double[rangeIndexes.Length];
/// Series "for"...
var startTimeSeries = DateTime.Now;
for (var i = 0; i < range; i++)
{
outputs[i] = Math.Sqrt(Math.Pow(inputs[i] * weights[i], 2));
}
var durationSeries = DateTime.Now - startTimeSeries;
/// "Parallel.For"...
var startTimeParallel = DateTime.Now;
Parallel.For(0, range, (i) => {
outputs[i] = Math.Sqrt(Math.Pow(inputs[i] * weights[i], 2));
});
var durationParallelFor = DateTime.Now - startTimeParallel;
/// "Parallel.For" in Batches...
var startTimeParallel2 = DateTime.Now;
Parallel.For(0, numberOfCores, (c) => {
var endValue = (c == numberOfCores - 1) ? range : (c + 1) * batchSize;
var startValue = c * batchSize;
for (var i = startValue; i < endValue; i++)
{
outputs[i] = Math.Sqrt(Math.Pow(inputs[i] * weights[i], 2));
}
});
var durationParallelForBatches = DateTime.Now - startTimeParallel2;
/// "Parallel.ForEach"...
var startTimeParallelForEach = DateTime.Now;
Parallel.ForEach(rangeIndexes, (i) => {
outputs[i] = Math.Sqrt(Math.Pow(inputs[i] * weights[i], 2));
});
var durationParallelForEach = DateTime.Now - startTimeParallelForEach;
/// Parallel.ForEach in Batches...
List<Tuple<int,int>> ranges = new List<Tuple<int, int>>();
for (var i = 0; i < numberOfCores; i++)
{
int start = i * batchSize;
int end = (i == numberOfCores - 1) ? range : (i + 1) * batchSize;
ranges.Add(new Tuple<int,int>(start, end));
}
var startTimeParallelBatches = DateTime.Now;
Parallel.ForEach(ranges, (batch) => {
for(var i = batch.Item1; i < batch.Item2; i++) {
outputs[i] = Math.Sqrt(Math.Pow(inputs[i] * weights[i], 2));
}
});
var durationParallelForEachBatches = DateTime.Now - startTimeParallelBatches;
Debug.Print($"=================================================================");
Debug.Print($"Given: Set-size: {range}, number-of-batches: {numberOfCores}, batch-size: {batchSize}");
Debug.Print($".................................................................");
Debug.Print($"Series For: {durationSeries}");
Debug.Print($"Parallel For: {durationParallelFor}");
Debug.Print($"Parallel For Batches: {durationParallelForBatches}");
Debug.Print($"Parallel ForEach: {durationParallelForEach}");
Debug.Print($"Parallel ForEach Batches: {durationParallelForEachBatches}");
Debug.Print($"");
}
Given this simple piece of code and a 10-million-element array of random numbers:
static int Main(string[] args)
{
int size = 10000000;
int num = 10; //increase num to reduce number of buckets
int numOfBuckets = size/num;
int[] ar = new int[size];
Random r = new Random(); //initialize with random numbers
for (int i = 0; i < size; i++)
ar[i] = r.Next(size);
var s = new Stopwatch();
s.Start();
var group = ar.GroupBy(i => i / num);
var l = group.Count();
s.Stop();
Console.WriteLine(s.ElapsedMilliseconds);
Console.ReadLine();
return 0;
}
I did some performance measurements on grouping: when the number of buckets is 10k the estimated execution time is 0.7s, for 100k buckets it is 2s, for 1M buckets it is 7.5s.
I wonder why that is. I imagine that if GroupBy is implemented using a hash table there might be a problem with collisions. For example, initially the hash table is prepared to work for, say, 1000 groups, and then, when the number of groups grows, it needs to increase the size and rehash. If this were the case, I could write my own grouping where I would initialize the hash table with the expected number of buckets; I did that, but it was only slightly faster.
So my question is: why does the number of buckets influence GroupBy performance that much?
EDIT:
Running in release mode changes the results to 0.55s, 1.6s and 6.5s respectively.
I also changed the group.ToArray to the piece of code below, just to force execution of the grouping:
foreach (var g in group)
array[g.Key] = 1;
where array is initialized before the timer with the appropriate size; the results stayed almost the same.
EDIT2:
You can see the working code from mellamokb here: pastebin.com/tJUYUhGL
I'm pretty certain this is showing the effects of memory locality (various levels of caching) and also object allocation.
To verify this, I took three steps:
Improve the benchmarking to avoid unnecessary parts and to garbage collect between tests
Remove the LINQ part by populating a Dictionary (which is effectively what GroupBy does behind the scenes)
Remove even Dictionary<,> and show the same trend for plain arrays.
In order to show this for arrays, I needed to increase the input size, but it does show the same kind of growth.
Here's a short but complete program which can be used to test both the dictionary and the array side - just flip which line is commented out in the middle:
using System;
using System.Collections.Generic;
using System.Diagnostics;
class Test
{
const int Size = 100000000;
const int Iterations = 3;
static void Main()
{
int[] input = new int[Size];
// Use the same seed for repeatability
var rng = new Random(0);
for (int i = 0; i < Size; i++)
{
input[i] = rng.Next(Size);
}
// Switch to PopulateArray to change which method is tested
Func<int[], int, TimeSpan> test = PopulateDictionary;
for (int buckets = 10; buckets <= Size; buckets *= 10)
{
TimeSpan total = TimeSpan.Zero;
for (int i = 0; i < Iterations; i++)
{
// Switch which line is commented to change the test
// total += PopulateDictionary(input, buckets);
total += PopulateArray(input, buckets);
GC.Collect();
GC.WaitForPendingFinalizers();
}
Console.WriteLine("{0,9}: {1,7}ms", buckets, (long) total.TotalMilliseconds);
}
}
static TimeSpan PopulateDictionary(int[] input, int buckets)
{
int divisor = input.Length / buckets;
var dictionary = new Dictionary<int, int>(buckets);
var stopwatch = Stopwatch.StartNew();
foreach (var item in input)
{
int key = item / divisor;
int count;
dictionary.TryGetValue(key, out count);
count++;
dictionary[key] = count;
}
stopwatch.Stop();
return stopwatch.Elapsed;
}
static TimeSpan PopulateArray(int[] input, int buckets)
{
int[] output = new int[buckets];
int divisor = input.Length / buckets;
var stopwatch = Stopwatch.StartNew();
foreach (var item in input)
{
int key = item / divisor;
output[key]++;
}
stopwatch.Stop();
return stopwatch.Elapsed;
}
}
Results on my machine:
PopulateDictionary:
10: 10500ms
100: 10556ms
1000: 10557ms
10000: 11303ms
100000: 15262ms
1000000: 54037ms
10000000: 64236ms // Why is this slower? See later.
100000000: 56753ms
PopulateArray:
10: 1298ms
100: 1287ms
1000: 1290ms
10000: 1286ms
100000: 1357ms
1000000: 2717ms
10000000: 5940ms
100000000: 7870ms
An earlier version of PopulateDictionary used an Int32Holder class, and created one for each bucket (when the lookup in the dictionary failed). This was faster when there was a small number of buckets (presumably because we were only going through the dictionary lookup path once per iteration instead of twice) but got significantly slower, and ended up running out of memory. This would contribute to fragmented memory access as well, of course. Note that PopulateDictionary specifies the capacity to start with, to avoid effects of data copying within the test.
The aim of using the PopulateArray method is to remove as much framework code as possible, leaving less to the imagination. I haven't yet tried using an array of a custom struct (with various different struct sizes) but that may be something you'd like to try too.
EDIT: I can reproduce the oddity of the slower result for 10000000 than 100000000 at will, regardless of test ordering. I don't understand why yet. It may well be specific to the exact processor and cache I'm using...
--EDIT--
The reason why 10000000 is slower than the 100000000 results has to do with the way hashing works. A few more tests explain this.
First off, let's look at the operations. There's Dictionary.FindEntry, which is used in the [] indexing and in Dictionary.TryGetValue, and there's Dictionary.Insert, which is used in the [] indexing and in Dictionary.Add. If we just did a FindEntry, the timings would go up as we expect:
static TimeSpan PopulateDictionary1(int[] input, int buckets)
{
int divisor = input.Length / buckets;
var dictionary = new Dictionary<int, int>(buckets);
var stopwatch = Stopwatch.StartNew();
foreach (var item in input)
{
int key = item / divisor;
int count;
dictionary.TryGetValue(key, out count);
}
stopwatch.Stop();
return stopwatch.Elapsed;
}
This implementation doesn't have to deal with hash collisions (because there are none), which makes it behave as we expect. Once we start dealing with collisions, the timings start to suffer. If we have as many buckets as elements, there are obviously fewer collisions... To be exact, we can figure out exactly how many collisions there are by doing:
static TimeSpan PopulateDictionary(int[] input, int buckets)
{
int divisor = input.Length / buckets;
int c1, c2;
c1 = c2 = 0;
var dictionary = new Dictionary<int, int>(buckets);
var stopwatch = Stopwatch.StartNew();
foreach (var item in input)
{
int key = item / divisor;
int count;
if (!dictionary.TryGetValue(key, out count))
{
dictionary.Add(key, 1);
++c1;
}
else
{
count++;
dictionary[key] = count;
++c2;
}
}
stopwatch.Stop();
Console.WriteLine("{0}:{1}", c1, c2);
return stopwatch.Elapsed;
}
The result is something like this:
10:99999990
10: 4683ms
100:99999900
100: 4946ms
1000:99999000
1000: 4732ms
10000:99990000
10000: 4964ms
100000:99900000
100000: 7033ms
1000000:99000000
1000000: 22038ms
9999538:90000462 <<-
10000000: 26104ms
63196841:36803159 <<-
100000000: 25045ms
Note the value of '36803159'. This answers the question of why the 100000000 result is faster than the 10000000 one: it simply has to do fewer operations -- and since caching fails anyway, that factor no longer makes a difference.
10k the estimated execution time is 0.7s, for 100k buckets it is 2s, for 1m buckets it is 7.5s.
This is an important pattern to recognize when you profile code. It is one of the standard size vs execution time relationships in software algorithms. Just from seeing the behavior, you can tell a lot about the way the algorithm was implemented. And the other way around of course, from the algorithm you can predict the expected execution time. A relationship that's annotated in the Big Oh notation.
The speediest code you can get is amortized O(1); execution time barely increases when you double the size of the problem. The Dictionary<> class behaves that way, as John demonstrated. The increase in time as the problem set gets large is the "amortized" part, a side effect of the Dictionary having to perform linear O(n) searches in buckets that keep getting bigger.
A very common pattern is O(n). That tells you that there is a single for() loop in the algorithm that iterates over the collection. O(n^2) tells you there are two nested for() loops. O(n^3) has three, etcetera.
What you got is the one in between, O(log n). It is the standard complexity of a divide-and-conquer algorithm. In other words, each pass splits the problem in two, continuing with the smaller set. Very common, you see it back in sorting algorithms. Binary search is the one you find in your text book. Note how log₂(10) = 3.3, very close to the increment you see in your test. Perf starts to tank a bit for very large sets due to the poor locality of reference, a CPU cache problem that's always associated with O(log n) algorithms.
The one thing that John's answer demonstrates is that his guess cannot be correct: GroupBy() certainly does not use a Dictionary<>. It is not possible by design; Dictionary<> cannot provide an ordered collection, and GroupBy() must be ordered. It says so in the MSDN Library:
The IGrouping objects are yielded in an order based on the order of the elements in source that produced the first key of each IGrouping. Elements in a grouping are yielded in the order they appear in source.
Not having to maintain order is what makes Dictionary<> fast. Keeping order always costs O(log n); think of a binary tree in your text book.
Long story short, if you don't actually care about order, and you surely would not for random numbers, then you don't want to use GroupBy(). You want to use a Dictionary<>.
There are (at least) two influencing factors: first, a hash table lookup only takes O(1) if you have a perfect hash function, which does not exist in general. Thus, you have hash collisions.
I guess more important, though, are caching effects. Modern CPUs have large caches, so for the smaller bucket counts, the hash table itself might fit into the cache. As the hash table is frequently accessed, this might have a strong influence on the performance. If there are more buckets, more accesses to RAM might be necessary, which are slow compared to a cache hit.
There are a few factors at work here.
Hashes and groupings
The way grouping works is by creating a hash table. Each individual group then supports an 'add' operation, which adds an element to that group's list. To put it bluntly, it's like a Dictionary<Key, List<Value>>.
Hash tables are always overallocated. If you add an element to the hash, it checks if there is enough capacity, and if not, recreates the hash table with a larger capacity (To be exact: new capacity = count * 2 with count the number of groups). However, a larger capacity means that the bucket index is no longer correct, which means you have to re-build the entries in the hash table. The Resize() method in Lookup<Key, Value> does this.
The 'groups' themselves work like a List<T>. These too are overallocated, but are easier to reallocate. To be precise: the data is simply copied (with Array.Copy in Array.Resize) and a new element is added. Since there's no re-hashing or calculation involved, this is quite a fast operation.
The initial capacity of a grouping is 7. This means, for 10 elements you need to reallocate 1 time, for 100 elements 4 times, for 1000 elements 8 times, and so on. Because you have to re-hash more elements each time, your code gets a bit slower each time the number of buckets grows.
I think these overallocations are the largest contributors to the small growth in the timings as the number of buckets grows. The easiest way to test this theory is to do no overallocations at all (test 1) and simply put counters in an array. The results can be seen in the code for FixArrayTest below (or, if you like, FixBucketTest, which is closer to how groupings work). As you can see, the timings for # buckets = 10...10000 are the same, which is correct according to this theory.
Cache and random
Caching and random number generators aren't friends.
Our little test also shows that when the number of buckets grows above a certain threshold, memory comes into play. On my computer this is at an array size of roughly 4 MB (4 * number of buckets). Because the data is random, random chunks of RAM will be loaded and unloaded into the cache, which is a slow process. This also explains the large jump in the timings. To see this in action, change the random numbers to a sequence (called 'test 2'), and - because the data pages can now be cached - the speed will remain the same overall.
Note that hashes overallocate, so you will hit the mark before you have a million entries in your grouping.
Test code
static void Main(string[] args)
{
int size = 10000000;
int[] ar = new int[size];
//random number init with numbers [0,size-1]
var r = new Random();
for (var i = 0; i < size; i++)
{
ar[i] = r.Next(0, size);
//ar[i] = i; // Test 2 -> uncomment to see the effects of caching more clearly
}
Console.WriteLine("Fixed dictionary:");
for (var numBuckets = 10; numBuckets <= 1000000; numBuckets *= 10)
{
var num = (size / numBuckets);
var timing = 0L;
for (var i = 0; i < 5; i++)
{
timing += FixBucketTest(ar, num);
//timing += FixArrayTest(ar, num); // test 1
}
var avg = ((float)timing) / 5.0f;
Console.WriteLine("Avg Time: " + avg + " ms for " + numBuckets);
}
Console.WriteLine("Fixed array:");
for (var numBuckets = 10; numBuckets <= 1000000; numBuckets *= 10)
{
var num = (size / numBuckets);
var timing = 0L;
for (var i = 0; i < 5; i++)
{
timing += FixArrayTest(ar, num); // test 1
}
var avg = ((float)timing) / 5.0f;
Console.WriteLine("Avg Time: " + avg + " ms for " + numBuckets);
}
}
static long FixBucketTest(int[] ar, int num)
{
// This test shows that timings will not grow for the smaller numbers of buckets if you don't have to re-allocate
System.Diagnostics.Stopwatch s = new Stopwatch();
s.Start();
var grouping = new Dictionary<int, List<int>>(ar.Length / num + 1); // exactly the right size
foreach (var item in ar)
{
int idx = item / num;
List<int> ll;
if (!grouping.TryGetValue(idx, out ll))
{
grouping.Add(idx, ll = new List<int>());
}
//ll.Add(item); //-> this would complete a 'grouper'; however, we don't want the overallocator of List to kick in
}
s.Stop();
return s.ElapsedMilliseconds;
}
// Test with arrays
static long FixArrayTest(int[] ar, int num)
{
System.Diagnostics.Stopwatch s = new Stopwatch();
s.Start();
int[] buf = new int[(ar.Length / num + 1) * 10];
foreach (var item in ar)
{
int code = (item & 0x7FFFFFFF) % buf.Length;
buf[code]++;
}
s.Stop();
return s.ElapsedMilliseconds;
}
When executing bigger calculations, less physical memory is available on the computer; counting the buckets will be slower with less memory, and as you expand the buckets, your memory will decrease.
Try something like the following:
int size = 2500000; //10000000 divided by 4
int num = 10; //same divisor as in the original code
int[] ar = new int[size];
//random number init with numbers [0,size-1]
System.Diagnostics.Stopwatch s = new Stopwatch();
s.Start();
for (int i = 0; i<4; i++)
{
var group = ar.GroupBy(x => x / num);
//the number of expected buckets is size / num.
var l = group.ToArray();
}
s.Stop();
calculating 4 times with smaller arrays.
I need to create a list of numbers from a range (for example from x to y) in a random order so that every order has an equal chance.
I need this for a music player I write in C#, to create play lists in a random order.
Any ideas?
Thanks.
EDIT: I'm not interested in changing the original list, just pick up random indexes from a range in a random order so that every order has an equal chance.
Here's what I've written so far:
public static IEnumerable<int> RandomIndexes(int count)
{
if (count > 0)
{
int[] indexes = new int[count];
int indexesCountMinus1 = count - 1;
for (int i = 0; i < count; i++)
{
indexes[i] = i;
}
Random random = new Random();
while (indexesCountMinus1 > 0)
{
int currIndex = random.Next(0, indexesCountMinus1 + 1);
yield return indexes[currIndex];
indexes[currIndex] = indexes[indexesCountMinus1];
indexesCountMinus1--;
}
yield return indexes[0];
}
}
It's working, but the only problem with this is that I need to allocate an array in memory of size count. I'm looking for something that does not require memory allocation.
Thanks.
This can actually be tricky if you're not careful (i.e., using a naïve shuffling algorithm). Take a look at the Fisher-Yates/Knuth shuffle algorithm for proper distribution of values.
Once you have the shuffling algorithm, the rest should be easy.
Here's more detail from Jeff Atwood.
Lastly, here's Jon Skeet's implementation and description.
EDIT
I don't believe that there's a solution that satisfies your two conflicting requirements (first, to be random with no repeats and second to not allocate any additional memory). I believe you may be prematurely optimizing your solution as the memory implications should be negligible, unless you're embedded. Or, perhaps I'm just not smart enough to come up with an answer.
With that, here's code that will create an array of evenly distributed random indexes using the Knuth-Fisher-Yates algorithm (with a slight modification). You can cache the resulting array, or perform any number of optimizations depending on the rest of your implementation.
private static int[] BuildShuffledIndexArray( int size ) {
int[] array = new int[size];
Random rand = new Random();
for ( int currentIndex = array.Length - 1; currentIndex > 0; currentIndex-- ) {
int nextIndex = rand.Next( currentIndex + 1 );
Swap( array, currentIndex, nextIndex );
}
return array;
}
private static void Swap( IList<int> array, int firstIndex, int secondIndex ) {
if ( array[firstIndex] == 0 ) {
array[firstIndex] = firstIndex;
}
if ( array[secondIndex] == 0 ) {
array[secondIndex] = secondIndex;
}
int temp = array[secondIndex];
array[secondIndex] = array[firstIndex];
array[firstIndex] = temp;
}
NOTE: You can use ushort instead of int to halve the size in memory as long as you don't have more than 65,535 items in your playlist. You could always programmatically switch to int if the size exceeds ushort.MaxValue. If I, personally, added more than 65K items to a playlist, I wouldn't be shocked by increased memory utilization.
Remember, too, that this is a managed language. The VM will always reserve more memory than you are using to limit the number of times it needs to ask the OS for more RAM and to limit fragmentation.
EDIT
Okay, last try: we can look to tweak the performance/memory trade off: You could create your list of integers, then write it to disk. Then just keep a pointer to the offset in the file. Then every time you need a new number, you just have disk I/O to deal with. Perhaps you can find some balance here, and just read N-sized blocks of data into memory where N is some number you're comfortable with.
Seems like a lot of work for a shuffle algorithm, but if you're dead-set on conserving memory, then at least it's an option.
If you use a maximal linear feedback shift register, you will use O(1) of memory and roughly O(1) time. See here for a handy C implementation (two lines! woo-hoo!) and tables of feedback terms to use.
And here is a solution:
public class MaximalLFSR
{
private int GetFeedbackSize(uint v)
{
uint r = 0;
while ((v >>= 1) != 0)
{
r++;
}
if (r < 4)
r = 4;
return (int)r;
}
static uint[] _feedback = new uint[] {
0x9, 0x17, 0x30, 0x44, 0x8e,
0x108, 0x20d, 0x402, 0x829, 0x1013, 0x203d, 0x4001, 0x801f,
0x1002a, 0x2018b, 0x400e3, 0x801e1, 0x10011e, 0x2002cc, 0x400079, 0x80035e,
0x1000160, 0x20001e4, 0x4000203, 0x8000100, 0x10000235, 0x2000027d, 0x4000016f, 0x80000478
};
private uint GetFeedbackTerm(int bits)
{
if (bits < 4 || bits >= 28)
throw new ArgumentOutOfRangeException("bits");
return _feedback[bits];
}
public IEnumerable<int> RandomIndexes(int count)
{
if (count < 0)
throw new ArgumentOutOfRangeException("count");
int bitsForFeedback = GetFeedbackSize((uint)count);
Random r = new Random();
uint i = (uint)(r.Next(1, count - 1));
uint feedback = GetFeedbackTerm(bitsForFeedback);
int valuesReturned = 0;
while (valuesReturned < count)
{
if ((i & 1) != 0)
{
i = (i >> 1) ^ feedback;
}
else {
i = (i >> 1);
}
if (i <= count)
{
valuesReturned++;
yield return (int)(i-1);
}
}
}
}
Now, I selected the feedback terms (badly) at random from the link above. You could also implement a version that had multiple maximal terms and you select one of those at random, but you know what? This is pretty dang good for what you want.
Here is test code:
static void Main(string[] args)
{
while (true)
{
Console.Write("Enter a count: ");
string s = Console.ReadLine();
int count;
if (Int32.TryParse(s, out count))
{
MaximalLFSR lfsr = new MaximalLFSR();
foreach (int i in lfsr.RandomIndexes(count))
{
Console.Write(i + ", ");
}
}
Console.WriteLine("Done.");
}
}
Be aware that maximal LFSRs never generate 0. I've hacked around this by returning the i term - 1. This works well enough. Also, since you want to guarantee uniqueness, I ignore anything out of range - the LFSR only generates sequences up to powers of two, so in high ranges it will generate, worst case, 2x-1 too many values. These will get skipped - that will still be faster than Fisher-Yates/Knuth.
Personally, for a music player, I wouldn't generate a shuffled list, and then play that, then generate another shuffled list when that runs out, but do something more like:
Random random = new Random(); // the original snippet used 'random' without declaring it
IEnumerable<Song> GetSongOrder(List<Song> allSongs)
{
var playOrder = new List<Song>();
while (true)
{
// this step assigns an integer weight to each song,
// corresponding to how likely it is to be played next:
// recently played songs get the low weight (1), everything else the high weight (50).
// in a better implementation, this would look at the total number of
// songs as well, and provide a smoother ramp up/down.
var weights = allSongs.Select(x => playOrder.LastIndexOf(x) > playOrder.Count - 10 ? 1 : 50).ToList();
int position = random.Next(weights.Sum());
foreach (int i in Enumerable.Range(0, allSongs.Count))
{
position -= weights[i];
if (position < 0)
{
var song = allSongs[i];
playOrder.Add(song);
yield return song;
break;
}
}
// trim playOrder to prevent unbounded memory growth here as well.
if (playOrder.Count > allSongs.Count * 10)
playOrder = playOrder.Skip(allSongs.Count * 8).ToList();
}
}
This would make songs get picked in a random order, as long as they haven't been recently played. This provides "smoother" transitions from the end of one shuffle to the next, because the first song of the next shuffle could be the same song as the last one of the previous shuffle with 1/(total songs) probability, whereas this algorithm has a lower (and configurable) chance of hearing one of the last x songs again.
Unless you shuffle the original song list (which you said you don't want to do), you are going to have to allocate some additional memory to accomplish what you're after.
If you generate the random permutation of song indices beforehand (as you are doing), you obviously have to allocate some non-trivial amount of memory to store it, either encoded or as a list.
If the user doesn't need to be able to see the list, you could generate the random song order on the fly: After each song, pick another random song from the pool of unplayed songs. You still have to keep track of which songs have already been played, but you can use a bitfield for that. If you have 10000 songs, you just need 10000 bits (1250 bytes), each one representing whether the song has been played yet.
I don't know your exact limitations, but I have to wonder if the memory required to store a playlist is significant compared to the amount required for playing audio.
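A rough sketch of that on-the-fly approach, tracking the played flags in a System.Collections.BitArray (hypothetical helper, not from the answer itself):
static int NextSongIndex(BitArray played, ref int remaining, Random random)
{
    if (remaining == 0)                    // everything has been played: start a new pass
    {
        played.SetAll(false);
        remaining = played.Count;
    }
    int skip = random.Next(remaining);     // pick the k-th unplayed song uniformly
    for (int i = 0; i < played.Count; i++)
    {
        if (played[i]) continue;
        if (skip-- == 0)
        {
            played[i] = true;
            remaining--;
            return i;
        }
    }
    throw new InvalidOperationException(); // unreachable if 'remaining' is kept accurate
}
Each pick is O(n) in the number of songs, which is negligible next to actually playing a track.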
There are a number of methods of generating permutations without needing to store the state. See this question.
I think you should stick to your current solution (the one in your edit).
To do a re-ordering with no repetitions, without making your code behave unreliably, you have to track what you have already used, either directly by keeping the unused indexes or indirectly by swapping within the original list.
I suggest checking it in the context of the working application, i.e. whether its memory use is of any significance compared to the memory used by other pieces of the system.
From a logical standpoint, it is possible. Given a list of n songs, there are n! permutations; if you assign each permutation a number from 1 to n! (or 0 to n!-1 :-D) and pick one of those numbers at random, you can then store the number of the permutation that you are currently using, along with the original list and the index of the current song within the permutation.
For example, if you have a list of songs {1, 2, 3}, your permutations are:
0: {1, 2, 3}
1: {1, 3, 2}
2: {2, 1, 3}
3: {2, 3, 1}
4: {3, 1, 2}
5: {3, 2, 1}
So the only data I need to track is the original list ({1, 2, 3}), the current song index (e.g. 1) and the index of the permutation (e.g. 3). Then, if I want to find the next song to play, I know it's the third (index 2, zero-based) song of permutation 3, i.e. Song 1.
However, this method relies on you having an efficient means of determining the ith song of the jth permutation, which, until I've had a chance to think (or someone with a stronger mathematical background than mine can interject), is equivalent to "then a miracle happens". But the principle is there.
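For what it's worth, that step does exist: the factorial number system (Lehmer code) lets you decode a permutation number digit by digit. A sketch (hypothetical helper; 0-based permutation numbering, matching the lexicographic table above):
static int SongAt(IList<int> songs, long permIndex, int pos)
{
    var remaining = new List<int>(songs);                  // songs not yet placed
    long fact = 1;
    for (int i = 2; i < remaining.Count; i++) fact *= i;   // (n-1)!
    for (int slot = 0; ; slot++)
    {
        int pick = (int)(permIndex / fact);                // which remaining song fills this slot
        if (slot == pos) return remaining[pick];
        remaining.RemoveAt(pick);
        permIndex %= fact;
        if (remaining.Count > 1) fact /= remaining.Count;
    }
}
For the example above, SongAt(new[] { 1, 2, 3 }, 3, 2) returns 1, matching "the third song of permutation 3 is Song 1".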
If memory really were a concern after a certain number of records, and it's safe to say that once that boundary is reached there are enough items in the list that some repeats don't matter (just as long as the same song is not repeated back to back), I would use a combination method.
Case 1: If count < max memory constraint, generate the playlist ahead of time and use Knuth shuffle (see Jon Skeet's implementation, mentioned in other answers).
Case 2: If count >= max memory constraint, the song to be played will be determined at run time (I'd do it as soon as the song starts playing so the next song is already generated by the time the current song ends). Save the last [max memory constraint, or some token value] number of songs played, generate a random number (R) between 1 and song count, and if R = one of X last songs played, generate a new R until it is not in the list. Play that song.
Your max memory constraints will always be upheld, although performance can suffer in case 2 if you've played a lot of songs/get repeat random numbers frequently by chance.
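A rough sketch of case 2 (hypothetical names; it assumes the number of remembered songs is smaller than the song count, otherwise the rejection loop could never finish):
static int PickNext(int songCount, Queue<int> recentlyPlayed, int maxRemembered, Random random)
{
    int candidate;
    do
    {
        candidate = random.Next(songCount);           // R between 0 and songCount - 1
    } while (recentlyPlayed.Contains(candidate));     // reject recently played songs
    recentlyPlayed.Enqueue(candidate);
    if (recentlyPlayed.Count > maxRemembered)
        recentlyPlayed.Dequeue();                     // forget the oldest remembered song
    return candidate;
}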
You could use a trick we use in SQL Server to order sets randomly: order by a GUID. The values are distributed fairly evenly at random.
private IEnumerable<int> RandomIndexes(int startIndexInclusive, int endIndexInclusive)
{
if (endIndexInclusive < startIndexInclusive)
throw new Exception("endIndex must be equal or higher than startIndex");
List<int> originalList = new List<int>(endIndexInclusive - startIndexInclusive);
for (int i = startIndexInclusive; i <= endIndexInclusive; i++)
originalList.Add(i);
return from i in originalList
orderby Guid.NewGuid()
select i;
}
You're going to have to allocate some memory, but it doesn't have to be a lot. You can reduce the memory footprint (the degree by which I'm unsure, as I don't know that much about the guts of C#) by using a bool array instead of int. Best case scenario this will only use (count / 8) bytes of memory, which isn't too bad (but I doubt C# actually represents bools as single bits).
public static IEnumerable<int> RandomIndexes(int count) {
Random rand = new Random();
bool[] used = new bool[count];
int i;
for (int counter = 0; counter < count; counter++) {
while (used[i = rand.Next(count)]); //i = some random unused value
used[i] = true;
yield return i;
}
}
Hope that helps!
As many others have said you should implement THEN optimize, and only optimize the parts that need it (which you check on with a profiler). I offer a (hopefully) elegant method of getting the list you need, which doesn't really care so much about performance:
using System;
using System.Collections.Generic;
using System.Linq;
namespace Test
{
class Program
{
static void Main(string[] a)
{
Random random = new Random();
List<int> list1 = new List<int>(); //source list
List<int> list2 = new List<int>();
list2 = random.SequenceWhile((i) =>
{
if (list2.Contains(i))
{
return true; // already picked: skip it
}
list2.Add(i);
return false; // new value: keep it
},
() => list2.Count < list1.Count,
list1.Count).ToList();
}
}
public static class RandomExtensions
{
public static IEnumerable<int> SequenceWhile(
this Random random,
Func<int, bool> shouldSkip,
Func<bool> continuationCondition,
int maxValue)
{
int current = random.Next(maxValue);
while (continuationCondition())
{
if (!shouldSkip(current))
{
yield return current;
}
current = random.Next(maxValue);
}
}
}
}
It is pretty much impossible to do it without allocating extra memory. If you're worried about the amount of extra memory allocated, you could always pick a random subset and shuffle between those. You'll get repeats before every song is played, but with a sufficiently large subset I'll warrant few people will notice.
const int MaxItemsToShuffle = 20;
public static IEnumerable<int> RandomIndexes(int count)
{
Random random = new Random();
int indexCount = Math.Min(count, MaxItemsToShuffle);
int[] indexes = new int[indexCount];
if (count > MaxItemsToShuffle)
{
int cur = 0, subsetCount = MaxItemsToShuffle;
for (int i = 0; i < count; i += 1)
{
if (random.NextDouble() <= ((float)subsetCount / (float)(count - i))) //probability = slots still needed / items remaining
{
indexes[cur] = i;
cur += 1;
subsetCount -= 1;
}
}
}
else
{
for (int i = 0; i < count; i += 1)
{
indexes[i] = i;
}
}
for (int i = indexCount; i > 0; i -= 1)
{
int curIndex = random.Next(0, i);
yield return indexes[curIndex];
indexes[curIndex] = indexes[i - 1];
}
}