Why is Queue consuming so much memory? - c#

Basically I was doing a code kata on the Codewars site to warm up before starting to code, and noticed a problem that I don't know if it's caused by my code or is just a regular thing.
public static string WhoIsNext(string[] names, long n)
{
    Queue<string> fifo = new Queue<string>(names);
    for (int i = 0; i < n - 1; i++)
    {
        var name = fifo.Dequeue();
        fifo.Enqueue(name);
        fifo.Enqueue(name);
    }
    return fifo.Peek();
}
And it is called like this:
// Test 1
string[] names = { "Sheldon", "Leonard", "Penny", "Rajesh", "Howard" };
long n = 1;
var nth = CodeKata.WhoIsNext(names, n); // n = 1 should return Sheldon.

// Test 2
string[] names = { "Sheldon", "Leonard", "Penny", "Rajesh", "Howard" };
long n = 52;
var nth = CodeKata.WhoIsNext(names, n); // n = 52 should return Penny.

// Test 3
string[] names = { "Sheldon", "Leonard", "Penny", "Rajesh", "Howard" };
long n = 7230702951;
var nth = CodeKata.WhoIsNext(names, n); // n = 7230702951 should return Leonard.
In this code, when I pass 7230702951 (a really high number) as n, it throws an OutOfMemoryException. Is the number just that high, or is the queue simply not optimized for such numbers?
I say this because I tried using a List and the list's memory usage stayed under 500 MB (the plateau was around 327 MB, by the way) while running for about 2-3 minutes, whereas the queue threw the exception in a matter of seconds and went over 2 GB in just that time alone.
Can someone explain to me why this happens? I'm just curious.
edit 1
I forgot to add the List code:
public static string WhoIsNext(string[] names, long n)
{
    List<string> test = new List<string>(names);
    for (int i = 0; i < n - 1; i++)
    {
        var name = test[0];
        test.RemoveAt(0);
        test.Add(name);
        test.Add(name);
    }
    return test[0];
}
edit 2
For those saying that the code doubles the names and is inefficient: I already know that. The code isn't made to be useful, it's just a kata. (I updated the link now!)
My question is why Queue is so much more inefficient than List at high counts.

Part of the reason is that the queue code is way faster than the List code, because queues are optimised for removals at the front: they are backed by a circular buffer. Lists aren't - a List copies the remaining array contents every time you remove the first element.
Change the input value to 72307000, for example. On my machine, the queue finishes that in less than a second. The list is still chugging away minutes (and, at this rate, hours) later. After 4 minutes, i is at 752408 - it has done almost 1% of the work.
Thus, I am not sure the queue is less memory efficient. It is just so fast that you run into the memory issue sooner. The list almost certainly has the same issue (the way that List and Queue do array size doubling is very similar) - it will just likely take days to run into it.
To a certain extent, you could predict this even without running your code. A queue with 7230702951 entries in it (running 64-bit) will take a minimum of 8 bytes per entry. So 57845623608 bytes. Which is larger than 50GB. Clearly your machine is going to struggle to fit that in RAM (plus .NET won't let you have an array that large)...
Additionally, your code has a subtle bug: the loop can never end if n is greater than int.MaxValue. Your loop variable is an int but the parameter is a long, so the int will overflow (wrapping from int.MaxValue to int.MinValue on i++) and the loop will never exit for large values of n, meaning the queue grows forever. You should change the type of i to long.
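A minimal sketch of that fix (everything else from the kata code left unchanged): widen the loop counter to long so it covers the full range of n.

public static string WhoIsNext(string[] names, long n)
{
    Queue<string> fifo = new Queue<string>(names);
    // Use a long counter so it cannot overflow when n > int.MaxValue.
    for (long i = 0; i < n - 1; i++)
    {
        var name = fifo.Dequeue();
        fifo.Enqueue(name);
        fifo.Enqueue(name);
    }
    return fifo.Peek();
}

Note that with n = 7230702951 this will still run out of memory eventually (the queue keeps doubling its backing array as it grows), but the loop itself can now terminate.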

Related

How do you do this in C# without using List?

I am new to C#. The following code was a solution I came up with to solve a challenge. I am unsure how to do this without using List, since my understanding is that you can't push to an array in C# because arrays are of fixed size.
Is my understanding of what I said so far correct?
Is there a way to do this that doesn't involve creating a new array every time I need to add to an array? If there is no other way, how would I create a new array when the size of the array is unknown before my loop begins?
Return a sorted array of all non-negative numbers less than the given n which are divisible both by 3 and 4. For n = 30, the output should be
threeAndFour(n) = [0, 12, 24].
int[] threeAndFour(int n) {
    List<int> l = new List<int>(){ 0 };
    for (int i = 12; i < n; ++i)
        if (i % 12 == 0)
            l.Add(i);
    return l.ToArray();
}
EDIT: I have since refactored this code to be..
int[] threeAndFour(int n) {
    List<int> l = new List<int>(){ 0 };
    for (int i = 12; i < n; i += 12)
        l.Add(i);
    return l.ToArray();
}
A. A List is OK
If you want to use a for to find out the numbers, then List is the appropriate data structure for collecting the numbers as you discover them.
B. Use more maths
static int[] threeAndFour(int n) {
    // One slot per multiple of 12 below n: 0, 12, 24, ...
    var a = new int[(n + 11) / 12];
    for (int i = 12; i < n; i += 12) a[i / 12] = i;
    return a;
}
C. Generator pattern with IEnumerable<int>
I know that this doesn't return an array, but it does avoid a list.
static IEnumerable<int> threeAndFour(int n) {
    yield return 0;
    for (int i = 12; i < n; i += 12)
        yield return i;
}
D. Twist and turn to avoid a list
The code could loop twice: first to figure out the size of the array, and then to fill it.
int[] threeAndFour(int n) {
    // Version: A list is really undesirable, arrays are great.
    int size = 1;
    for (int i = 12; i < n; i += 12)
        size++;
    var a = new int[size];
    a[0] = 0;
    int counter = 1;
    for (int i = 12; i < n; i += 12) a[counter++] = i;
    return a;
}
if (i % 12 == 0)
So you have figured out that the numbers divisible by both 3 and 4 are precisely the numbers divisible by 12.
Can you figure out how many such numbers there are below a given n? Can you do so without counting them one by one? If so, there is no need for a dynamically growing container - you can just initialize the container to the correct size.
Once you have your array just keep track of the next index to fill.
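A minimal sketch of that idea (the ceiling-division sizing is my own choice here, not part of the answer): compute the count up front, allocate the array once, and fill it.

static int[] ThreeAndFour(int n) {
    // Multiples of 12 in [0, n): there are ceil(n / 12) of them.
    int count = (n + 11) / 12;
    var result = new int[count];
    int next = 0;                     // next index to fill
    for (int i = 0; i < n; i += 12)
        result[next++] = i;
    return result;
}

For n = 30 this allocates exactly 3 slots and fills them with 0, 12 and 24.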
You could use LINQ and the Enumerable.Range method for this purpose. For example,
int[] threeAndFour(int n)
{
    return Enumerable.Range(0, n).Where(x => x % 12 == 0).ToArray();
}
Enumerable.Range generates a sequence of integral numbers within a specified range, which is then filtered on the condition (x%12==0) to retrieve the desired result.
Since you know this goes in steps of 12 and you know how many there are before you start, you can do:
Enumerable.Range(0, (n + 11) / 12).Select(x => x * 12).ToArray();
I am unsure how to do this without using List since my understanding is that you can't push to an array in C# since they are of fixed size.
It is correct that arrays cannot grow. List was invented as a wrapper around an array that automatically grows whenever needed. Note that you can give List an integer via the constructor, which tells it the minimum capacity it should expect; it will allocate at least that much up front, which can limit growth-related overhead.
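For example (a minimal sketch of the capacity constructor mentioned above, using the n / 12 + 1 size estimate suggested later in this answer):

static int[] ThreeAndFourPreSized(int n)
{
    // Pre-size the backing array so the List never has to grow while filling.
    var list = new List<int>(n / 12 + 1);
    for (int i = 0; i < n; i += 12)
        list.Add(i);
    return list.ToArray();
}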
Dictionaries are just a variation of the List mechanics, with hash-table key lookup speed.
There is only one other collection I know of that can grow. However, it is rarely mentioned outside of theory and some very specific cases:
Linked lists. The linked list has unbeatable growth performance and the lowest risk of running into OutOfMemoryExceptions due to fragmentation. Unfortunately, its random access times are the worst as a result. Unless you can process such collections strictly sequentially from the start (or sometimes the end), their performance will be abysmal. Only stacks and queues are likely to use them. There is, however, an implementation you could use in .NET: https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.linkedlist-1
Your code holds some potential too:
for (int i = 12; i < n; ++i)
    if (i % 12 == 0)
        l.Add(i);
It would be way more effective to count up by 12 every iteration - you are only interested in every 12th number, after all. You may have to change the loop, but I think a do...while would do. Also, the array/minimum List size is easily predicted: just divide n by 12 and add 1. But I assume that is mostly mock-up code and it is not actually that deterministic.
List generally works pretty well. As I understand your question, you have challenged yourself to solve a problem without using the List class. An array (or List) uses a contiguous block of memory to store elements. Arrays are of fixed size; a List will dynamically expand to accept new elements but still keeps everything in a single block of memory.
You can use a linked list (https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.linkedlist-1?view=netframework-4.8) to simulate a growing array. A linked list allocates additional memory for each element (node) that is used to point to the next (and possibly the previous) node. This allows you to add elements without large block allocations, but you pay a space cost (increased memory use) for each element added. The other problem with linked lists is that you can't quickly access random elements: to get to element 5, you have to go through elements 0 through 4. There's a reason arrays and array-like structures are favored for many tasks, but it's always interesting to try to do common things in a different way.
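A minimal sketch of that approach (my own illustration, not this answer's code): build the result in a LinkedList<int> and copy it into a fixed-size array at the end.

static int[] ThreeAndFourLinked(int n)
{
    var nodes = new LinkedList<int>();
    for (int i = 0; i < n; i += 12)
        nodes.AddLast(i);             // O(1) append, no array reallocation
    var result = new int[nodes.Count];
    nodes.CopyTo(result, 0);          // one final copy into a fixed-size array
    return result;
}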

Subset Sum algorithm efficiency

We have a number of payments (Transaction) that come into our business each day. Each Transaction has an ID and an Amount. We have the requirement to match a number of these transactions to a specific amount. Example:
Transaction    Amount
1              100
2              200
3              300
4              400
5              500
If we wanted to find the transactions that add up to 600 you would have a number of sets (1,2,3),(2,4),(1,5).
I found an algorithm that I have adapted, which works as defined below. For 30 transactions it takes 15 ms, but the number of transactions averages around 740 and can reach close to 6000. Is there a more efficient way to perform this search?
sum_up(TransactionList, remittanceValue, ref MatchedLists);

private static void sum_up(List<Transaction> transactions, decimal target, ref List<List<Transaction>> matchedLists)
{
    sum_up_recursive(transactions, target, new List<Transaction>(), ref matchedLists);
}

private static void sum_up_recursive(List<Transaction> transactions, decimal target, List<Transaction> partial, ref List<List<Transaction>> matchedLists)
{
    decimal s = 0;
    foreach (Transaction x in partial) s += x.Amount;
    if (s == target)
    {
        matchedLists.Add(partial);
    }
    if (s > target)
        return;
    for (int i = 0; i < transactions.Count; i++)
    {
        List<Transaction> remaining = new List<Transaction>();
        Transaction n = new Transaction(0, transactions[i].ID, transactions[i].Amount);
        for (int j = i + 1; j < transactions.Count; j++) remaining.Add(transactions[j]);
        List<Transaction> partial_rec = new List<Transaction>(partial);
        partial_rec.Add(new Transaction(n.MatchNumber, n.ID, n.Amount));
        sum_up_recursive(remaining, target, partial_rec, ref matchedLists);
    }
}
With Transaction defined as:
class Transaction
{
    public int ID;
    public decimal Amount;
    public int MatchNumber;

    public Transaction(int matchNumber, int id, decimal amount)
    {
        ID = id;
        Amount = amount;
        MatchNumber = matchNumber;
    }
}
As already mentioned, your problem can be solved by a pseudo-polynomial algorithm in O(n*G), with n the number of items and G your targeted sum.
The first part of the question: is it possible to achieve the targeted sum G at all? The following pseudo/Python code solves it (I have no C# on my machine):
def subsum(values, target):
    reached = [False] * (target + 1)  # initialize as no sums reached at all
    reached[0] = True                 # with 0 elements we can only achieve the sum 0
    for val in values:
        for s in reversed(range(target + 1)):    # for target, target-1, ..., 0
            if reached[s] and s + val <= target: # if subsum s can be reached, we can add the current value to it and build a new sum
                reached[s + val] = True
    return reached[target]
What is the idea? Let's consider values [1,2,3,6] and target sum 7:
We start with an empty set - the possible sum is obviously 0.
Now we look at the first element 1 and have two options: take it or not. That leaves us with possible sums {0,1}.
Now looking at the next element 2: this leads to possible sets {0,1} (not taking) + {2,3} (taking).
Until now there is not much difference to your approach, but now for element 3 we have possible sets a. for not taking {0,1,2,3} and b. for taking {3,4,5,6}, resulting in {0,1,2,3,4,5,6} as possible sums. The difference to your approach is that there are two ways to get to 3, and your recursion would be started twice from there (which is not needed). Calculating basically the same stuff over and over again is the problem with your approach and why the proposed algorithm is better.
As a last step we consider 6 and get {0,1,2,3,4,5,6,7} as possible sums.
But you also need the subset which leads to the targeted sum; for this we just remember which element was taken to achieve the current sub sum. This version returns a subset which results in the target sum, or None otherwise:
def subsum(values, target):
    reached = [False] * (target + 1)
    val_ids = [-1] * (target + 1)
    reached[0] = True  # with 0 elements we can only achieve the sum 0
    for (val_id, val) in enumerate(values):
        for s in reversed(range(target + 1)):  # for target, target-1, ..., 0
            if reached[s] and s + val <= target:
                reached[s + val] = True
                val_ids[s + val] = val_id

    # reconstruct the subset for target:
    if not reached[target]:
        return None  # means not possible
    else:
        result = []
        current = target
        while current != 0:  # search backwards, jumping from predecessor to predecessor
            val_id = val_ids[current]
            result.append(val_id)
            current -= values[val_id]
        return result
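A hedged C# translation of the same reachability-plus-reconstruction idea, as a sketch (it assumes integer amounts; decimal transaction amounts would first have to be scaled to whole cents):

// Returns the indices of one subset of `values` summing to `target`, or null if none exists.
static List<int> SubsetSumIndices(int[] values, int target)
{
    var reached = new bool[target + 1];
    var valIds = new int[target + 1];
    for (int s = 0; s <= target; s++) valIds[s] = -1;
    reached[0] = true;                    // the empty subset reaches sum 0

    for (int valId = 0; valId < values.Length; valId++)
    {
        int val = values[valId];
        for (int s = target; s >= 0; s--) // iterate downwards so each item is used at most once
        {
            if (reached[s] && s + val <= target && !reached[s + val])
            {
                reached[s + val] = true;  // keep the first item that reached this sum
                valIds[s + val] = valId;
            }
        }
    }

    if (!reached[target]) return null;

    var result = new List<int>();
    int current = target;
    while (current != 0)                  // walk back from predecessor to predecessor
    {
        int valId = valIds[current];
        result.Add(valId);
        current -= values[valId];
    }
    return result;
}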
As another approach, you could use memoization to speed up your current solution, remembering for each state (subsum, number_of_elements_not_considered) whether it is possible to achieve the target sum. But I would say standard dynamic programming is a less error-prone option here.
Yes.
I can't provide full code at the moment, but instead of iterating over the lists of transactions repeatedly until you find matches (quadratic behaviour), try this concept:
Set up a hashtable with the existing transaction amounts as entries, as well as the sum of each pair of transactions, assuming each target value is made up of at most two transactions (weekend credit card processing).
For each total, look it up in the hashtable - the sets of transactions in that slot are the matching transactions.
Instead of O(n^2) searching, you can get it down to roughly 4*O(n), which would make a noticeable difference in speed.
Good luck!
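A rough sketch of the pair-sum index described above (my own illustration; the answer did not include code). It precomputes every single amount and every two-transaction sum once, so a target can then be answered with a dictionary lookup:

// Maps an amount to the ID sets (of size 1 or 2) of transactions that produce it.
static Dictionary<decimal, List<int[]>> BuildPairSumIndex(List<Transaction> transactions)
{
    var index = new Dictionary<decimal, List<int[]>>();
    void Register(decimal sum, int[] ids)
    {
        if (!index.TryGetValue(sum, out var sets)) index[sum] = sets = new List<int[]>();
        sets.Add(ids);
    }
    for (int i = 0; i < transactions.Count; i++)
    {
        Register(transactions[i].Amount, new[] { transactions[i].ID });
        for (int j = i + 1; j < transactions.Count; j++)
            Register(transactions[i].Amount + transactions[j].Amount,
                     new[] { transactions[i].ID, transactions[j].ID });
    }
    return index; // index[target] lists the matching ID sets, if any
}

Note that this only finds matches made of one or two transactions, as the answer assumes.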
Dynamic programming can solve this problem efficiently:
Assume you have n transactions and the target amount is m.
We can solve it with a complexity of just O(n*m).
Look up the Knapsack problem.
For this problem, define dp[i][sum] as the number of subsets of the first i transactions that add up to sum.
The recurrence:
for i = 1 to n:
    dp[i][sum] = dp[i - 1][sum] + dp[i - 1][sum - amount_i]
dp[n][sum] is the number you need, and you need to add some bookkeeping tricks to recover what the actual subsets are.
You have a couple of practical assumptions here that would make brute force with smartish branch pruning feasible:
items are unique, hence you wouldn't get a combinatorial blow-up of valid subsets (e.g. the many ways of picking items from (1,1,1,1,1,1,1,1,1,1,1,1,1) that add up to 3)
if the number of resulting feasible sets is still huge, you would run out of memory collecting them before running into total runtime issues.
ordering the input ascending would allow for an easy early-stop check - if the remaining sum is smaller than the current element, then none of the yet unexamined items can possibly be in a result (as the current and subsequent items only get bigger)
keeping running sums would speed up each step, as you wouldn't be recalculating them over and over again
Here's a bit of code:
public static List<T[]> SubsetSums<T>(T[] items, int target, Func<T, int> amountGetter)
{
    Stack<T> unusedItems = new Stack<T>(items.OrderByDescending(amountGetter));
    Stack<T> usedItems = new Stack<T>();
    List<T[]> results = new List<T[]>();
    SubsetSumsRec(unusedItems, usedItems, target, results, amountGetter);
    return results;
}

public static void SubsetSumsRec<T>(Stack<T> unusedItems, Stack<T> usedItems, int targetSum, List<T[]> results, Func<T, int> amountGetter)
{
    if (targetSum == 0)
        results.Add(usedItems.ToArray());
    if (targetSum < 0 || unusedItems.Count == 0)
        return;
    var item = unusedItems.Pop();
    int currentAmount = amountGetter(item);
    if (targetSum >= currentAmount)
    {
        // case 1: use current element
        usedItems.Push(item);
        SubsetSumsRec(unusedItems, usedItems, targetSum - currentAmount, results, amountGetter);
        usedItems.Pop();
        // case 2: skip current element
        SubsetSumsRec(unusedItems, usedItems, targetSum, results, amountGetter);
    }
    unusedItems.Push(item);
}
I've run it against 100k input that yields around 1k results in under 25 millis, so it should be able to handle your 740 case with ease.

How to best implement K-nearest neighbours in C# for large number of dimensions?

I'm implementing the K-nearest neighbours classification algorithm in C# for a training and testing set of about 20,000 samples each, and 25 dimensions.
There are only two classes, represented by '0' and '1' in my implementation. For now, I have the following simple implementation :
// testSamples and trainSamples consist of about 20k vectors each, with 25 dimensions
// trainClasses contains 0 or 1, signifying the corresponding class for each sample in trainSamples
static int[] TestKnnCase(IList<double[]> trainSamples, IList<double[]> testSamples, IList<int> trainClasses, int K)
{
    Console.WriteLine("Performing KNN with K = " + K);
    var testResults = new int[testSamples.Count()];
    var testNumber = testSamples.Count();
    var trainNumber = trainSamples.Count();
    // Declaring these here so that I don't have to 'new' them over and over again in the main loop,
    // just to save some overhead
    var distances = new double[trainNumber][];
    for (var i = 0; i < trainNumber; i++)
    {
        distances[i] = new double[2]; // Will store both distance and index in here
    }

    // Performing KNN ...
    for (var tst = 0; tst < testNumber; tst++)
    {
        // For every test sample, calculate distance from every training sample
        Parallel.For(0, trainNumber, trn =>
        {
            var dist = GetDistance(testSamples[tst], trainSamples[trn]);
            // Storing distance as well as index
            distances[trn][0] = dist;
            distances[trn][1] = trn;
        });

        // Sort distances and take top K (?What happens in case of multiple points at the same distance?)
        var votingDistances = distances.AsParallel().OrderBy(t => t[0]).Take(K);

        // Do a 'majority vote' to classify test sample
        var yea = 0.0;
        var nay = 0.0;
        foreach (var voter in votingDistances)
        {
            if (trainClasses[(int)voter[1]] == 1)
                yea++;
            else
                nay++;
        }
        if (yea > nay)
            testResults[tst] = 1;
        else
            testResults[tst] = 0;
    }
    return testResults;
}
// Calculates and returns square of Euclidean distance between two vectors
static double GetDistance(IList<double> sample1, IList<double> sample2)
{
    var distance = 0.0;
    // assume sample1 and sample2 are valid i.e. same length
    for (var i = 0; i < sample1.Count; i++)
    {
        var temp = sample1[i] - sample2[i];
        distance += temp * temp;
    }
    return distance;
}
This takes quite a bit of time to execute: on my system it takes about 80 seconds to complete. How can I optimize this, while ensuring that it will also scale to larger numbers of data samples? As you can see, I've tried using PLINQ and parallel for loops, which did help (without them it was taking about 120 seconds). What else can I do?
I've read about KD-trees being efficient for KNN in general, but every source I read stated that they're not efficient for higher dimensions.
I also found this stackoverflow discussion about this, but it seems like this is 3 years old, and I was hoping that someone would know about better solutions to this problem by now.
I've looked at machine learning libraries in C#, but for various reasons I don't want to call R or C code from my C# program, and some other libraries I saw were no more efficient than the code I've written. Now I'm just trying to figure out how I could write the most optimized code for this myself.
Edited to add - I cannot reduce the number of dimensions using PCA or something. For this particular model, 25 dimensions are required.
Whenever you are attempting to improve the performance of code, the first step is to analyze the current performance to see exactly where it is spending its time. A good profiler is crucial for this. In my previous job I was able to use the dotTrace profiler to good effect; Visual Studio also has a built-in profiler. A good profiler will tell you exactly where your code is spending time, method-by-method or even line-by-line.
That being said, a few things come to mind in reading your implementation:
You are parallelizing some inner loops. Could you parallelize the outer loop instead? There is a small but nonzero cost associated with a delegate call (see here or here) which may be hitting you in the Parallel.For callback.
Similarly, there is a small performance penalty for indexing through an array via its IList interface. You might consider declaring the array arguments to GetDistance() explicitly as double[].
How large is K compared to the size of the training array? You are completely sorting the distances array and taking the top K, but if K is much smaller than the array size it might make sense to use a partial sort / selection algorithm, for instance by using a SortedSet and evicting the largest element when the set size exceeds K.
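A minimal sketch of that selection idea (my own illustration, not part of the original answer): keep at most K candidates in a SortedSet ordered by distance, and evict the farthest one whenever the set grows past K.

// Keeps the K nearest (distance, index) pairs out of a stream of candidates.
static IEnumerable<double[]> TopKNearest(IEnumerable<double[]> candidates, int K)
{
    // Order by distance, break ties by index so distinct samples are never treated as equal.
    var comparer = Comparer<double[]>.Create((a, b) =>
    {
        int byDistance = a[0].CompareTo(b[0]);
        return byDistance != 0 ? byDistance : a[1].CompareTo(b[1]);
    });
    var nearest = new SortedSet<double[]>(comparer);
    foreach (var candidate in candidates)
    {
        nearest.Add(candidate);
        if (nearest.Count > K)
            nearest.Remove(nearest.Max); // evict the farthest of the K+1 candidates
    }
    return nearest;                      // at most K elements, closest first
}

In the question's code, votingDistances could then be TopKNearest(distances, K) instead of the full OrderBy(...).Take(K).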

Parallel execution with StackExchange.Redis?

I have 1M items stored in a List<Person>, which I'm serializing in order to insert into Redis (2.8).
I divide the work among 10 Tasks, where each takes its own section (List<> is thread-safe for read-only access: it is safe to perform multiple read operations on a List).
Simplified example:
For ITEMS=100 and THREADS=10, each Task will capture its own PAGE and deal with the relevant range. For example:
void Main()
{
    var ITEMS = 100;
    var THREADS = 10;
    var PAGE = 4;
    List<int> lst = Enumerable.Range(0, ITEMS).ToList();
    for (int i = 0; i < ITEMS / THREADS; i++)
    {
        lst[PAGE * (ITEMS / THREADS) + i].Dump();
    }
}
PAGE=0 will deal with : 0,1,2,3,4,5,6,7,8,9
PAGE=4 will deal with : 40,41,42,43,44,45,46,47,48,49
All ok.
Now back to SE.redis.
I wanted to implement this pattern, and so I did (with ITEMS = 1,000,000).
My testing (screenshot of DBSIZE checked each second omitted):
As you can see, 1M records were added via 10 threads.
Now, I don't know if that's fast, but when I change ITEMS from 1M to 10M, things get really slow and I get an exception.
The exception is thrown in the for loop:
Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.TimeoutException: Timeout performing SET urn:user>288257, inst: 1, queue: 11, qu=0, qs=11, qc=0, wr=0/0, in=0/0
   at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in c:\TeamCity\buildAgent\work\58bc9a6df18a3782\StackExchange.Redis\StackExchange\Redis\ConnectionMultiplexer.cs:line 1722
   at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in c:\TeamCity\buildAgent\work\58bc9a6df18a3782\StackExchange.Redis\StackExchange\Redis\RedisBase.cs:line 79
   ...
Press any key to continue . . .
Questions:
Is my way of dividing the work the right (fastest) way?
How can I make this faster (sample code would be much appreciated)?
How can I resolve this exception?
Related info :
<gcAllowVeryLargeObjects enabled="true" /> is present in App.config (otherwise I get an OutOfMemoryException); the build targets x64. I have 16 GB of RAM, an SSD drive and an i7 CPU.
Currently, your code is using the synchronous API (StringSet), and is being loaded by 10 threads concurrently. This will present no appreciable challenge to SE.Redis - it works just fine here. I suspect that it genuinely is a timeout where the server has taken longer than you would like to process some of the data, most likely also related to the server's allocator. One option, then, is to simply increase the timeout a bit. Not a lot... try 5 seconds instead of the default 1 second. Likely, most of the operations are working very fast anyway.
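For instance (a minimal sketch; the connection string is illustrative and assumes a local Redis on the default port), the synchronous timeout can be raised via the syncTimeout configuration option:

// Raise the synchronous operation timeout to 5 seconds.
var muxer = ConnectionMultiplexer.Connect("localhost:6379,syncTimeout=5000");
var db = muxer.GetDatabase();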
With regards to speeding it up: one option here is to not wait - i.e. keep pipelining data. If you are content not to check every single message for an error state, then one simple way to do this is to add , flags: CommandFlags.FireAndForget to the end of your StringSet call. In my local testing, this sped up the 1M example by 25% (and I suspect a lot of the rest of the time is actually spent in string serialization).
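A sketch of what that change looks like on a single call (reusing the db handle from the snippet above; key and json are placeholders for the per-item key and serialized payload):

// Queue the write without waiting for (or checking) the server's reply.
db.StringSet(key, json, flags: CommandFlags.FireAndForget);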
The biggest problem I had with the 10M example was simply the overhead of working with the 10M example - especially since this takes huge amounts of memory for both the redis-server and the application, which (to emulate your setup) are on the same machine. This creates competing memory pressure, with GC pauses etc in the managed code. But perhaps more importantly: it simply takes forever to start doing anything. Consequently, I refactored the code to use parallel yield return generators rather than a single list. For example:
static IEnumerable<Person> InventPeople(int seed, int count)
{
    for (int i = 0; i < count; i++)
    {
        int f = 1 + seed + i;
        var item = new Person
        {
            Id = f,
            Name = Path.GetRandomFileName().Replace(".", "").Substring(0, appRandom.Value.Next(3, 6)) + " " + Path.GetRandomFileName().Replace(".", "").Substring(0, new Random(Guid.NewGuid().GetHashCode()).Next(3, 6)),
            Age = f % 90,
            Friends = ParallelEnumerable.Range(0, 100).Select(n => appRandom.Value.Next(1, f)).ToArray()
        };
        yield return item;
    }
}

static IEnumerable<T> Batchify<T>(this IEnumerable<T> source, int count)
{
    var list = new List<T>(count);
    foreach (var item in source)
    {
        list.Add(item);
        if (list.Count == count)
        {
            foreach (var x in list) yield return x;
            list.Clear();
        }
    }
    foreach (var item in list) yield return item;
}
with:
foreach (var element in InventPeople(PER_THREAD * counter1, PER_THREAD).Batchify(1000))
Here, the purpose of Batchify is to ensure that we aren't helping the server too much by taking appreciable time between each operation - the data is invented in batches of 1000 and each batch is made available very quickly.
I was also concerned about JSON performance, so I switched to JIL:
public static string ToJSON<T>(this T obj)
{
    return Jil.JSON.Serialize<T>(obj);
}
and then, just for fun, I moved the JSON work into the batching (so that the actual processing loop is):
foreach (var element in InventPeople(PER_THREAD * counter1, PER_THREAD)
    .Select(x => new { x.Id, Json = x.ToJSON() }).Batchify(1000))
This got the times down a bit more, so I can load 10M in 3 minutes 57 seconds, a rate of 42,194 rops. Most of this time is actually local processing inside the application. If I change it so that each thread loads the same item ITEMS / THREADS times, then this changes to 1 minute 48 seconds - a rate of 92,592 rops.
I'm not sure if I've answered anything really, but the short version might simply be: "try a longer timeout; consider using fire-and-forget".

How come this algorithm in Ruby runs faster than in Parallel'd C#?

The following Ruby code runs in ~15 s. It barely uses any CPU/memory (about 25% of one CPU):
def collatz(num)
  num.even? ? num/2 : 3*num + 1
end

start_time = Time.now
max_chain_count = 0
max_starter_num = 0
(1..1000000).each do |i|
  count = 0
  current = i
  current = collatz(current) and count += 1 until (current == 1)
  max_chain_count = count and max_starter_num = i if (count > max_chain_count)
end
puts "Max starter num: #{max_starter_num} -> chain of #{max_chain_count} elements. Found in: #{Time.now - start_time}s"
And the following TPL C# puts all my 4 cores to 100% usage and is orders of magnitude slower than the ruby version:
static void Euler14Test()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    int max_chain_count = 0;
    int max_starter_num = 0;
    object locker = new object();
    Parallel.For(1, 1000000, i =>
    {
        int count = 0;
        int current = i;
        while (current != 1)
        {
            current = collatz(current);
            count++;
        }
        if (count > max_chain_count)
        {
            lock (locker)
            {
                max_chain_count = count;
                max_starter_num = i;
            }
        }
        if (i % 1000 == 0)
            Console.WriteLine(i);
    });
    sw.Stop();
    Console.WriteLine("Max starter i: {0} -> chain of {1} elements. Found in: {2}s", max_starter_num, max_chain_count, sw.Elapsed.ToString());
}

static int collatz(int num)
{
    return num % 2 == 0 ? num / 2 : 3 * num + 1;
}
How come Ruby runs faster than C# here? I've been told that Ruby is slow. Is that not true when it comes to algorithms?
Perf AFTER correction:
Ruby (Non parallel): 14.62s
C# (Non parallel): 2.22s
C# (With TPL): 0.64s
Actually, the bug is quite subtle, and has nothing to do with threading. The reason that your C# version takes so long is that the intermediate values computed by the collatz method eventually start to overflow the int type, resulting in negative numbers which may then take ages to converge.
This first happens when i is 134,379, for which the 129th term (assuming one-based counting) is 2,482,111,348. This exceeds the maximum value of 2,147,483,647 and therefore gets stored as -1,812,855,948.
To get good performance (and correct results) on the C# version, just change:
int current = i;
…to:
long current = i;
…and:
static int collatz(int num)
…to:
static long collatz(long num)
That will bring down your performance to a respectable 1.5 seconds.
Edit: CodesInChaos raises a very valid point about enabling overflow checking when debugging math-oriented applications. Doing so would have allowed the bug to be immediately identified, since the runtime would throw an OverflowException.
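For example (a minimal sketch of that suggestion, not part of the original answer), wrapping the arithmetic in a checked block makes the overflow surface immediately instead of silently wrapping to a negative number:

static int collatz(int num)
{
    checked // throws OverflowException as soon as 3 * num + 1 exceeds int.MaxValue
    {
        return num % 2 == 0 ? num / 2 : 3 * num + 1;
    }
}

Project-wide, the same effect comes from enabling the "Check for arithmetic overflow/underflow" build option (CheckForOverflowUnderflow) for debug builds.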
Should be:
Parallel.For(1L, 1000000L, i =>
{
Otherwise, you get integer overflow and start checking negative values. The collatz method should likewise operate on long values.
I experienced something like that, and I figured out it was because each of your loop iterations needs to start another thread, and this takes some time - in this case comparable to (I think more than) the time of the operations you actually do in the loop body.
There is an alternative: you can find out how many CPU cores you have and then use a parallel loop with the same number of iterations as you have cores. Each of those iterations evaluates a part of the actual loop you want, which is done by making an inner for loop that depends on the parallel loop variable.
EDIT: EXAMPLE
int start = 1, end = 1000000;
Parallel.For(0, N_CORES, n =>
{
    int s = start + (end - start) * n / N_CORES;
    int e = n == N_CORES - 1 ? end : start + (end - start) * (n + 1) / N_CORES;
    for (int i = s; i < e; i++)
    {
        // Your code
    }
});
You should try this code, I'm pretty sure this will do the job faster.
EDIT: ELUCIDATION
Well, it has been quite a long time since I answered this question, but I faced the problem again and finally understood what was going on.
I had been using the AForge implementation of the parallel for loop, and it seems that it fires a thread for each iteration of the loop; that's why, if the loop body takes a relatively small amount of time to execute, you end up with inefficient parallelism.
So, as some of you pointed out, System.Threading.Tasks.Parallel methods are based on Tasks, which are kind of a higher level of abstraction of a Thread:
"Behind the scenes, tasks are queued to the ThreadPool, which has been enhanced with algorithms that determine and adjust to the number of threads and that provide load balancing to maximize throughput. This makes tasks relatively lightweight, and you can create many of them to enable fine-grained parallelism."
So yeah, if you use the default library's implementation, you won't need this kind of workaround.
