Understanding PLINQ bottleneck in tree search - C#

I'm seeing some strange results with PLINQ that I can't explain. I've been trying to parallelize an alpha-beta tree search to speed up the search process, but it is effectively slowing it down. I'd expect that as I raise the degree of parallelism, nodes per second would increase roughly linearly, at the cost of some additional nodes processed because pruning is deferred. While the node counts match that expectation, my times don't:
non-PLINQ:               nodes visited: 61418,  runtime: 0:00.67
degree of parallelism 1: nodes visited: 61418,  runtime: 0:01.48
degree of parallelism 2: nodes visited: 75504,  runtime: 0:10.08
degree of parallelism 4: nodes visited: 95664,  runtime: 1:51.98
degree of parallelism 8: nodes visited: 108148, runtime: 1:48.94
Can anyone help me identify the likely culprits?
Relevant code:
public int AlphaBeta(IPosition position, AlphaBetaCutoff parent, int depthleft)
{
    if (parent.Cutoff)
        return parent.Beta;
    if (depthleft == 0)
        return Quiesce(position, parent);
    var moves = position.Mover.GetMoves().ToList();
    if (!moves.Any())
        return position.Scorer.Score();
    //Young Brothers Wait Concept: search the first move serially...
    var first = ProcessScore(moves.First(), parent, depthleft);
    if (first >= parent.Beta)
    {
        parent.Cutoff = true;
        return parent.BestScore;
    }
    //...now parallelize the rest (this is the degree of parallelism I varied above)
    if (moves.Skip(1)
             .AsParallel()
             .WithDegreeOfParallelism(1)
             .WithMergeOptions(ParallelMergeOptions.NotBuffered)
             .Select(m => ProcessScore(m, parent, depthleft))
             .Any(score => parent.BestScore >= parent.Beta))
    {
        parent.Cutoff = true;
        return parent.BestScore;
    }
    return parent.BestScore;
}
private int ProcessScore(IMove move, AlphaBetaCutoff parent, int depthleft)
{
    var child = ABFactory.Create(parent);
    if (parent.Cutoff)
    {
        return parent.BestScore;
    }
    var score = -AlphaBeta(move.MakeMove(), child, depthleft - 1);
    parent.Alpha = score;
    parent.BestScore = score;
    if (score >= parent.Beta)
    {
        parent.Cutoff = true;
    }
    return score;
}
And then the data structure for sharing Alpha Beta parameters across levels of the tree...
public class AlphaBetaCutoff
{
    public AlphaBetaCutoff Parent { get; set; }

    private bool _cutoff;
    public bool Cutoff
    {
        get { return _cutoff || (Parent != null && Parent.Cutoff); }
        set { _cutoff = value; }
    }

    private readonly object _alphaLock = new object();
    private int _alpha = -10000;
    public int Alpha
    {
        get
        {
            if (Parent == null) return _alpha;
            return Math.Max(-Parent.Beta, _alpha);
        }
        set
        {
            lock (_alphaLock)
            {
                _alpha = Math.Max(_alpha, value);
            }
        }
    }

    private int _beta = 10000;
    public int Beta
    {
        get
        {
            if (Parent == null) return _beta;
            return -Parent.Alpha;
        }
        set { _beta = value; }
    }

    private readonly object _bestScoreLock = new object();
    private int _bestScore = -10000;
    public int BestScore
    {
        get { return _bestScore; }
        set
        {
            lock (_bestScoreLock)
            {
                _bestScore = Math.Max(_bestScore, value);
            }
        }
    }
}

When you do only very little work per node and set off new threads for all underlying nodes, you create a huge threading overhead. You are probably processing more nodes because of the Any: normally processing would stop at the first match, but some nodes have already started processing before Any found that match. Parallelism works better when you have a known set of large underlying workloads. You could try what happens if you only apply parallelism at your top-level node(s).
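A rough sketch of that suggestion (a hypothetical RootSearch method, reusing the question's ProcessScore and AlphaBetaCutoff shapes; untested): parallelize only the root's move list and let each subtree run the ordinary sequential search, so the per-task overhead is paid once per large subtree instead of once per node.

public int RootSearch(IPosition position, AlphaBetaCutoff root, int depthleft)
{
    var moves = position.Mover.GetMoves().ToList();

    // Each root move is one large, independent workload.
    // Assumes at least one legal move exists at the root.
    return moves.AsParallel()
                .WithDegreeOfParallelism(Environment.ProcessorCount)
                .Select(m => ProcessScore(m, root, depthleft))
                .Max();
}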

Related

Binary Search Tree iterator in C# without parent property

I recently started making a Binary Search Tree in C# for practice. After completing this task I decided to implement ICollection<T> so that my tree can be used with a foreach and, in general, to get more practice.
I've constructed my classes in such a way that I have a Node<T> class and a BinarySearchTree<T> class that contains a Node<T>, a Count integer, and an IsReadOnly boolean. This is my Node class:
internal class Node<T> : INode<T> where T : IComparable<T>
{
    public Node<T> RightChildNode { get; set; }
    public Node<T> LeftChildNode { get; set; }
    public T Key { get; set; }
    //some methods go here
}
and this is my BST class:
public class BinarySearchTree<T> : ICollection<T> where T : IComparable<T>
{
    internal Node<T> Root { get; set; }
    public int Count { get; private set; }
    public bool IsReadOnly => false;
    //some methods go here
}
Now, in order to implement ICollection<T>, I obviously need an enumerator, which I have (partly) implemented as such:
internal class BinarySearchTreeEnumerator<T> : IEnumerator<T> where T : IComparable<T>
{
    private BinarySearchTree<T> _parentTree;
    private BinarySearchTree<T> _currentTree;
    private Node<T> _currentNode => _currentTree.Root;
    private T _currentKey;

    public T Current => _currentNode.Key;

    /// <summary>
    /// generic constructor
    /// </summary>
    /// <param name="tree"></param>
    public BinarySearchTreeEnumerator(BinarySearchTree<T> tree)
    {
        this._parentTree = tree;
    }

    object IEnumerator.Current => Current;

    void IDisposable.Dispose() { }

    //pls
    public bool MoveNext()
    {
        if (_currentTree is null)
        {
            _currentTree = _parentTree;
        }
        var leftSubtree = this._currentTree.GetLeftSubtree();
        var rightSubtree = this._currentTree.GetRightSubtree();
        if (!(leftSubtree is null))
        {
            this._currentTree = leftSubtree;
        }
        else if (!(rightSubtree is null))
        {
            this._currentTree = rightSubtree;
        }
        else
        {
            // stuck: no children and no way back up
        }
    }

    public void Reset()
    {
        _currentTree = _parentTree;
    }
}
Now my issue is quite obviously with the MoveNext() method. It doesn't work because all it does is go down the leftmost possible path of the tree and then get stuck when it reaches the end of that path. I know I could fix this problem by adding a Parent property to my Node<T> class: whenever I reach the end of a path in my tree I could just go one node up and check whether there's a different path. However, this would mean completely changing my original class, and I would prefer not to do that.
Is this just unavoidable? Is there any way to solve this issue without changing my Node<T> class in such a way?
Edit: I made a thing but it's not working :/
public bool MoveNext()
{
    if (_currentNode is null)
    {
        this._currentNode = _parentTree.Root;
        this._nodeStack.Push(_currentNode);
        return true;
    }
    var leftNode = this._currentNode.LeftChildNode;
    var rightNode = this._currentNode.RightChildNode;
    if (!(leftNode is null))
    {
        this._currentNode = leftNode;
        this._nodeStack.Push(_currentNode);
        return true;
    }
    else if (!(rightNode is null))
    {
        this._currentNode = rightNode;
        this._nodeStack.Push(_currentNode);
        return true;
    }
    else
    {
        //current node does not have children
        var parent = this._nodeStack.Pop();
        do
        {
            if (parent is null)
            {
                return false;
            }
        } while (!(parent.RightChildNode is null));
        this._currentNode = parent.RightChildNode;
        this._nodeStack.Push(_currentNode);
        return true;
    }
}
It might be easier to use recursion to implement this; for example:
Recursive version (for balanced trees only)
public IEnumerator<T> GetEnumerator()
{
    return enumerate(Root).GetEnumerator();
}

IEnumerable<T> enumerate(Node<T> root)
{
    if (root == null)
        yield break;
    yield return root.Key;
    foreach (var value in enumerate(root.LeftChildNode))
        yield return value;
    foreach (var value in enumerate(root.RightChildNode))
        yield return value;
}
These are members of BinarySearchTree<T>.
Given the above implementation, the following code:
BinarySearchTree<double> tree = new BinarySearchTree<double>();
tree.Root = new Node<double> { Key = 1.1 };
tree.Root.LeftChildNode = new Node<double> { Key = 2.1 };
tree.Root.RightChildNode = new Node<double> { Key = 2.2 };
tree.Root.LeftChildNode.LeftChildNode = new Node<double> { Key = 3.1 };
tree.Root.LeftChildNode.RightChildNode = new Node<double> { Key = 3.2 };
tree.Root.RightChildNode.LeftChildNode = new Node<double> { Key = 3.3 };
tree.Root.RightChildNode.RightChildNode = new Node<double> { Key = 3.4 };

foreach (var value in tree)
{
    Console.WriteLine(value);
}
produces this output:
1.1
2.1
3.1
3.2
2.2
3.3
3.4
WARNING: Stack space is limited to 1MB for a 32-bit process and 4MB for a 64-bit process, so using recursion is likely to run out of stack space if the tree is degenerate (badly unbalanced).
Non-recursive version
You can implement the non-recursive version fairly simply, like so:
IEnumerable<T> enumerate(Node<T> root)
{
    var stack = new Stack<Node<T>>();
    stack.Push(root);
    while (stack.Count > 0)
    {
        var node = stack.Pop();
        if (node == null)
            continue;
        yield return node.Key;
        stack.Push(node.RightChildNode);
        stack.Push(node.LeftChildNode);
    }
}
This returns the elements in the same order as the recursive version.
Since this version will not run out of stack space even for a degenerate tree, it is preferable to the recursive version.
If you add to your enumerator a

private List<Node<T>> _currentParentNodes = new List<Node<T>>();

and use it like a stack - each time you go down a level you push the current node onto _currentParentNodes, and each time you have to go up you pop the parent node from _currentParentNodes - then all your problems will pop away :-)
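A minimal sketch of that idea (using a Stack<Node<T>> rather than a List for brevity, and assuming the question's Node<T> with Key/LeftChildNode/RightChildNode): an in-order walk that remembers parents explicitly, so Node<T> itself needs no Parent property.

IEnumerable<T> InOrder(Node<T> root)
{
    var stack = new Stack<Node<T>>();
    var current = root;
    while (current != null || stack.Count > 0)
    {
        while (current != null)
        {
            stack.Push(current);              // going down a level: push the parent
            current = current.LeftChildNode;
        }
        current = stack.Pop();                // going up: pop the remembered parent
        yield return current.Key;
        current = current.RightChildNode;
    }
}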
Do you need a depth-first search (DFS) approach? It has a recursive nature which is hard to save as state (it uses the call stack).
Maybe consider the breadth-first search (BFS) approach, which is iterative. You'd only need a currentNode and a Queue. The rule would be:

current = queue.poll();
for each child of current:
    queue.offer(child);

At init you would do:

queue.offer(rootNode);
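In C#, that rule might look like this minimal sketch (assuming the question's Node<T>; this would live on BinarySearchTree<T>):

IEnumerable<T> BreadthFirst(Node<T> root)
{
    // A queue replaces the call stack: poll the current node, offer its children.
    var queue = new Queue<Node<T>>();
    if (root != null) queue.Enqueue(root);      // init: queue.offer(rootNode)
    while (queue.Count > 0)
    {
        var current = queue.Dequeue();          // current = queue.poll()
        yield return current.Key;
        if (current.LeftChildNode != null) queue.Enqueue(current.LeftChildNode);
        if (current.RightChildNode != null) queue.Enqueue(current.RightChildNode);
    }
}

Note this enumerates level by level, so the order differs from the depth-first versions in the other answers.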
I apologize for my poor formatting and syntax, mobile user here.
Andres

C# How to pool the objects of a node tree efficiently?

I have a node class that contains only value-type properties and one reference type: its parent node. When performing tree searches, these nodes are created and destroyed hundreds of thousands of times in a very short time span.
public class Node
{
    public Node Parent { get; set; }
    public int A { get; set; }
    public int B { get; set; }
    public int C { get; set; }
    public int D { get; set; }
}
The tree search looks something like this:
public static Node GetDepthFirstBest(this ITree tree, Node root)
{
    Node bestNode = root;
    float bestScore = tree.Evaluate(root);
    var stack = new Stack<Node>();
    stack.Push(root);
    while (stack.Count > 0)
    {
        var current = stack.Pop();
        float score = tree.Evaluate(current);
        if (score > bestScore)
        {
            bestNode = current;
            bestScore = score;
        }
        var children = tree.GetChildren(current);
        foreach (var c in children) { stack.Push(c); }
    }
    return bestNode;
}
Because this is done in a Mono runtime that has a very old GC, I wanted to try and pool the node objects. However, I am at a loss on how to know when a node object is safe to return to the pool, since other nodes that are still in use might reference it as a parent. At the end of the search, the best node is returned and a list of nodes is formed by walking back through its ancestors. I have full control over how the nodes are created inside the tree, if that's useful.
What options could I try and implement?
So, fortunately, if you're doing a depth-first search, which you appear to be, this is a bit easier. Any time you reach a leaf node, there are two possibilities: that leaf node is part of the current best tree, or it's not.
If it's not, it's safe to return that node to the pool. If it is, we can return to the pool any nodes in our old tree that are not in our own ancestor chain.
Now, if we're not a leaf node, we don't know whether we can be freed until after we've finished checking our children. Then, once all our children are checked, we find out if any of them said they were part of the current best; if so, we keep ourselves.
This does mean we're doing quite a bit more checking.
Here's some pseudo-code:
List bestNodes;

bool evalNode(node, score)
{
    if (childCount == 0)
    {
        if (score > bestScore)
        {
            bestScore = score;
            bestNode = node;
            bestNodes.Add(node);
            return true;
        }
        else
        {
            freeNode(node);
            return false;
        }
    }
    else
    {
        bool inLongest = false;
        foreach (child in children)
        {
            inLongest = evalNode(child, score + 1) || inLongest;
        }
        if (!inLongest)
        {
            freeNode(node);
        }
        else
        {
            freeNode(bestNodes[score]);
            bestNodes[score] = node;
        }
        return inLongest;
    }
}
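The freeNode calls above assume some kind of pool. Purely as an illustration (a hypothetical NodePool, not part of the answer), its shape might be:

public sealed class NodePool
{
    private readonly Stack<Node> _free = new Stack<Node>();

    public Node Rent(Node parent, int a, int b, int c, int d)
    {
        // Reuse a returned node if one exists; otherwise allocate.
        var node = _free.Count > 0 ? _free.Pop() : new Node();
        node.Parent = parent;
        node.A = a; node.B = b; node.C = c; node.D = d;
        return node;
    }

    public void Return(Node node)
    {
        node.Parent = null; // drop the reference so pooled nodes don't pin live ones
        _free.Push(node);
    }
}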
Try using the ref keyword if your node is a struct; this avoids copying the node every time you pass it to a function.
Thus:
struct Node
{
    object obj;
    Node[] children;   // note: a struct cannot contain a field of its own type, so an array is used here
}

public void DoStuffWithNode(ref Node pNode){...Logic...}
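Hypothetical usage, showing that the struct is passed by reference rather than copied:

Node node = new Node();
DoStuffWithNode(ref node);   // no copy is made; the method sees the caller's struct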

How to check CONTAINS with multiple values

I am trying to find all the zones that contain 2 or more zone members, where the search term is a string value. Here is the code I have. In the FindCommonZones method, when I try to cast the result of an Intersect to an ObservableCollection, I get a run-time exception for an invalid cast. The question is: is there a better way to do this? The string array that is the parameter for FindCommonZones() can be any count of strings. StackOverflow had some other similar posts, but none really answered my question - they all seemed to pertain more to SQL.
Some code:
public class Zone
{
    public List<ZoneMember> MembersList = new List<ZoneMember>();

    private string _ZoneName;
    public string zoneName { get { return _ZoneName; } set { _ZoneName = value; } }

    public Zone ContainsMember(string member)
    {
        var contained = this.MembersList.FirstOrDefault(m =>
            m.MemberWWPN.Contains(member) || m.MemberAlias.Contains(member));
        if (contained != null) { return this; }
        else { return null; }
    }
}
public class ZoneMember
// a zone member is a member of a zone
// zones have ports, WWPNs, aliases or all 3
{
    private string _Alias = string.Empty;
    public string MemberAlias { get { return _Alias; } set { _Alias = value; } }

    private FCPort _Port = null;
    public FCPort MemberPort { get { return _Port; } set { _Port = value; } }

    private string _WWPN = string.Empty;
    public string MemberWWPN { get { return _WWPN; } set { _WWPN = value; } }

    private bool _IsLoggedIn;
    public bool IsLoggedIn { get { return _IsLoggedIn; } set { _IsLoggedIn = value; } }

    private string _FCID;
    public string FCID { get { return _FCID; } set { _FCID = value; } }
}
private ObservableCollection<ZoneResult> FindCommonZones(string[] searchterms)
{
    ObservableCollection<ZoneResult> tempcollection =
        new ObservableCollection<ZoneResult>();
    //find the zones for the first search term
    tempcollection = this.FindZones(searchterms[0]);
    //now search for the rest of the search terms and compare
    //them to the existing result
    for (int i = 1; i < searchterms.Count(); i++)
    {
        // this line gives an exception trying to cast
        tempcollection = (ObservableCollection<ZoneResult>)tempcollection
            .Intersect(this.FindZones(searchterms[i]));
    }
    return tempcollection;
}
private ObservableCollection<ZoneResult> FindZones(string searchterm)
// we need to track the vsan where the zone member is found
// so use a foreach to keep track
{
    ObservableCollection<ZoneResult> zonecollection = new ObservableCollection<ZoneResult>();
    foreach (KeyValuePair<int, Dictionary<int, CiscoVSAN>> fabricpair in this.FabricDictionary)
    {
        foreach (KeyValuePair<int, CiscoVSAN> vsanpair in fabricpair.Value)
        {
            var selection = vsanpair.Value.ActiveZoneset.ZoneList
                .Select(z => z.ContainsMember(searchterm))
                .Where(m => m != null)
                .OrderBy(z => z.zoneName);
            if (selection.Count() > 0)
            {
                foreach (Zone zone in selection)
                {
                    foreach (ZoneMember zm in zone.MembersList)
                    {
                        ZoneResult zr = new ZoneResult(zone.zoneName,
                            zm.MemberWWPN, zm.MemberAlias, vsanpair.Key.ToString());
                        zonecollection.Add(zr);
                    }
                }
            }
        }
    }
    return zonecollection;
}
Intersect is actually Enumerable.Intersect and returns an IEnumerable<ZoneResult>. This is not castable to an ObservableCollection because it isn't one - it is an enumeration of the intersecting elements of both collections.
You can, however, create a new ObservableCollection from the enumeration:
tempcollection = new ObservableCollection<ZoneResult>(tempcollection
    .Intersect(this.FindZones(searchterms[i])));
Depending on how many elements you have, how ZoneResult.Equals is implemented, and how many search terms you expect, this implementation may or may not be feasible (FindZones does seem a little overly complicated, O(n^4) at first glance). If it seems to be a resource hog or bottleneck, it's time to optimize; otherwise I would just leave it alone if it works.
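Note that for Intersect (and the Distinct used below) to treat two ZoneResult objects as equal, ZoneResult needs value equality. A sketch of what that could look like (the property names are assumptions inferred from the constructor calls above):

public class ZoneResult : IEquatable<ZoneResult>
{
    // Property names assumed from the constructor calls above.
    public string ZoneName { get; private set; }
    public string MemberWWPN { get; private set; }
    public string MemberAlias { get; private set; }
    public string Vsan { get; private set; }

    public ZoneResult(string zoneName, string wwpn, string alias, string vsan)
    {
        ZoneName = zoneName; MemberWWPN = wwpn; MemberAlias = alias; Vsan = vsan;
    }

    public bool Equals(ZoneResult other)
    {
        return other != null
            && ZoneName == other.ZoneName
            && MemberWWPN == other.MemberWWPN
            && MemberAlias == other.MemberAlias
            && Vsan == other.Vsan;
    }

    public override bool Equals(object obj) { return Equals(obj as ZoneResult); }

    public override int GetHashCode()
    {
        // Combine the field hashes; any stable combination will do.
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (ZoneName ?? "").GetHashCode();
            hash = hash * 31 + (MemberWWPN ?? "").GetHashCode();
            hash = hash * 31 + (MemberAlias ?? "").GetHashCode();
            hash = hash * 31 + (Vsan ?? "").GetHashCode();
            return hash;
        }
    }
}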
One suggested optimization could be the following (incorporating @Keith's suggestion to change ContainsMember to return a bool) - although it is untested, I probably have my SelectManys wrong, and it largely amounts to the same thing, you hopefully get the idea:
private ObservableCollection<ZoneResult> FindCommonZones(string[] searchterms)
{
    var query = this.FabricDictionary
        .SelectMany(fabricpair => fabricpair.Value
            .SelectMany(vsanpair => vsanpair.Value.ActiveZoneset.ZoneList
                .Where(z => searchterms.Any(term => z.ContainsMember(term)))
                .SelectMany(zone => zone.MembersList
                    .Select(zm => new ZoneResult(zone.zoneName, zm.MemberWWPN,
                        zm.MemberAlias, vsanpair.Key.ToString())))))
        .Distinct()
        .OrderBy(zr => zr.zoneName);
    return new ObservableCollection<ZoneResult>(query);
}

Traveling salesman problem, 2-opt algorithm C# implementation

Can someone give me a code sample of the 2-opt algorithm for the traveling salesman problem? For now I'm using nearest neighbour to find the path, but this method is far from perfect, and after some research I found the 2-opt algorithm that could correct that path to an acceptable level. I found some sample apps but without source code.
So I got bored and wrote it. It looks like it works, but I haven't tested it very thoroughly. It assumes triangle inequality, all edges exist, that sort of thing. It works largely like the answer I outlined. It prints each iteration; the last one is the 2-optimized one.
I'm sure it can be improved in a zillion ways.
using System;
using System.Collections.Generic;
using System.Linq;

namespace TSP
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            //create an initial tour out of nearest neighbors
            var stops = Enumerable.Range(1, 10)
                                  .Select(i => new Stop(new City(i)))
                                  .NearestNeighbors()
                                  .ToList();

            //create next pointers between them
            stops.Connect(true);

            //wrap in a tour object
            Tour startingTour = new Tour(stops);

            //the actual algorithm
            while (true)
            {
                Console.WriteLine(startingTour);
                var newTour = startingTour.GenerateMutations()
                                          .MinBy(tour => tour.Cost());
                if (newTour.Cost() < startingTour.Cost()) startingTour = newTour;
                else break;
            }

            Console.ReadLine();
        }

        private class City
        {
            private static Random rand = new Random();

            public City(int cityName)
            {
                X = rand.NextDouble() * 100;
                Y = rand.NextDouble() * 100;
                CityName = cityName;
            }

            public double X { get; private set; }
            public double Y { get; private set; }
            public int CityName { get; private set; }
        }

        private class Stop
        {
            public Stop(City city)
            {
                City = city;
            }

            public Stop Next { get; set; }
            public City City { get; set; }

            public Stop Clone()
            {
                return new Stop(City);
            }

            public static double Distance(Stop first, Stop other)
            {
                return Math.Sqrt(
                    Math.Pow(first.City.X - other.City.X, 2) +
                    Math.Pow(first.City.Y - other.City.Y, 2));
            }

            //list of nodes, including this one, that we can get to
            public IEnumerable<Stop> CanGetTo()
            {
                var current = this;
                while (true)
                {
                    yield return current;
                    current = current.Next;
                    if (current == this) break;
                }
            }

            public override bool Equals(object obj)
            {
                return City == ((Stop)obj).City;
            }

            public override int GetHashCode()
            {
                return City.GetHashCode();
            }

            public override string ToString()
            {
                return City.CityName.ToString();
            }
        }

        private class Tour
        {
            public Tour(IEnumerable<Stop> stops)
            {
                Anchor = stops.First();
            }

            //the set of tours we can make with 2-opt out of this one
            public IEnumerable<Tour> GenerateMutations()
            {
                for (Stop stop = Anchor; stop.Next != Anchor; stop = stop.Next)
                {
                    //skip the next one, since you can't swap with that
                    Stop current = stop.Next.Next;
                    while (current != Anchor)
                    {
                        yield return CloneWithSwap(stop.City, current.City);
                        current = current.Next;
                    }
                }
            }

            public Stop Anchor { get; set; }

            public Tour CloneWithSwap(City firstCity, City secondCity)
            {
                Stop firstFrom = null, secondFrom = null;
                var stops = UnconnectedClones();
                stops.Connect(true);

                foreach (Stop stop in stops)
                {
                    if (stop.City == firstCity) firstFrom = stop;
                    if (stop.City == secondCity) secondFrom = stop;
                }

                //the swap part
                var firstTo = firstFrom.Next;
                var secondTo = secondFrom.Next;

                //reverse all of the links between the swaps
                firstTo.CanGetTo()
                       .TakeWhile(stop => stop != secondTo)
                       .Reverse()
                       .Connect(false);

                firstTo.Next = secondTo;
                firstFrom.Next = secondFrom;

                var tour = new Tour(stops);
                return tour;
            }

            public IList<Stop> UnconnectedClones()
            {
                return Cycle().Select(stop => stop.Clone()).ToList();
            }

            public double Cost()
            {
                return Cycle().Aggregate(
                    0.0,
                    (sum, stop) =>
                        sum + Stop.Distance(stop, stop.Next));
            }

            private IEnumerable<Stop> Cycle()
            {
                return Anchor.CanGetTo();
            }

            public override string ToString()
            {
                string path = String.Join(
                    "->",
                    Cycle().Select(stop => stop.ToString()).ToArray());
                return String.Format("Cost: {0}, Path:{1}", Cost(), path);
            }
        }

        //take an ordered list of nodes and set their next properties
        private static void Connect(this IEnumerable<Stop> stops, bool loop)
        {
            Stop prev = null, first = null;
            foreach (var stop in stops)
            {
                if (first == null) first = stop;
                if (prev != null) prev.Next = stop;
                prev = stop;
            }
            if (loop)
            {
                prev.Next = first;
            }
        }

        //T with the smallest func(T)
        private static T MinBy<T, TComparable>(
            this IEnumerable<T> xs,
            Func<T, TComparable> func)
            where TComparable : IComparable<TComparable>
        {
            return xs.DefaultIfEmpty().Aggregate(
                (minSoFar, elem) =>
                    func(elem).CompareTo(func(minSoFar)) > 0 ? minSoFar : elem);
        }

        //return an ordered nearest neighbor set
        private static IEnumerable<Stop> NearestNeighbors(this IEnumerable<Stop> stops)
        {
            var stopsLeft = stops.ToList();
            for (var stop = stopsLeft.First();
                 stop != null;
                 stop = stopsLeft.MinBy(s => Stop.Distance(stop, s)))
            {
                stopsLeft.Remove(stop);
                yield return stop;
            }
        }
    }
}
Well, your solution to TSP is always going to be far from perfect. No code, but here's how to go about 2-opt. It's not too bad:
You need a class called Stop that has Next, Prev, and City properties, and probably a Stops property that just returns an array containing Next and Prev.
When you link them together, we'll call that a Tour. Tour has a Stop property (any of the stops will do) and an AllStops property, whose getter just walks the stops and returns them.
You need a method that takes a tour and returns its cost. Let's call that Tour.Cost().
You need Tour.Clone(), which just walks the stops and clones them individually.
You need a method that generates the set of tours with two edges switched. Call this Tour.PossibleMutations().
Start with your NN solution.
Call PossibleMutations() on it.
Call Cost() on all of them and take the one with the lowest result.
Repeat until the cost doesn't go down.
If the problem is Euclidean distance and you want the cost of the solution produced by the algorithm to be within 3/2 of the optimum, then you want the Christofides algorithm. ACO and GA don't have a guaranteed cost.

What is the fastest collection in C# to implement a prioritizing queue?

I need to implement a FIFO queue for messages on a game server, so it needs to be as fast as possible. There will be a queue for each user.
The queue will have a maximum size (let's say 2000). The size won't change during runtime.
I need to prioritize messages ONLY if the queue reaches its maximum size, by working backwards and removing a lower-priority message (if one exists) before adding the new message.
A priority is an int with possible values of 1, 3, 5, 7, 10.
There can be multiple messages with the same priority.
A message cannot change its priority once allocated.
The application is asynchronous, so access to the queue needs to be locked.
I'm currently implementing it using a LinkedList as the underlying storage, but have concerns that searching and removing nodes will keep it locked for too long.
Here's the basic code I have at the moment:
public class ActionQueue
{
    private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>();
    private int _maxSize;

    /// <summary>
    /// Initializes a new instance of the ActionQueue class.
    /// </summary>
    public ActionQueue(int maxSize)
    {
        _maxSize = maxSize;
    }

    public int Count
    {
        get { return _actions.Count; }
    }

    public void Enqueue(ClientAction action)
    {
        lock (_actions)
        {
            if (Count < _maxSize)
                _actions.AddLast(action);
            else
            {
                LinkedListNode<ClientAction> node = _actions.Last;
                while (node != null)
                {
                    if (node.Value.Priority < action.Priority)
                    {
                        _actions.Remove(node);
                        _actions.AddLast(action);
                        break;
                    }
                    node = node.Previous;
                }
            }
        }
    }

    public ClientAction Dequeue()
    {
        ClientAction action = null;
        lock (_actions)
        {
            action = _actions.First.Value;
            _actions.RemoveFirst();
        }
        return action;
    }
}
A vetted implementation of a priority queue for C#/.NET can be found in the C5 Generic Collection Library in the C5.IntervalHeap<T> class.
So we have the following properties:
Priorities are well-defined and bounded
Needs to be thread-safe
Queue size is fixed to 2000 messages, where enqueues beyond this drop the lowest item
It's very easy to write a priority queue which supports all of these properties:
public class BoundedPriorityQueue<T>
{
    private object locker;
    private int maxSize;
    private int count;
    private LinkedList<T>[] Buckets;

    public BoundedPriorityQueue(int buckets, int maxSize)
    {
        this.locker = new object();
        this.maxSize = maxSize;
        this.count = 0;
        this.Buckets = new LinkedList<T>[buckets];
        for (int i = 0; i < Buckets.Length; i++)
        {
            this.Buckets[i] = new LinkedList<T>();
        }
    }

    public bool TryUnsafeEnqueue(T item, int priority)
    {
        if (priority < 0 || priority >= Buckets.Length)
            throw new IndexOutOfRangeException("priority");

        Buckets[priority].AddLast(item);
        count++;

        if (count > maxSize)
        {
            UnsafeDiscardLowestItem();
            Debug.Assert(count <= maxSize, "Collection Count should be less than or equal to MaxSize");
        }

        return true; // always succeeds
    }

    public bool TryUnsafeDequeue(out T res)
    {
        LinkedList<T> bucket = Buckets.FirstOrDefault(x => x.Count > 0);
        if (bucket != null)
        {
            res = bucket.First.Value;
            bucket.RemoveFirst();
            count--;
            return true; // found item, succeeds
        }

        res = default(T);
        return false; // didn't find an item, fail
    }

    private void UnsafeDiscardLowestItem()
    {
        LinkedList<T> bucket = Buckets.Reverse().FirstOrDefault(x => x.Count > 0);
        if (bucket != null)
        {
            bucket.RemoveLast();
            count--;
        }
    }

    public bool TryEnqueue(T item, int priority)
    {
        lock (locker)
        {
            return TryUnsafeEnqueue(item, priority);
        }
    }

    public bool TryDequeue(out T res)
    {
        lock (locker)
        {
            return TryUnsafeDequeue(out res);
        }
    }

    public int Count
    {
        get { lock (locker) { return count; } }
    }

    public int MaxSize
    {
        get { return maxSize; }
    }

    public object SyncRoot
    {
        get { return locker; }
    }
}
It supports Enqueue/Dequeue in O(1) time, the TryEnqueue and TryDequeue methods are guaranteed to be thread-safe, and the size of the collection will never exceed the max size you specify in the constructor.
The locks on TryEnqueue and TryDequeue are pretty fine-grained, so you might take a performance hit whenever you need to bulk-load or unload data. If you need to load the queue with a lot of data up front, then lock on the SyncRoot and call the unsafe methods as needed.
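For instance, a bulk load under a single lock might look like this sketch (initialMessages and the Priority property are assumed, not part of the class above):

var queue = new BoundedPriorityQueue<ClientAction>(11, 2000);

lock (queue.SyncRoot)
{
    // Take the lock once for the whole batch instead of once per message.
    foreach (var msg in initialMessages)
        queue.TryUnsafeEnqueue(msg, msg.Priority);
}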
If you have a fixed number of priorities, I'd just create a composite Queue class that wraps two or more private Queues.
A drastically simplified example follows, although you could expand on it by adding a Priority enum and a switch to determine where to Enqueue an item.
class PriorityQueue
{
    private readonly Queue normalQueue = new Queue();
    private readonly Queue urgentQueue = new Queue();

    public object Dequeue()
    {
        if (urgentQueue.Count > 0) { return urgentQueue.Dequeue(); }
        if (normalQueue.Count > 0) { return normalQueue.Dequeue(); }
        return null;
    }

    public void Enqueue(object item, bool urgent)
    {
        if (urgent) { urgentQueue.Enqueue(item); }
        else { normalQueue.Enqueue(item); }
    }
}
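Usage is then just (sketch):

PriorityQueue queue = new PriorityQueue();
queue.Enqueue("low priority message", false);
queue.Enqueue("urgent message", true);
object next = queue.Dequeue();   // urgent items come out first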
I'm assuming you can have duplicate priorities.
There's no container within .NET that allows duplicate keys akin to a C++ multimap. You could do this in a few different ways, for example with a SortedList or SortedDictionary that holds a list of values for each priority key (and grab the first item out of that list as the return value); the SortedDictionary is a balanced tree underneath and should give you good insert and retrieval performance.
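A sketch of that idea, using a SortedDictionary with a queue of values per priority key so duplicate priorities are allowed (ClientAction is the question's type, with its Priority property):

class KeyedPriorityQueue
{
    // Keys are priorities; each key holds a FIFO queue of equal-priority items.
    private readonly SortedDictionary<int, Queue<ClientAction>> _buckets =
        new SortedDictionary<int, Queue<ClientAction>>();

    public void Enqueue(ClientAction action)
    {
        Queue<ClientAction> bucket;
        if (!_buckets.TryGetValue(action.Priority, out bucket))
        {
            bucket = new Queue<ClientAction>();
            _buckets[action.Priority] = bucket;
        }
        bucket.Enqueue(action);
    }
}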
It really depends on the distribution of queue lengths you are likely to see. 2000 is the max, but what's the average, and what does the distribution look like? If N is typically small, a simple List<> with a brute-force search for the next lowest may be a fine choice.
Have you profiled your application to know this is a bottleneck?
"Never underestimate the power of a smart compiler
and a smart CPU with registers and on-chip memory
to run dumb algorithms really fast"
