Many of the test cases are timing out. I've made sure I'm using lazy evaluation everywhere, linear (or better) routines, etc. I'm shocked that this is still not meeting the performance benchmarks.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class Mine
{
    public int Distance { get; set; } // from river
    public int Gold { get; set; }     // in tons
}

class Solution
{
    static void Main(String[] args)
    {
        // helper function for reading lines
        Func<string, int[]> LineToIntArray = (line) => Array.ConvertAll(line.Split(' '), Int32.Parse);
        int[] line1 = LineToIntArray(Console.ReadLine());
        int N = line1[0], // # of mines
            K = line1[1]; // # of pickup locations

        // Populate mine info
        List<Mine> mines = new List<Mine>();
        for (int i = 0; i < N; ++i)
        {
            int[] line = LineToIntArray(Console.ReadLine());
            mines.Add(new Mine() { Distance = line[0], Gold = line[1] });
        }

        // helper function for cost of a move
        Func<Mine, Mine, int> MoveCost = (mine1, mine2) =>
            Math.Abs(mine1.Distance - mine2.Distance) * mine1.Gold;

        int sum = 0; // running total of move costs

        // all move combinations on the current list of mines,
        // given by the indices of the mines
        var indices = Enumerable.Range(0, N);
        var moves = from i1 in indices
                    from i2 in indices
                    where i1 != i2
                    select new int[] { i1, i2 };

        while (N != K) // while number of mines hasn't been consolidated to K
        {
            // get move with the least cost
            var cheapest = moves.Aggregate(
                (prev, cur) => MoveCost(mines[prev[0]], mines[prev[1]])
                             < MoveCost(mines[cur[0]], mines[cur[1]])
                             ? prev : cur
            );
            int i = cheapest[0], // index of source mine of cheapest move
                j = cheapest[1]; // index of destination mine of cheapest move

            // add cost to total
            sum += MoveCost(mines[i], mines[j]);
            // move gold from source to destination
            mines[j].Gold += mines[i].Gold;
            // remove from moves any that had the i-th mine as a destination or source
            moves = from move in moves
                    where move[0] != i && move[1] != i
                    select move;
            // update number of mines after consolidation
            --N;
        }
        Console.WriteLine(sum);
    }
}
Lazy evaluation will not make bad algorithms perform better. It will just delay when those performance problems will affect you. What lazy evaluation can help with is space complexity, i.e. reducing the amount of memory you need to execute your algorithm. Since the data is generated lazily, you will not (necessarily) have all the data in the memory at the same time.
However, relying on lazy evaluation to fix your space (or time) complexity problems can easily shoot you in the foot. Look at the following example code:
var moves = Enumerable.Range(0, 5).Select(x => {
    Console.WriteLine("Generating");
    return x;
});
var aggregate = moves.Aggregate((p, c) => {
    Console.WriteLine("Aggregating");
    return p + c;
});
var newMoves = moves.Where(x => {
    Console.WriteLine("Filtering");
    return x % 2 == 0;
});
newMoves.ToList();
As you can see, both the aggregate and the newMoves rely on the lazily evaluated moves enumerable. Since the original count of moves is 5, we will see 4 “Aggregating” lines in the output, and 5 “Filtering” lines. But how often do you expect “Generating” to appear in the console?
The answer is 10. This is because moves is a generator and is being evaluated lazily. When multiple places request it, an iterator will be created for each, which ultimately means that the generator will execute multiple times (to generate independent results).
This is not necessarily a problem, but in your case, it very quickly becomes one. Assume that we continue the above example code with another round of aggregating. That second aggregate will consume newMoves, which in turn will consume the original moves. So to aggregate, we will re-run the original moves generator, and the newMoves generator. And if we were to add another level of filtering, the next round of aggregating would run three interlocked generators, again rerunning the original moves generator.
Since your original moves generator creates an enumerable of quadratic size, and has an actual time complexity of O(n²), this is an actual problem. With each iteration, you add another round of filtering which is linear in the size of the moves enumerable, and you consume the enumerable completely for the aggregation. So you end up with O(n² + n³ + n⁴ + n⁵ + …), the sum of n^j for j from 2 up to N−K. That is a very bad time complexity, and all just because you were trying to save memory by evaluating the moves lazily.
So the first step to make this better is to avoid lazy evaluation. You are constantly iterating moves and filtering it, so you should have it in memory. Yes, that means that you have an array of quadratic size, but you won’t actually need more than that. This also limits the time complexity you have. Yes, you still need to filter the list in linear time (so O(n²) since the size is n²) and you do that inside a loop, so you will end up with cubic time (O(n³)) but that would already be your upper limit (iterating the list a constant amount of times within the loop will only increase the time complexity by a constant, and that doesn’t matter).
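As a minimal sketch of that first step (reusing the question's mines list, MoveCost helper, and N/K/sum variables), the moves can be materialized once and then filtered in place, so no generator ever re-runs:

```csharp
// Materialize the candidate moves once, then mutate the List on each
// iteration instead of stacking lazy Where() clauses on a generator.
var moveList = (from i1 in Enumerable.Range(0, N)
                from i2 in Enumerable.Range(0, N)
                where i1 != i2
                select new[] { i1, i2 }).ToList();

while (N != K)
{
    var cheapest = moveList.Aggregate(
        (prev, cur) => MoveCost(mines[prev[0]], mines[prev[1]])
                     < MoveCost(mines[cur[0]], mines[cur[1]])
                     ? prev : cur);
    int i = cheapest[0], j = cheapest[1];
    sum += MoveCost(mines[i], mines[j]);
    mines[j].Gold += mines[i].Gold;
    // RemoveAll mutates the existing list: one linear pass, no re-generation.
    moveList.RemoveAll(m => m[0] == i || m[1] == i);
    --N;
}
```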
Once you have done that, you should consider your original problem, think about what you are actually doing. I believe you could probably reduce the computational complexity further if you use the information you have better, and use data structures (e.g. hash sets, or some graph where the move cost is already stored within) that aid you in the filtering and aggregation. I can’t give you exact ideas since I don’t know your original problem, but I’m sure there is something you can do.
Finally, if you have performance problems, always remember to profile your code. The profiler will tell you which parts of your code are the most expensive, so you can get a clear idea of what you should try to optimize and what you don't need to focus on (since optimizing already fast parts will not make you any faster).
How can I speed up my code, and what am I doing wrong? I have a two-dimensional array of cells that stores data about what is in them. My map is only 100x100 with 10 colonists, and this already causes freezes, although my game is still quite raw.
And when should I build a route for a colonist? Every step he takes? Because if an unexpectedly built wall appears in his way, he will have to change the route immediately.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PathFinding : MonoBehaviour
{
    public MainWorld MainWorld;
    public MainWorld.Cell CurrentWorldCell;
    public MainWorld.Cell NeighborWorldCell;
    public List<MainWorld.Cell> UnvisitedWorldCells;
    public List<MainWorld.Cell> VisitedWorldCells;
    public Dictionary<MainWorld.Cell, MainWorld.Cell> PathTraversed;
    public List<MainWorld.Cell> PathToObject;

    public List<MainWorld.Cell> FindPath(MainWorld.Cell StartWorldCell, string ObjectID)
    {
        CurrentWorldCell = new MainWorld.Cell();
        NeighborWorldCell = new MainWorld.Cell();
        UnvisitedWorldCells = new List<MainWorld.Cell>();
        VisitedWorldCells = new List<MainWorld.Cell>();
        PathTraversed = new Dictionary<MainWorld.Cell, MainWorld.Cell>();

        UnvisitedWorldCells.Add(StartWorldCell);
        while (UnvisitedWorldCells.Count > 0)
        {
            CurrentWorldCell = UnvisitedWorldCells[0];
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x, CurrentWorldCell.Position.y + 1];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x + 1, CurrentWorldCell.Position.y];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x, CurrentWorldCell.Position.y - 1];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x - 1, CurrentWorldCell.Position.y];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);

            UnvisitedWorldCells.Remove(CurrentWorldCell);
            if (CurrentWorldCell.ObjectID == ObjectID)
            {
                return CreatePath(StartWorldCell, CurrentWorldCell);
            }
        }
        return null;
    }

    public void CheckWorldCell(MainWorld.Cell CurrentWorldCell, MainWorld.Cell NeighborWorldCell, string ObjectID)
    {
        if (VisitedWorldCells.Contains(NeighborWorldCell) == false)
        {
            if (NeighborWorldCell.IsPassable == true ||
                NeighborWorldCell.ObjectID == ObjectID)
            {
                UnvisitedWorldCells.Add(NeighborWorldCell);
                VisitedWorldCells.Add(NeighborWorldCell);
                PathTraversed.Add(NeighborWorldCell, CurrentWorldCell);
            }
            else
            {
                VisitedWorldCells.Add(NeighborWorldCell);
            }
        }
    }

    public List<MainWorld.Cell> CreatePath(MainWorld.Cell StartWorldCell, MainWorld.Cell EndWorldCell)
    {
        PathToObject = new List<MainWorld.Cell>();
        PathToObject.Add(EndWorldCell);
        while (PathToObject[PathToObject.Count - 1] != StartWorldCell)
        {
            PathToObject.Add(PathTraversed[PathToObject[PathToObject.Count - 1]]);
        }
        PathToObject.Reverse();
        return PathToObject;
    }
}
On each iteration of your pathfinding loop, there is a check for each of the four neighboring cells. Within each check, you are searching the VisitedWorldCells list; if there are many cells in that list, that linear search alone could be your first issue. Next, you add an element to one or three lists, and manipulating lists is slow. To create a path, you make yet another list and add a bunch of elements to it. Doing all of this 10 times per frame is unreasonable.
Use preallocated 2d boolean arrays. You know how large the map is.
bool[,] array = new bool[mapSizeX, mapSizeY];
Using this, you can check the cells directly with something like
if (!VisitedWorldCells[14, 16])
Instead of checking the entire VisitedWorldCells list (which also could be a slow comparison depending on what the MainWorld.Cell type is).
Preallocated arrays and directly checking/setting the values will speed this up by orders of magnitude.
As far as I can tell your algorithm is a simple breadth-first search. The first step should be to replace the UnvisitedWorldCells list (often called the 'open set') with a queue, and the VisitedWorldCells (often called the 'closed set') with a HashSet. Or just use a 2D array for a constant-time lookup. You might also consider a more compact representation of your node, like a simple pair of x/y coordinates.
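A minimal sketch of that structure, assuming a plain bool[,] walkability grid rather than the MainWorld.Cell type from the question:

```csharp
using System.Collections.Generic;

static class GridBfs
{
    // BFS with a Queue for the frontier and a 2D bool array for O(1)
    // visited checks; cameFrom lets us rebuild the path afterwards.
    public static List<(int x, int y)> FindPath(bool[,] passable, (int x, int y) start, (int x, int y) goal)
    {
        int w = passable.GetLength(0), h = passable.GetLength(1);
        var visited = new bool[w, h];
        var cameFrom = new Dictionary<(int, int), (int, int)>();
        var frontier = new Queue<(int x, int y)>();
        frontier.Enqueue(start);
        visited[start.x, start.y] = true;
        var dirs = new (int dx, int dy)[] { (0, 1), (1, 0), (0, -1), (-1, 0) };
        while (frontier.Count > 0)
        {
            var c = frontier.Dequeue();
            if (c == goal)
            {
                // Walk cameFrom back to the start to recover the path.
                var path = new List<(int x, int y)> { c };
                while (c != start) { c = cameFrom[c]; path.Add(c); }
                path.Reverse();
                return path;
            }
            foreach (var (dx, dy) in dirs)
            {
                var n = (x: c.x + dx, y: c.y + dy);
                if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h) continue;
                if (visited[n.x, n.y] || !passable[n.x, n.y]) continue;
                visited[n.x, n.y] = true;
                cameFrom[n] = c;
                frontier.Enqueue(n);
            }
        }
        return null; // target unreachable
    }
}
```

No list is ever scanned here: the visited check, the frontier operations, and the predecessor lookup are all constant time per cell.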
The next improvement step would be Dijkstra's shortest-path algorithm, and there are plenty of examples of how it works if you search around a bit. It works in a similar way, but uses a priority queue (usually a min-heap) for the unvisited cells; this allows different 'costs' to be specified.
The next step should probably be A*, this is an 'optimal' algorithm for pathfinding, but if you only have 100x100 nodes I would not expect a huge difference. This uses a heuristic to guess what nodes to visit first, so the priority queue does not just use the cost to traverse to the node, but also the estimated remaining cost.
As always when it comes to performance you should measure and profile your code to find bottlenecks.
When to pathfind would be a more complicated issue. A fairly simple and probably effective solution would be to create a path once, and just follow it. If a node has been blocked you can redo the path finding then. You could also redo pathfinding every n seconds in case a better path has appeared. If you have multiple units you might introduce additional requirements like "pushing" blocking units out of the way, trying to keep groups of units in a compact group as they navigate tight passages etc. You can probably spend years taking every possible feature in consideration.
I'm implementing the K-nearest neighbours classification algorithm in C# for a training and testing set of about 20,000 samples each, and 25 dimensions.
There are only two classes, represented by '0' and '1' in my implementation. For now, I have the following simple implementation :
// testSamples and trainSamples consist of about 20k vectors each with 25 dimensions
// trainClasses contains 0 or 1 signifying the corresponding class for each sample in trainSamples
static int[] TestKnnCase(IList<double[]> trainSamples, IList<double[]> testSamples, IList<int> trainClasses, int K)
{
    Console.WriteLine("Performing KNN with K = " + K);
    var testResults = new int[testSamples.Count()];
    var testNumber = testSamples.Count();
    var trainNumber = trainSamples.Count();

    // Declaring these here so that I don't have to 'new' them over and over again in the main loop,
    // just to save some overhead
    var distances = new double[trainNumber][];
    for (var i = 0; i < trainNumber; i++)
    {
        distances[i] = new double[2]; // Will store both distance and index in here
    }

    // Performing KNN ...
    for (var tst = 0; tst < testNumber; tst++)
    {
        // For every test sample, calculate distance from every training sample
        Parallel.For(0, trainNumber, trn =>
        {
            var dist = GetDistance(testSamples[tst], trainSamples[trn]);
            // Storing distance as well as index
            distances[trn][0] = dist;
            distances[trn][1] = trn;
        });

        // Sort distances and take top K (?What happens in case of multiple points at the same distance?)
        var votingDistances = distances.AsParallel().OrderBy(t => t[0]).Take(K);

        // Do a 'majority vote' to classify test sample
        var yea = 0.0;
        var nay = 0.0;
        foreach (var voter in votingDistances)
        {
            if (trainClasses[(int)voter[1]] == 1)
                yea++;
            else
                nay++;
        }
        if (yea > nay)
            testResults[tst] = 1;
        else
            testResults[tst] = 0;
    }
    return testResults;
}

// Calculates and returns square of Euclidean distance between two vectors
static double GetDistance(IList<double> sample1, IList<double> sample2)
{
    var distance = 0.0;
    // assume sample1 and sample2 are valid i.e. same length
    for (var i = 0; i < sample1.Count; i++)
    {
        var temp = sample1[i] - sample2[i];
        distance += temp * temp;
    }
    return distance;
}
This takes quite a bit of time to execute. On my system it takes about 80 seconds to complete. How can I optimize this, while ensuring that it would also scale to larger number of data samples? As you can see, I've tried using PLINQ and parallel for loops, which did help (without these, it was taking about 120 seconds). What else can I do?
I've read about KD-trees being efficient for KNN in general, but every source I read stated that they're not efficient for higher dimensions.
I also found this stackoverflow discussion about this, but it seems like this is 3 years old, and I was hoping that someone would know about better solutions to this problem by now.
I've looked at machine learning libraries in C#, but for various reasons I don't want to call R or C code from my C# program, and some other libraries I saw were no more efficient than the code I've written. Now I'm just trying to figure out how I could write the most optimized code for this myself.
Edited to add - I cannot reduce the number of dimensions using PCA or something. For this particular model, 25 dimensions are required.
Whenever you are attempting to improve the performance of code, the first step is to analyze the current performance to see exactly where it is spending its time. A good profiler is crucial for this. In my previous job I was able to use the dotTrace profiler to good effect; Visual Studio also has a built-in profiler. A good profiler will tell you exactly where your code is spending time, method by method or even line by line.
That being said, a few things come to mind in reading your implementation:
You are parallelizing some inner loops. Could you parallelize the outer loop instead? There is a small but nonzero cost associated to a delegate call (see here or here) which may be hitting you in the "Parallel.For" callback.
Similarly, there is a small performance penalty for indexing through an array via its IList interface. You might consider declaring the parameters of "GetDistance()" as arrays explicitly.
How large is K as compared to the size of the training array? You are completely sorting the "distances" array and taking the top K, but if K is much smaller than the array size it might make sense to use a partial sort / selection algorithm, for instance by using a SortedSet and evicting the largest element whenever the set size exceeds K.
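One way to sketch that bounded selection for the inner distance loop (the (dist, idx) tuple ordering and the surrounding variable names from the question are assumed):

```csharp
// Keep only the K best (distance, index) pairs in a SortedSet; when the
// set grows past K, evict the current maximum. O(n log K) per test sample
// instead of the O(n log n) full sort.
var best = new SortedSet<(double dist, int idx)>();
for (int trn = 0; trn < trainNumber; trn++)
{
    double dist = GetDistance(testSamples[tst], trainSamples[trn]);
    if (best.Count < K)
    {
        best.Add((dist, trn));
    }
    else if (dist < best.Max.dist)
    {
        best.Remove(best.Max);   // drop the worst of the current K
        best.Add((dist, trn));
    }
}
// `best` now holds the K nearest neighbours, ready for the majority vote.
```

Including the training index in the tuple also keeps entries distinct when two samples lie at exactly the same distance.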
I'm using Breadth First Search to solve a rush hour game. It works fine, but it takes really long on difficult boards. I am using a taboo list to avoid states I already discovered, to avoid insane memory usage and improve the run time.
I think this taboo list is the main cause of the long run time. It does drastically improve the time compared to normal BFS, but it's still too slow. Currently I'm using a normal list (C#'s List and the List.Contains method). I'm sure there are better options.
I'm storing my boards as a list of Cars + a width, height and target point (where your car should end up). a Car is stored as 2 points (top-left and bottom-right) that completely describe the car (since they can only be placed horizontally or vertically).
Some things I can think of:
A trie
Something with hash codes
Huge Dictionaries (?)
What is a good/the best data structure for my problem? Thanks for the help.
Edit 1:
Pseudocode (X is the taboo list type):
void Solve(Board b)
    Queue q = {b};
    X taboo = {b};
    while (q not empty)
        Board next = q.Dequeue();
        foreach (Board succ in next.Successors)
            if (succ.IsSolved)
                PrintSolution();
                return;
            if (!taboo.Contains(succ))
                q.Enqueue(succ);
                taboo.Add(succ);
    WriteLine("No solution found");
Edit 2:
The solution was using a HashSet. (see below)
Found the answer (or at least AN answer) thanks to other people's comments. I used C#'s HashSet datastructure with the following hash function for boards:
public override int GetHashCode()
{
    int hash = 0;
    int mul = 1;
    foreach (Car c in Cars.Values)
    {
        hash += (c.P1.X + c.P1.Y * W) * mul;
        mul += W * H;
    }
    return hash;
}
This seems to work fine and give unique hash codes for every board (correct me if I'm wrong), assuming cars are always stored in the same order and P1 represents the car's top-left point.
With this solution, I can now solve Rush Hour boards that require 50 moves in less than 0.5s, with reasonable amounts of memory usage.
This one is inefficient but it works for me, since my RushHour overall is pretty fast.
public string HashCode()
{
    StringBuilder str = new StringBuilder();
    foreach (Car car in this.Positions)
    {
        //#yolo
        str.Append(string.Format("#{0}({1},{2})#", car.Original, car.Vector.X, car.Vector.Y));
    }
    return str.ToString();
}
Say I have a sorted list of 1000 or so unique decimals, arranged by value.
List<decimal> decList
How can I get a random x number of decimals from a list of unique decimals that total up to y?
private List<decimal> getWinningValues(int xNumberToGet, decimal yTotalValue)
{
}
Is there any way to avoid a long processing time on this? My idea so far is to take xNumberToGet random numbers from the pool. Something like (cool way to get random selection from a list)
foreach (decimal d in decList.OrderBy(x => randomInstance.Next()).Take(xNumberToGet))
{
}
Then I might check the total of those, and if the total is less, I might slowly shift the numbers up (to the next available number). If the total is more, I might shift the numbers down. I'm still not sure how to implement this, or if there is a better design readily available. Any help would be much appreciated.
Ok, start with a little extension I got from this answer,
public static IEnumerable<IEnumerable<T>> Combinations<T>(
    this IEnumerable<T> source,
    int k)
{
    if (k == 0)
    {
        return new[] { Enumerable.Empty<T>() };
    }
    return source.SelectMany((e, i) =>
        source.Skip(i + 1).Combinations(k - 1)
              .Select(c => (new[] { e }).Concat(c)));
}
this gives you a pretty efficient method to yield all the combinations with k members, without repetition, from a given IEnumerable. You could make good use of this in your implementation.
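For example, applied to a small list it enumerates each unordered pair exactly once:

```csharp
// All 2-element combinations of {1, 2, 3}, without repetition.
var items = new List<int> { 1, 2, 3 };
foreach (var combo in items.Combinations(2))
    Console.WriteLine(string.Join(",", combo));
// 1,2
// 1,3
// 2,3
```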
Bear in mind, if the IEnumerable and k are sufficiently large this could take some time, i.e. much longer than you have. So, I've modified your function to take a CancellationToken.
private static IEnumerable<decimal> GetWinningValues(
    IEnumerable<decimal> allValues,
    int numberToGet,
    decimal targetValue,
    CancellationToken canceller)
{
    IList<decimal> currentBest = null;
    var currentBestGap = decimal.MaxValue;
    var locker = new object();

    allValues.Combinations(numberToGet)
        .AsParallel()
        .WithCancellation(canceller)
        .TakeWhile(c => currentBestGap != decimal.Zero)
        .ForAll(c =>
        {
            var gap = Math.Abs(c.Sum() - targetValue);
            if (gap < currentBestGap)
            {
                lock (locker)
                {
                    currentBestGap = gap;
                    currentBest = c.ToList();
                }
            }
        });
    return currentBest;
}
I had an idea that you could sort the initial list and stop iterating the combinations at the point where the sum must exceed the target. After some consideration, it's not trivial to identify that point, and the cost of checking may exceed the benefit. The benefit would have to be balanced against some function of the target value and the mean of the set.
I still think further optimization is possible but I also think that this work has already been done and I'd just need to look it up in the right place.
There are k such subsets of decList (k might be 0).
Assuming that you want to select each one with uniform probability 1/k, I think you basically need to do the following:
iterate over all the matching subsets
select one
Step 1 is potentially a big task, you can look into the various ways of solving the "subset sum problem" for a fixed subset size, and adapt them to generate each solution in turn.
Step 2 can be done either by making a list of all the solutions and choosing one or (if that might take too much memory) by using the clever streaming random selection algorithm.
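The streaming random selection mentioned in step 2 is reservoir sampling; a minimal sketch for picking one solution uniformly while enumerating them (EnumerateMatchingSubsets is a hypothetical generator standing in for step 1):

```csharp
// Reservoir sampling with k = 1: the i-th candidate replaces the current
// pick with probability 1/i, which yields a uniform choice overall
// without ever holding more than one solution in memory.
var rng = new Random();
List<decimal> picked = null;
int count = 0;
foreach (var solution in EnumerateMatchingSubsets())  // hypothetical
{
    count++;
    if (rng.Next(count) == 0)
        picked = solution;
}
```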
If your data is likely to have lots of such subsets, then generating them all might be incredibly slow. In that case you might try to identify groups of them at a time. You'd have to know the size of the group without visiting its members one by one, then you can choose which group to use weighted by its size, then you've reduced the problem to selecting one of that group at random.
If you don't need to select with uniform probability then the problem might become easier. At the best case, if you don't care about the distribution at all then you can return the first subset-sum solution you find -- whether you'd call that "at random" is another matter...
Let's assume we have a large list of points List<Point> pointList (already stored in memory) where each Point contains X, Y, and Z coordinate.
Now, I would like to select for example N% of points with biggest Z-values of all points stored in pointList. Right now I'm doing it like that:
double N = 0.05; // selecting only 5% of points
double cutoffValue = pointList
    .OrderBy(p => p.Z) // First bottleneck - creates sorted copy of all data
    .ElementAt((int)(pointList.Count * (1 - N))).Z;
List<Point> selectedPoints = pointList.Where(p => p.Z >= cutoffValue).ToList();
But I have here two memory usage bottlenecks: first during OrderBy (more important) and second during selecting the points (this is less important, because we usually want to select only small amount of points).
Is there any way of replacing OrderBy (or maybe other way of finding this cutoff point) with something that uses less memory?
The problem is quite important, because LINQ copies the whole dataset, and for the big files I'm processing it sometimes hits a few hundred MB.
Write a method that iterates through the list once and maintains a set of the M largest elements. Each step will only require O(log M) work to maintain the set, and you can have O(M) memory and O(N log M) running time.
public static IEnumerable<TSource> TakeLargest<TSource, TKey>(
    this IEnumerable<TSource> items, Func<TSource, TKey> selector, int count)
{
    var set = new SortedDictionary<TKey, List<TSource>>();
    var resultCount = 0;
    var first = default(KeyValuePair<TKey, List<TSource>>);
    foreach (var item in items)
    {
        // If the key is already smaller than the smallest
        // item in the set, we can ignore this item
        var key = selector(item);
        if (first.Value == null ||
            resultCount < count ||
            Comparer<TKey>.Default.Compare(key, first.Key) >= 0)
        {
            // Add next item to set
            if (!set.ContainsKey(key))
            {
                set[key] = new List<TSource>();
            }
            set[key].Add(item);
            if (first.Value == null)
            {
                first = set.First();
            }
            // Remove smallest item from set
            resultCount++;
            if (resultCount - first.Value.Count >= count)
            {
                set.Remove(first.Key);
                resultCount -= first.Value.Count;
                first = set.First();
            }
        }
    }
    return set.Values.SelectMany(values => values);
}
That will include more than count elements if there are ties, as your implementation does now.
You could sort the list in place, using List<T>.Sort, which uses the Quicksort algorithm. But of course, your original list would be sorted, which is perhaps not what you want...
pointList.Sort((a, b) => b.Z.CompareTo(a.Z));
var selectedPoints = pointList.Take((int)(pointList.Count * N)).ToList();
If you don't mind the original list being sorted, this is probably the best balance between memory usage and speed
You can use Indexed LINQ to put index on the data which you are processing. This can result in noticeable improvements in some cases.
If you combine the two there is a chance a little less work will be done:
List<Point> selectedPoints = pointList
    .OrderByDescending(p => p.Z) // First bottleneck - creates sorted copy of all data
    .Take((int)(pointList.Count * N))
    .ToList();
But basically this kind of ranking requires sorting, your biggest cost.
A few more ideas:
if you use a class Point (instead of a struct Point) there will be much less copying.
you could write a custom sort that only bothers to move the top 5% up. Something like (don't laugh) BubbleSort.
If your list is in memory already, I would sort it in place instead of making a copy (unless you need it un-sorted again, that is, in which case you'll have to weigh keeping two copies in memory against loading it again from storage):
pointList.Sort((x,y) => y.Z.CompareTo(x.Z)); //this should sort it in desc. order
Also, not sure how much it will help, but it looks like you're going through your list twice - once to find the cutoff value, and once again to select them. I assume you're doing that because you want to let all ties through, even if it means selecting more than 5% of the points. However, since they're already sorted, you can use that to your advantage and stop when you're finished.
double cutoffValue = pointList[(int)(pointList.Count * N)].Z;
List<Point> selectedPoints = pointList.TakeWhile(p => p.Z >= cutoffValue)
                                      .ToList();
Unless your list is extremely large, it's much more likely to me that cpu time is your performance bottleneck. Yes, your OrderBy() might use a lot of memory, but it's generally memory that for the most part is otherwise sitting idle. The cpu time really is the bigger concern.
To improve cpu time, the most obvious thing here is to not use a list. Use an IEnumerable instead. You do this by simply not calling .ToList() at the end of your where query. This will allow the framework to combine everything into one iteration of the list that runs only as needed. It will also improve your memory use because it avoids loading the entire query into memory at once, and instead defers it to only load one item at a time as needed. Also, use .Take() rather than .ElementAt(). It's a lot more efficient.
double N = 0.05; // selecting only 5% of points
int count = (int)(pointList.Count * N);
var selectedPoints = pointList.OrderByDescending(p => p.Z).Take(count);
That out of the way, there are three cases where memory use might actually be a problem:
Your collection really is so large as to fill up memory. For a simple Point structure on a modern system we're talking millions of items. This is really unlikely. On the off chance you have a system this large, your solution is to use a relational database, which can keep these items on disk relatively efficiently.
You have a moderate size collection, but there are external performance constraints, such as needing to share system resources with many other processes as you might find in an asp.net web site. In this case, the answer is either to 1) again put the points in a relational database or 2) offload the work to the client machines.
Your collection is just large enough to end up on the Large Object Heap, and the buffer used by the OrderBy() call is also placed on the LOH. Now what happens is that the garbage collector will not properly compact memory after your OrderBy() call, and over time you get a lot of memory that is not used but still reserved by your program. In this case, the solution is, unfortunately, to break your collection up into multiple groups that are each individually small enough not to trigger use of the LOH.
Update:
Reading through your question again, I see you're reading very large files. In that case, the best performance can be obtained by writing your own code to parse the files. If the count of items is stored near the top of the file you can do much better, or even if you can estimate the number of records based on the size of the file (guess a little high to be sure, and then truncate any extras after finishing), you can then build your final collection as your read. This will greatly improve cpu performance and memory use.
I'd do it by implementing "half" a quicksort.
Consider your original set of points, P, where you are looking for the "top" N items by Z coordinate.
Choose a pivot x in P.
Partition P into L = {y in P | y < x} and U = {y in P | x <= y}.
If N = |U| then you're done.
If N < |U| then recurse with P := U.
Otherwise you need to add some items to U: recurse with N := N - |U|, P := L to add the remaining items.
If you choose your pivot wisely (e.g., median of, say, five random samples) then this will run in O(n log n) time.
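The partitioning steps above can be sketched as follows (the pivot choice is kept naive here, and the question's Point type with a Z property is assumed):

```csharp
// "Half a quicksort": recursively partition until the kept part
// holds exactly the N points with the largest Z values.
static List<Point> TopN(List<Point> p, int n)
{
    if (n <= 0) return new List<Point>();
    if (n >= p.Count) return new List<Point>(p);
    double pivot = p[p.Count / 2].Z;               // naive pivot choice
    var upper = p.Where(y => y.Z >= pivot).ToList();
    var lower = p.Where(y => y.Z < pivot).ToList();
    if (n == upper.Count) return upper;            // done
    if (lower.Count == 0)                          // all Z equal: any n will do
        return upper.Take(n).ToList();
    if (n < upper.Count) return TopN(upper, n);    // answer lies in the upper part
    upper.AddRange(TopN(lower, n - upper.Count));  // need some of the lower part too
    return upper;
}
```

With random or median-of-samples pivots, each recursion works on a fraction of the previous set, which is what keeps the total work bounded.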
Hmmmm, thinking some more, you may be able to avoid creating new sets altogether, since essentially you're just looking for an O(n log n) way of finding the Nth greatest item from the original set. Yes, I think this would work, so here's suggestion number 2:
Make a traversal of P, finding the least and greatest items, A and Z, respectively.
Let M be the mean of A and Z (remember, we're only considering Z coordinates here).
Count how many items there are in the range [M, Z], call this Q.
If Q < N then the Nth greatest item in P is somewhere in [A, M). Try M := (A + M)/2.
If N < Q then the Nth greatest item in P is somewhere in [M, Z]. Try M := (M + Z)/2.
Repeat until we find an M such that Q = N.
Now traverse P, removing all items greater than or equal to M.
That's definitely O(n log n) and creates no extra data structures (except for the result).
Howzat?
You might use something like this:
pointList.Sort(); // Use your own compare here if needed

// Skip OrderBy because the list is sorted (and not copied)
double cutoffValue = pointList[(int)(pointList.Count * (1 - N))].Z;

// Skip ToList to avoid another copy of the list
IEnumerable<Point> selectedPoints = pointList.Where(p => p.Z >= cutoffValue);
If you want a small percentage of points ordered by some criterion, you'll be better served using a Priority queue data structure; create a size-limited queue(with the size set to however many elements you want), and then just scan through the list inserting every element. After the scan, you can pull out your results in sorted order.
This has the benefit of being O(n log p) instead of O(n log n) where p is the number of points you want, and the extra storage cost is also dependent on your output size instead of the whole list.
int resultSize = (int)(pointList.Count * N);
FixedSizedPriorityQueue<Point> q =
    new FixedSizedPriorityQueue<Point>(resultSize, p => p.Z);
q.AddEach(pointList);
List<Point> selectedPoints = q.ToList();
Now all you have to do is implement a FixedSizedPriorityQueue that adds elements one at a time and discards the smallest element when it is full.
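A minimal sketch of such a queue, backed by a SortedSet (the FixedSizedPriorityQueue name and its AddEach/ToList surface come from the answer's hypothetical API; the sequence number in the tuple is an assumption to keep distinct points with equal keys from colliding):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FixedSizedPriorityQueue<T>
{
    private readonly SortedSet<(double key, long seq, T item)> _set =
        new SortedSet<(double key, long seq, T item)>();
    private readonly int _capacity;
    private readonly Func<T, double> _selector;
    private long _seq;

    public FixedSizedPriorityQueue(int capacity, Func<T, double> selector)
    {
        _capacity = capacity;
        _selector = selector;
    }

    public void Add(T item)
    {
        _set.Add((_selector(item), _seq++, item));
        if (_set.Count > _capacity)
            _set.Remove(_set.Min);   // evict the smallest key, keeping the top N
    }

    public void AddEach(IEnumerable<T> items)
    {
        foreach (var i in items) Add(i);
    }

    public List<T> ToList() => _set.Select(t => t.item).ToList();
}
```

Each Add is O(log p), giving the O(n log p) total the answer describes.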
You wrote that you are working with a DataSet. If so, you can use a DataView to sort your data once and use it for all future row accesses.
Just tried with 50,000 rows and 100 times accessing 30% of them. My performance results are:
Sort With Linq: 5.3 seconds
Use DataViews: 0.01 seconds
Give it a try.
[TestClass]
public class UnitTest1 {
    class MyTable : TypedTableBase<MyRow> {
        public MyTable() {
            Columns.Add("Col1", typeof(int));
            Columns.Add("Col2", typeof(int));
        }
        protected override DataRow NewRowFromBuilder(DataRowBuilder builder) {
            return new MyRow(builder);
        }
    }

    class MyRow : DataRow {
        public MyRow(DataRowBuilder builder) : base(builder) {
        }
        public int Col1 { get { return (int)this["Col1"]; } }
        public int Col2 { get { return (int)this["Col2"]; } }
    }

    DataView _viewCol1Asc;
    DataView _viewCol2Desc;
    MyTable _table;
    int _countToTake;

    [TestMethod]
    public void MyTestMethod() {
        _table = new MyTable();
        int count = 50000;
        for (int i = 0; i < count; i++) {
            _table.Rows.Add(i, i);
        }
        _countToTake = _table.Rows.Count / 30;

        Console.WriteLine("SortWithLinq");
        RunTest(SortWithLinq);
        Console.WriteLine("Use DataViews");
        RunTest(UseSortedDataViews);
    }

    private void RunTest(Action method) {
        int iterations = 100;
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) {
            method();
        }
        watch.Stop();
        Console.WriteLine("    {0}", watch.Elapsed);
    }

    private void UseSortedDataViews() {
        if (_viewCol1Asc == null) {
            _viewCol1Asc = new DataView(_table, null, "Col1 ASC", DataViewRowState.Unchanged);
            _viewCol2Desc = new DataView(_table, null, "Col2 DESC", DataViewRowState.Unchanged);
        }
        var rows = _viewCol1Asc.Cast<DataRowView>().Take(_countToTake).Select(vr => (MyRow)vr.Row);
        IterateRows(rows);
        rows = _viewCol2Desc.Cast<DataRowView>().Take(_countToTake).Select(vr => (MyRow)vr.Row);
        IterateRows(rows);
    }

    private void SortWithLinq() {
        var rows = _table.OrderBy(row => row.Col1).Take(_countToTake);
        IterateRows(rows);
        rows = _table.OrderByDescending(row => row.Col2).Take(_countToTake);
        IterateRows(rows);
    }

    private void IterateRows(IEnumerable<MyRow> rows) {
        foreach (var row in rows)
            if (row == null)
                throw new Exception("????");
    }
}