C# - Comparing items between 2 lists

I've got two lists with the following identical fields (but different content):
TriangleID
Perimeter (in pixels)
My task is to extract the pairs of triangles whose perimeter difference is smaller than a fixed threshold.
I'd like to do it with Linq.

It's not Linq but the collections' size (N) that matters. In the worst case (all triangles equal) you have to return all possible pairs of triangles as the solution; that is
N * N
pairs. If you have N ~ 1e6 triangles you are going to obtain as many as a trillion (1e12) pairs, which is too much for a modern personal computer (on a supercomputer, however, you can try solving the problem).
Let's assume that you don't have the worst case and you expect to obtain at most ~ N pairs. You can do it like this (C# pseudocode):
// Sort triangles by their perimeters
firstList.Sort((t1, t2) => t1.Perimeter.CompareTo(t2.Perimeter));

foreach (Triangle left in secondList) {
    //TODO: you have to implement BinarySearchIndex (a sketch follows below)
    int from = firstList.BinarySearchIndex(left.Perimeter - threshold);
    int to = firstList.BinarySearchIndex(left.Perimeter + threshold);

    // Scan all triangles within these borders
    for (int i = from; i <= to; ++i) {
        Triangle right = firstList[i];

        // Return the pair if right and left are different triangles
        if (right.Id != left.Id)
            yield return Pair(left, right);
    }
}
Time complexity is
O(N * log(N))   /* sorting */
+ O(N * log(N)) /* foreach (N) * binary search (log N) * inner for (O(1), assuming not the worst case) */
= O(N * log(N))
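For the TODO above, here is a minimal sketch of the BinarySearchIndex helper the pseudocode assumes: a lower-bound binary search over the already-sorted perimeters. The name and shape come from the TODO, not from any library.
// Hypothetical helper assumed by the pseudocode above: returns the index of
// the first triangle whose perimeter is >= the given value (lower bound).
static int BinarySearchIndex(this List<Triangle> sorted, int perimeter)
{
    int lo = 0, hi = sorted.Count; // search window is [lo, hi)
    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid].Perimeter < perimeter)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo; // equals sorted.Count when every perimeter is smaller
}
Note that with this lower-bound form, the inclusive loop above should stop at the last index whose perimeter is still <= left.Perimeter + threshold; an upper-bound variant (first index strictly greater, minus one) gives that directly.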

I'm assuming it's ok to return a Triangle and associate a collection of nearby other Triangles, instead of returning a list of pairs.
The idea is to sort both lists and then iterate through them. The outer loop visits every item in the first list and compares its perimeter to items in the second list. But not every item in the second list needs to be checked: since both lists are sorted, you can scan the second list only until you no longer find perimeters within the cutoff. The other time-saving method is advancing the starting index of the second list based on the difference to the perimeter in the first list; this is left as an exercise for the reader.
public class Triangle
{
public int TriangleId {get; set;}
public int Perimeter {get; set;}
}
// Returns a dictionary in which each triangle has an associated list of other triangles
// with a perimeter within a specified distance. This list may be empty.
public Dictionary<Triangle, List<Triangle>> NearbyPerimeter(List<Triangle> primary, List<Triangle> compareList, int maxDistance)
{
// sort ~ O(n log n)
// The sort is required to make an orderly advance through both lists, otherwise
// every element needs to be compared to every other element.
var sorteda = primary.OrderBy(x => x.Perimeter);
// Call ToList to allow indexing with []
var sortedb = compareList.OrderBy(x => x.Perimeter).ToList();
var results = new Dictionary<Triangle, List<Triangle>>();
int minCompareIndex = 0;
int compareCount = compareList.Count;
// ~ O(n)
foreach (var tprime in sorteda)
{
var neighbors = new List<Triangle>();
// Add logic to advance minCompareIndex based on
// which is larger, tprime.Perimeter or sortedb[minCompareIndex].Perimeter
int i = minCompareIndex;
var foundMatch = false;
// Until the missing logic above is added, this is O(n) x O(n) so ~ O(n^2)
while (i < compareCount)
{
var second = sortedb[i];
if (Math.Abs(tprime.Perimeter - second.Perimeter) < maxDistance)
{
neighbors.Add(second);
foundMatch = true;
}
else if (foundMatch)
{
break;
}
i++;
}
results.Add(tprime, neighbors);
}
return results;
}

You can do it with Linq but with a small artefact - basically what you need is a Cartesian product of both collections and a filter on the differences. The Cartesian product can be obtained using Join and an always true condition.
The code below should do the trick (I'm assuming the lists contain a class called Triangle; if this is not the case adjust the code to your needs):
var results = list1.Join(list2,
        _ => true,
        _ => true,
        (t1, t2) => new { Triangle1 = t1, Triangle2 = t2 })
    .Where(pair => Math.Abs(pair.Triangle1.Perimeter - pair.Triangle2.Perimeter) < threshold)
    .Select(pair => new { /*…*/ });
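For what it's worth, the same Cartesian product is usually written with SelectMany instead of an always-true Join; here is an equivalent sketch (same list1, list2 and threshold as above):
var results = list1
    .SelectMany(t1 => list2, (t1, t2) => new { Triangle1 = t1, Triangle2 = t2 })
    .Where(pair => Math.Abs(pair.Triangle1.Perimeter - pair.Triangle2.Perimeter) < threshold);
Either way this is an O(N^2) scan, so the sorted approaches above will beat it on large inputs.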

Related

Group List into Ranges By Specific Amount

Let's say I have a List of items which look like this:
Number  Amount
1       10
2       12
5       5
6       9
9       4
10      3
11      1
I need the method to take in any number, even a decimal, and use that number to group the list into ranges based on it. So let's say my number was 1, the following output would be...
Ranges  Total
1-2     22
5-6     14
9-11    8
Because it basically grouped the numbers that are 1 away from each other into ranges. What's the most efficient way I can convert my list to look like the output?
There are a couple of approaches to this. Either you can partition the data and then sum on the partitions, or you can roll the whole thing into a single method.
Since partitioning is based on the gaps between the Number values you won't be able to work on unordered lists. Building the partition list on the fly isn't going to work if the list isn't ordered, so make sure you sort the list on the partition field before you start.
Partitioning
Once the list is ordered (or if it was pre-ordered) you can partition. I use this kind of extension method fairly often for breaking up ordered sequences into useful blocks, like when I need to grab sequences of entries from a log file.
public static partial class Ext
{
    public static IEnumerable<T[]> PartitionStream<T>(this IEnumerable<T> source, Func<T, T, bool> partitioner)
    {
        var partition = new List<T>();
        T prev = default;
        foreach (var next in source)
        {
            if (partition.Count > 0 && !partitioner(prev, next))
            {
                yield return partition.ToArray();
                partition.Clear();
            }
            partition.Add(prev = next);
        }
        if (partition.Count > 0)
            yield return partition.ToArray();
    }
}
The partitioner parameter compares two objects and returns true if they belong in the same partition. The extension method just collects all the members of the partition together and returns them as an array once it finds something for the next partition.
From there you can just do simple summing on the partition arrays:
var source = new (int n, int v)[] { (1,10),(2,12),(5,5),(6,9),(9,4),(10,3),(11,1) };
var maxDifference = 2;
var aggregate =
    from part in source.PartitionStream((l, r) => (r.n - l.n) <= maxDifference)
    let low = part.Min(p => p.n)
    let high = part.Max(p => p.n)
    select new { Ranges = $"{low}-{high}", Total = part.Sum(p => p.v) };
This gives the same output as your example.
Stream Aggregation
The second option is both simpler and more efficient since it does barely any memory allocations. The downside - if you can call it that - is that it's a lot less generic.
Rather than partitioning and aggregating over the partitions, this just walks through the list and aggregates as it goes, spitting out a result whenever the partition criterion is reached:
IEnumerable<(string Ranges, int Total)> GroupSum(IEnumerable<(int n, int v)> source, int maxDistance)
{
int low = int.MaxValue;
int high = 0;
int total = 0;
foreach (var (n, v) in source)
{
// check partition boundary
if (n < low || (n - high) > maxDistance)
{
if (n > low)
yield return ($"{low}-{high}", total);
low = high = n;
total = v;
}
else
{
high = n;
total += v;
}
}
if (total > 0)
yield return ($"{low}-{high}", total);
}
(Using ValueTuple so I don't have to declare types.)
Output is the same here, but with a lot less going on in the background to slow it down. No allocated arrays, etc.
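For reference, a quick usage sketch of GroupSum with the question's data (maxDistance = 1 matches the example):
var source = new (int n, int v)[] { (1,10), (2,12), (5,5), (6,9), (9,4), (10,3), (11,1) };
foreach (var (ranges, total) in GroupSum(source, 1))
    Console.WriteLine($"{ranges}  {total}");
// Prints:
// 1-2   22
// 5-6   14
// 9-11  8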

C# fastest intersection of 2 sets of sorted numbers

I'm calculating intersection of 2 sets of sorted numbers in a time-critical part of my application. This calculation is the biggest bottleneck of the whole application so I need to speed it up.
I've tried a bunch of simple options and am currently using this:
foreach (var index in firstSet)
{
if (secondSet.BinarySearch(index) < 0)
continue;
//do stuff
}
Both firstSet and secondSet are of type List<int>.
I've also tried using LINQ:
var intersection = firstSet.Where(t => secondSet.BinarySearch(t) >= 0).ToList();
and then looping through intersection.
But as both of these sets are sorted I feel there's a better way to do it. Note that I can't remove items from sets to make them smaller. Both sets usually consist of about 50 items each.
Please help me guys as I don't have a lot of time to get this thing done. Thanks.
NOTE: I'm doing this about 5.3 million times. So every microsecond counts.
If you have two sets which are both sorted, you can implement a faster intersection than anything provided out of the box with LINQ.
Basically, keep two IEnumerator<T> cursors open, one for each set. At any point, advance whichever has the smaller value. If they match at any point, advance them both, and so on until you reach the end of either iterator.
The nice thing about this is that you only need to iterate over each set once, and you can do it in O(1) memory.
Here's a sample implementation - untested, but it does compile :) It assumes that both of the incoming sequences are duplicate-free and sorted, both according to the comparer provided (pass in Comparer<T>.Default):
(There's more text at the end of the answer!)
static IEnumerable<T> IntersectSorted<T>(this IEnumerable<T> sequence1,
IEnumerable<T> sequence2,
IComparer<T> comparer)
{
using (var cursor1 = sequence1.GetEnumerator())
using (var cursor2 = sequence2.GetEnumerator())
{
if (!cursor1.MoveNext() || !cursor2.MoveNext())
{
yield break;
}
var value1 = cursor1.Current;
var value2 = cursor2.Current;
while (true)
{
int comparison = comparer.Compare(value1, value2);
if (comparison < 0)
{
if (!cursor1.MoveNext())
{
yield break;
}
value1 = cursor1.Current;
}
else if (comparison > 0)
{
if (!cursor2.MoveNext())
{
yield break;
}
value2 = cursor2.Current;
}
else
{
yield return value1;
if (!cursor1.MoveNext() || !cursor2.MoveNext())
{
yield break;
}
value1 = cursor1.Current;
value2 = cursor2.Current;
}
}
}
}
EDIT: As noted in comments, in some cases you may have one input which is much larger than the other, in which case you could potentially save a lot of time using a binary search for each element from the smaller set within the larger set. This requires random access to the larger set, however (it's just a prerequisite of binary search). You can even make it slightly better than a naive binary search by using the match from the previous result to give a lower bound to the binary search. So suppose you were looking for values 1000, 2000 and 3000 in a set with every integer from 0 to 19,999. In the first iteration, you'd need to look across the whole set - your starting lower/upper indexes would be 0 and 19,999 respectively. After you'd found a match at index 1000, however, the next step (where you're looking for 2000) can start with a lower index of 1001, since everything at or below the previous match can be excluded. As you progress, the range in which you need to search gradually narrows. Whether this is worth the extra implementation cost is a different matter, however.
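A rough sketch of that narrowing idea, assuming both inputs are sorted List<T>s with no duplicates (the method name and shape are mine, not from the answer above): each binary search is restricted to the part of the larger list after the previous result.
// Intersect a small sorted list against a much larger sorted list; each search
// starts where the previous one left off, so the window shrinks as we go.
static IEnumerable<T> IntersectBySearch<T>(List<T> small, List<T> large, IComparer<T> comparer)
{
    int lowerBound = 0;
    foreach (var item in small)
    {
        int index = large.BinarySearch(lowerBound, large.Count - lowerBound, item, comparer);
        if (index >= 0)
        {
            yield return item;
            lowerBound = index + 1; // later items can only match after this point
        }
        else
        {
            lowerBound = ~index;    // insertion point; everything before it is smaller
        }
    }
}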
Since both lists are sorted, you can arrive at the solution by iterating over them at most once (you may also get to skip part of one list, depending on the actual values they contain).
This solution keeps a "pointer" to the part of each list we have not yet examined, and compares the first unexamined number of each list. If one is smaller than the other, the pointer of the list it belongs to is incremented to point to the next number. If they are equal, the number is added to the intersection result and both pointers are incremented.
var firstCount = firstSet.Count;
var secondCount = secondSet.Count;
int firstIndex = 0, secondIndex = 0;
var intersection = new List<int>();
while (firstIndex < firstCount && secondIndex < secondCount)
{
var comp = firstSet[firstIndex].CompareTo(secondSet[secondIndex]);
if (comp < 0) {
++firstIndex;
}
else if (comp > 0) {
++secondIndex;
}
else {
intersection.Add(firstSet[firstIndex]);
++firstIndex;
++secondIndex;
}
}
The above is a textbook C-style approach of solving this particular problem, and given the simplicity of the code I would be surprised to see a faster solution.
You're using a rather inefficient Linq method for this sort of task; you should opt for Intersect as a starting point.
var intersection = firstSet.Intersect(secondSet);
Try this. If you measure it for performance and still find it unwieldy, cry for further help (or perhaps follow Jon Skeet's approach).
I was using Jon's approach but needed to execute this intersect hundreds of thousands of times for a bulk operation on very large sets and needed more performance. The case I was running into was heavily imbalanced sizes of the lists (e.g. 5 and 80,000) and I wanted to avoid iterating the entire large list.
I found that detecting the imbalance and switching to an alternate algorithm gave me huge benefits on specific data sets:
public static IEnumerable<T> IntersectSorted<T>(this List<T> sequence1,
List<T> sequence2,
IComparer<T> comparer)
{
List<T> smallList = null;
List<T> largeList = null;
if (sequence1.Count < Math.Log(sequence2.Count, 2))
{
smallList = sequence1;
largeList = sequence2;
}
else if (sequence2.Count < Math.Log(sequence1.Count, 2))
{
smallList = sequence2;
largeList = sequence1;
}
if (smallList != null)
{
foreach (var item in smallList)
{
if (largeList.BinarySearch(item, comparer) >= 0)
{
yield return item;
}
}
}
else
{
//Use Jon's method
}
}
I am still unsure about the point at which you break even; I need to do some more testing.
try
firstSet.Intersect(secondSet).ToList()
or
firstSet.Join(secondSet, o => o, id => id, (o, id) => o)

How to merge 2 sorted lists into one shuffled list while keeping internal order in C#

I want to generate a shuffled merged list that will keep the internal order of the lists.
For example:
list A: 11 22 33
list B: 6 7 8
valid result: 11 22 6 33 7 8
invalid result: 22 11 7 6 33 8
Just randomly select a list (e.g. generate a random number between 0 and 1; if it's < 0.5 pick list A, otherwise list B), then take the next element from that list and add it to your new list. Repeat until neither list has any elements left.
Generate A.Length random integers in the interval [0, B.Length). Sort the random numbers, then iterate i from 0..A.Length, adding A[i] into position r[i]+i in B. The +i is because you're shifting the original values in B to the right as you insert values from A.
This will be as random as your RNG.
None of the answers provided on this page work if you need the outputs to be uniformly distributed.
To illustrate my examples, assume we are merging two lists A=[1,2,3], B=[a,b,c]
In the approach mentioned in most answers (i.e. merging two lists a la mergesort, but choosing a list head randomly each time), the output [1 a 2 b 3 c] is far less likely than [1 2 3 a b c]. Intuitively, this happens because when you run out of elements in a list, the elements of the other list are appended at the end. Because of that, the probability of [1 2 3 a b c] is 0.5*0.5*0.5 = 0.5^3 = 0.125, while [1 a 2 b 3 c] requires more random events: a random head has to be picked 5 times instead of just 3, leaving us with a probability of 0.5^5 = 0.03125. An empirical evaluation also easily validates these results.
The answer suggested by #marcog is almost correct. However, there is an issue where the distribution of r is not uniform after sorting it. This happens because many original lists (e.g. [0,1,2], [1,0,2], [2,1,0]) all get sorted into [0,1,2], making this sorted r more likely than, for example, [0,0,0], for which there is only one possibility.
There is a clever way of generating the list r in such a way that it is uniformly distributed, as seen in this Math StackExchange question: https://math.stackexchange.com/questions/3218854/randomly-generate-a-sorted-set-with-uniform-distribution
To summarize the answer to that question, you must sample |B| elements (uniformly at random, and without repetition) from the set {0,1,..|A|+|B|-1}, sort the result and then subtract from each element its index in this new list. The result is the list r that can be used as a replacement in #marcog's answer.
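A small sketch of that recipe (the helper name is mine): pick k distinct values from {0, ..., n-1}, sort them, and subtract each element's index, which yields a uniformly distributed sorted list in which repeats are allowed.
// k = how many insertion points you need, n = |A| + |B|; requires k <= n.
static List<int> UniformSortedIndexes(int k, int n, Random random)
{
    // Partial Fisher-Yates shuffle: the first k slots end up holding
    // k distinct values sampled uniformly from 0..n-1.
    var pool = Enumerable.Range(0, n).ToArray();
    for (int i = 0; i < k; i++)
    {
        int j = random.Next(i, n);
        (pool[i], pool[j]) = (pool[j], pool[i]);
    }
    var picks = pool.Take(k).OrderBy(x => x).ToList();
    for (int i = 0; i < k; i++)
        picks[i] -= i; // remove the "distinctness" offset, allowing repeats
    return picks;
}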
Original Answer:
static IEnumerable<T> MergeShuffle<T>(IEnumerable<T> lista, IEnumerable<T> listb)
{
var first = lista.GetEnumerator();
var second = listb.GetEnumerator();
var rand = new Random();
bool exhaustedA = false;
bool exhaustedB = false;
while (!(exhaustedA && exhaustedB))
{
bool found = false;
if (!exhaustedB && (exhaustedA || rand.Next(0, 2) == 0))
{
exhaustedB = !(found = second.MoveNext());
if (found)
yield return second.Current;
}
if (!found && !exhaustedA)
{
exhaustedA = !(found = first.MoveNext());
if (found)
yield return first.Current;
}
}
}
Second answer based on marcog's answer
static IEnumerable<T> MergeShuffle<T>(IEnumerable<T> lista, IEnumerable<T> listb)
{
int total = lista.Count() + listb.Count();
var random = new Random();
// Range must cover all 'total' positions: 0 .. total-1
var indexes = Enumerable.Range(0, total)
    .OrderBy(_ => random.NextDouble())
    .Take(lista.Count())
    .OrderBy(x => x)
    .ToList();
var first = lista.GetEnumerator();
var second = listb.GetEnumerator();
for (int i = 0; i < total; i++)
if (indexes.Contains(i))
{
first.MoveNext();
yield return first.Current;
}
else
{
second.MoveNext();
yield return second.Current;
}
}
Rather than generating a list of indices, this can be done by adjusting the probabilities based on the number of elements left in each list. On each iteration, A will have A_size elements remaining, and B will have B_size elements remaining. Choose a random number R from 1..(A_size + B_size). If R <= A_size, then use an element from A as the next element in the output. Otherwise use an element from B.
int A[] = {11, 22, 33}, A_pos = 0, A_remaining = 3;
int B[] = {6, 7, 8}, B_pos = 0, B_remaining = 3;
while (A_remaining || B_remaining) {
int r = rand() % (A_remaining + B_remaining);
if (r < A_remaining) {
printf("%d ", A[A_pos++]);
A_remaining--;
} else {
printf("%d ", B[B_pos++]);
B_remaining--;
}
}
printf("\n");
As a list gets smaller, the probability an element gets chosen from it will decrease.
This can be scaled to multiple lists. For example, given lists A, B, and C with sizes A_size, B_size, and C_size, choose R in 1..(A_size+B_size+C_size). If R <= A_size, use an element from A. Otherwise, if R <= A_size+B_size use an element from B. Otherwise C.
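A C# sketch of that multi-list generalization (the method name is mine): each step draws r across the total of the remaining counts, so a list's chance of being picked is proportional to how many elements it has left.
// Merge any number of lists while preserving each list's internal order.
static IEnumerable<T> MergeProportional<T>(Random random, params IReadOnlyList<T>[] lists)
{
    var pos = new int[lists.Length];          // next index to read in each list
    int remaining = lists.Sum(l => l.Count);  // total elements still unmerged
    while (remaining > 0)
    {
        int r = random.Next(remaining);       // uniform over all remaining elements
        for (int i = 0; i < lists.Length; i++)
        {
            int left = lists[i].Count - pos[i];
            if (r < left)
            {
                yield return lists[i][pos[i]++];
                remaining--;
                break;
            }
            r -= left; // skip past this list's share and try the next one
        }
    }
}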
Here is a solution that ensures a uniformly distributed output, and is easy to reason why. The idea is first to generate a list of tokens, where each token represent an element of a specific list, but not a specific element. For example for two lists having 3 elements each, we generate this list of tokens: 0, 0, 0, 1, 1, 1. Then we shuffle the tokens. Finally we yield an element for each token, selecting the next element from the corresponding original list.
public static IEnumerable<T> MergeShufflePreservingOrder<T>(
params IEnumerable<T>[] sources)
{
var random = new Random();
var queues = sources
.Select(source => new Queue<T>(source))
.ToArray();
var tokens = queues
.SelectMany((queue, i) => Enumerable.Repeat(i, queue.Count))
.ToArray();
Shuffle(tokens);
return tokens.Select(token => queues[token].Dequeue());
void Shuffle(int[] array)
{
for (int i = 0; i < array.Length; i++)
{
int j = random.Next(i, array.Length);
if (i == j) continue;
if (array[i] == array[j]) continue;
var temp = array[i];
array[i] = array[j];
array[j] = temp;
}
}
}
Usage example:
var list1 = "ABCDEFGHIJKL".ToCharArray();
var list2 = "abcd".ToCharArray();
var list3 = "#".ToCharArray();
var merged = MergeShufflePreservingOrder(list1, list2, list3);
Console.WriteLine(String.Join("", merged));
Output:
ABCDaEFGHIb#cJKLd
This might be easier, assuming you have a list of three values in order that match 3 values in another table.
You can also generate the sequence using an identity column, identity(1,1):
Create TABLE #tmp1 (ID int identity(1,1),firstvalue char(2),secondvalue char(2))
Create TABLE #tmp2 (ID int identity(1,1),firstvalue char(2),secondvalue char(2))
Insert into #tmp1(firstvalue,secondvalue) Select firstvalue,null secondvalue from firsttable
Insert into #tmp2(firstvalue,secondvalue) Select null firstvalue,secondvalue from secondtable
Select a.firstvalue,b.secondvalue from #tmp1 a join #tmp2 b on a.id=b.id
DROP TABLE #tmp1
DROP TABLE #tmp2

How to avoid OrderBy - memory usage problems

Let's assume we have a large list of points List<Point> pointList (already stored in memory) where each Point contains X, Y, and Z coordinates.
Now, I would like to select for example the N% of points with the biggest Z-values of all points stored in pointList. Right now I'm doing it like that:
double N = 0.05; // selecting only 5% of points
double cutoffValue = pointList
    .OrderBy(p => p.Z) // First bottleneck - creates sorted copy of all data
    .ElementAt((int)(pointList.Count * (1 - N))).Z;
List<Point> selectedPoints = pointList.Where(p => p.Z >= cutoffValue).ToList();
But I have two memory usage bottlenecks here: the first during OrderBy (more important) and the second while selecting the points (less important, because we usually want to select only a small number of points).
Is there any way of replacing OrderBy (or maybe another way of finding this cutoff point) with something that uses less memory?
The problem is quite important, because LINQ copies the whole dataset, and for the big files I'm processing it sometimes hits a few hundred MB.
Write a method that iterates through the list once and maintains a set of the M largest elements. Each step will only require O(log M) work to maintain the set, and you can have O(M) memory and O(N log M) running time.
public static IEnumerable<TSource> TakeLargest<TSource, TKey>
(this IEnumerable<TSource> items, Func<TSource, TKey> selector, int count)
{
var set = new SortedDictionary<TKey, List<TSource>>();
var resultCount = 0;
var first = default(KeyValuePair<TKey, List<TSource>>);
foreach (var item in items)
{
// If the key is already smaller than the smallest
// item in the set, we can ignore this item
var key = selector(item);
if (first.Value == null ||
resultCount < count ||
Comparer<TKey>.Default.Compare(key, first.Key) >= 0)
{
// Add next item to set
if (!set.ContainsKey(key))
{
set[key] = new List<TSource>();
}
set[key].Add(item);
if (first.Value == null)
{
first = set.First();
}
// Remove smallest item from set
resultCount++;
if (resultCount - first.Value.Count >= count)
{
set.Remove(first.Key);
resultCount -= first.Value.Count;
first = set.First();
}
}
}
return set.Values.SelectMany(values => values);
}
That will include more than count elements if there are ties, as your implementation does now.
You could sort the list in place, using List<T>.Sort, which uses the Quicksort algorithm. But of course, your original list would be sorted, which is perhaps not what you want...
pointList.Sort((a, b) => b.Z.CompareTo(a.Z));
var selectedPoints = pointList.Take((int)(pointList.Count * N)).ToList();
If you don't mind the original list being sorted, this is probably the best balance between memory usage and speed
You can use Indexed LINQ to put index on the data which you are processing. This can result in noticeable improvements in some cases.
If you combine the two there is a chance a little less work will be done:
List<Point> selectedPoints = pointList
    .OrderByDescending(p => p.Z) // First bottleneck - creates sorted copy of all data
    .Take((int)(pointList.Count * N))
    .ToList();
But basically this kind of ranking requires sorting, your biggest cost.
A few more ideas:
if you use a class Point (instead of a struct Point) there will be much less copying.
you could write a custom sort that only bothers to move the top 5% up. Something like (don't laugh) BubbleSort; a sketch follows below.
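That bubble idea could look like this sketch (my code, assuming the Point type from the question): k passes, each floating one more of the largest Z values to the front, O(n*k) overall.
// After k passes the k points with the largest Z occupy the first k slots,
// in descending order; the rest of the list is left unsorted.
static void PartialBubbleSortByZ(List<Point> points, int k)
{
    for (int pass = 0; pass < k; pass++)
    {
        for (int i = points.Count - 1; i > pass; i--)
        {
            if (points[i].Z > points[i - 1].Z)
            {
                var tmp = points[i];
                points[i] = points[i - 1];
                points[i - 1] = tmp;
            }
        }
    }
}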
If your list is in memory already, I would sort it in place instead of making a copy - unless you need it un-sorted again, that is, in which case you'll have to weigh having two copies in memory against loading it again from storage:
pointList.Sort((x,y) => y.Z.CompareTo(x.Z)); //this should sort it in desc. order
Also, not sure how much it will help, but it looks like you're going through your list twice - once to find the cutoff value, and once again to select them. I assume you're doing that because you want to let all ties through, even if it means selecting more than 5% of the points. However, since they're already sorted, you can use that to your advantage and stop when you're finished.
double cutoffValue = pointList[(int)(pointList.Count * N)].Z; // index into the descending-sorted list
List<Point> selectedPoints = pointList.TakeWhile(p => p.Z >= cutoffValue)
    .ToList();
Unless your list is extremely large, it's much more likely to me that cpu time is your performance bottleneck. Yes, your OrderBy() might use a lot of memory, but it's generally memory that for the most part is otherwise sitting idle. The cpu time really is the bigger concern.
To improve cpu time, the most obvious thing here is to not use a list. Use an IEnumerable instead. You do this by simply not calling .ToList() at the end of your where query. This will allow the framework to combine everything into one iteration of the list that runs only as needed. It will also improve your memory use because it avoids loading the entire query into memory at once, and instead defers it to only load one item at a time as needed. Also, use .Take() rather than .ElementAt(). It's a lot more efficient.
double N = 0.05; // selecting only 5% of points
int count = (int)(N * pointList.Count);
var selectedPoints = pointList.OrderByDescending(p => p.Z).Take(count);
That out of the way, there are three cases where memory use might actually be a problem:
Your collection really is so large as to fill up memory. For a simple Point structure on a modern system we're talking millions of items. This is really unlikely. On the off chance you have a system this large, your solution is to use a relational database, which can keep these items on disk relatively efficiently.
You have a moderate size collection, but there are external performance constraints, such as needing to share system resources with many other processes as you might find in an asp.net web site. In this case, the answer is either to 1) again put the points in a relational database or 2) offload the work to the client machines.
Your collection is just large enough to end up on the Large Object Heap, and the buffer used by the OrderBy() call is also placed on the LOH. Now what happens is that the garbage collector will not properly compact memory after your OrderBy() call, and over time you get a lot of memory that is not used but still reserved by your program. In this case, the solution is, unfortunately, to break your collection up into multiple groups that are each individually small enough not to trigger use of the LOH.
Update:
Reading through your question again, I see you're reading very large files. In that case, the best performance can be obtained by writing your own code to parse the files. If the count of items is stored near the top of the file you can do much better, or even if you can estimate the number of records based on the size of the file (guess a little high to be sure, and then truncate any extras after finishing), you can then build your final collection as you read. This will greatly improve cpu performance and memory use.
I'd do it by implementing "half" a quicksort.
Consider your original set of points, P, where you are looking for the "top" N items by Z coordinate.
Choose a pivot x in P.
Partition P into L = {y in P | y < x} and U = {y in P | x <= y}.
If N = |U| then you're done.
If N < |U| then recurse with P := U.
Otherwise you need to add some items to U: recurse with N := N - |U|, P := L to add the remaining items.
If you choose your pivot wisely (e.g., median of, say, five random samples) then this will run in O(n log n) time.
Hmmmm, thinking some more, you may be able to avoid creating new sets altogether, since essentially you're just looking for an O(n log n) way of finding the Nth greatest item from the original set. Yes, I think this would work, so here's suggestion number 2:
Make a traversal of P, finding the least and greatest items, A and Z, respectively.
Let M be the mean of A and Z (remember, we're only considering Z coordinates here).
Count how many items there are in the range [M, Z], call this Q.
If Q < N then the Nth greatest item in P is somewhere in [A, M). Try M := (A + M)/2.
If N < Q then the Nth greatest item in P is somewhere in [M, Z]. Try M := (M + Z)/2.
Repeat until we find an M such that Q = N.
Now traverse P, collecting all items greater than or equal to M.
That's definitely O(n log n) and creates no extra data structures (except for the result).
Howzat?
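A compact sketch of suggestion number 2 (names mine; it caps the iterations, since ties among Z values can make an exact count of N unreachable):
// Binary search on the Z *value*: find a cutoff M with (about) n points >= M.
// Each iteration is one counting pass over the list, O(1) extra memory.
static double FindCutoff(List<Point> points, int n)
{
    double a = points.Min(p => p.Z);
    double z = points.Max(p => p.Z);
    double m = z;
    for (int iter = 0; iter < 64; iter++) // cap the search in case q never hits n exactly
    {
        m = (a + z) / 2;
        int q = points.Count(p => p.Z >= m);
        if (q == n) break;
        if (q < n) z = m; // too few points above m: the cutoff must be lower
        else a = m;       // too many points above m: the cutoff must be higher
    }
    return m;
}
// Then: var selectedPoints = pointList.Where(p => p.Z >= FindCutoff(pointList, n));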
You might use something like this:
pointList.Sort(); // Use your own comparer here if needed
// Skip OrderBy because the list is sorted (and not copied)
double cutoffValue = pointList[(int)(pointList.Count * (1 - N))].Z;
// Skip ToList to avoid another copy of the list
IEnumerable<Point> selectedPoints = pointList.Where(p => p.Z >= cutoffValue);
If you want a small percentage of points ordered by some criterion, you'll be better served using a Priority queue data structure; create a size-limited queue(with the size set to however many elements you want), and then just scan through the list inserting every element. After the scan, you can pull out your results in sorted order.
This has the benefit of being O(n log p) instead of O(n log n) where p is the number of points you want, and the extra storage cost is also dependent on your output size instead of the whole list.
int resultSize = (int)(pointList.Count * N); // we want the top N% of the points
FixedSizedPriorityQueue<Point> q =
new FixedSizedPriorityQueue<Point>(resultSize, p => p.Z);
q.AddEach(pointList);
List<Point> selectedPoints = q.ToList();
Now all you have to do is implement a FixedSizedPriorityQueue that adds elements one at a time and discards the smallest element when it is full (since we are keeping the points with the biggest Z-values).
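One possible sketch of that class; the name and the AddEach/ToList surface come from the snippet above, while the internals are my assumption, leaning on .NET 6's built-in PriorityQueue (a min-heap), so the smallest kept key is always the one evicted:
// Keeps only the 'capacity' elements with the largest keys seen so far.
class FixedSizedPriorityQueue<T>
{
    private readonly int _capacity;
    private readonly Func<T, double> _keySelector;
    private readonly PriorityQueue<T, double> _heap = new(); // min-heap on key

    public FixedSizedPriorityQueue(int capacity, Func<T, double> keySelector)
    {
        _capacity = capacity;
        _keySelector = keySelector;
    }

    public void Add(T item)
    {
        double key = _keySelector(item);
        if (_heap.Count < _capacity)
            _heap.Enqueue(item, key);
        else
            _heap.EnqueueDequeue(item, key); // insert, then drop the smallest key
    }

    public void AddEach(IEnumerable<T> items)
    {
        foreach (var item in items)
            Add(item);
    }

    public List<T> ToList() =>
        _heap.UnorderedItems.Select(e => e.Element).ToList();
}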
You wrote that you are working with a DataSet. If so, you can use a DataView to sort your data once and use it for all future accesses to the rows.
Just tried it with 50,000 rows, accessing a fraction of them (Rows.Count / 30, as in the code below) 100 times. My performance results are:
Sort With Linq: 5.3 seconds
Use DataViews: 0.01 seconds
Give it a try.
[TestClass]
public class UnitTest1 {
class MyTable : TypedTableBase<MyRow> {
public MyTable() {
Columns.Add("Col1", typeof(int));
Columns.Add("Col2", typeof(int));
}
protected override DataRow NewRowFromBuilder(DataRowBuilder builder) {
return new MyRow(builder);
}
}
class MyRow : DataRow {
public MyRow(DataRowBuilder builder) : base(builder) {
}
public int Col1 { get { return (int)this["Col1"]; } }
public int Col2 { get { return (int)this["Col2"]; } }
}
DataView _viewCol1Asc;
DataView _viewCol2Desc;
MyTable _table;
int _countToTake;
[TestMethod]
public void MyTestMethod() {
_table = new MyTable();
int count = 50000;
for (int i = 0; i < count; i++) {
_table.Rows.Add(i, i);
}
_countToTake = _table.Rows.Count / 30;
Console.WriteLine("SortWithLinq");
RunTest(SortWithLinq);
Console.WriteLine("Use DataViews");
RunTest(UseSortedDataViews);
}
private void RunTest(Action method) {
int iterations = 100;
Stopwatch watch = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++) {
method();
}
watch.Stop();
Console.WriteLine(" {0}", watch.Elapsed);
}
private void UseSortedDataViews() {
if (_viewCol1Asc == null) {
_viewCol1Asc = new DataView(_table, null, "Col1 ASC", DataViewRowState.Unchanged);
_viewCol2Desc = new DataView(_table, null, "Col2 DESC", DataViewRowState.Unchanged);
}
var rows = _viewCol1Asc.Cast<DataRowView>().Take(_countToTake).Select(vr => (MyRow)vr.Row);
IterateRows(rows);
rows = _viewCol2Desc.Cast<DataRowView>().Take(_countToTake).Select(vr => (MyRow)vr.Row);
IterateRows(rows);
}
private void SortWithLinq() {
var rows = _table.OrderBy(row => row.Col1).Take(_countToTake);
IterateRows(rows);
rows = _table.OrderByDescending(row => row.Col2).Take(_countToTake);
IterateRows(rows);
}
private void IterateRows(IEnumerable<MyRow> rows) {
foreach (var row in rows)
if (row == null)
throw new Exception("????");
}
}

Linq to find pair of points with longest length?

I have the following code:
foreach (Tuple<Point, Point> pair in pointsCollection)
{
var points = new List<Point>()
{
pair.Item1,
pair.Item2
};
}
Within this foreach, I would like to be able to determine which pair of points has the most significant length between the coordinates for each point within the pair.
So, let's say that points are made up of the following pairs:
(1) var points = new List<Point>()
{
new Point(0,100),
new Point(100,100)
};
(2) var points = new List<Point>()
{
new Point(150,100),
new Point(200,100)
};
So I have two sets of pairs, mentioned above. They both will plot a horizontal line. I am interested in knowing what the best approach would be to find the pair of points that have the greatest distance between them, whether it is vertically or horizontally. In the two examples above, the first pair of points has a difference of 100 between the X coordinates, so that would be the pair with the most significant difference. But if I have a collection of pairs of points, where some pairs plot a vertical line and some plot a horizontal line, what would be the best approach for retrieving the pair from the set of points whose difference, again vertically or horizontally, is the greatest among all of the points in the collection?
Thanks!
Chris
Use OrderBy to create an ordering based on your criteria, then select the first one. In this case order by the maximum absolute difference between the horizontal and vertical components in descending order.
EDIT: Actually, I think you should be doing this on the Tuples themselves, right? I'll work on adapting the example to that.
First, let's add an extension for Tuple<Point,Point> to calculate its length.
public static class TupleExtensions
{
    public static double Length(this Tuple<Point, Point> tuple)
    {
        var first = tuple.Item1;
        var second = tuple.Item2;
        double deltaX = first.X - second.X;
        double deltaY = first.Y - second.Y;
        return Math.Sqrt(deltaX * deltaX + deltaY * deltaY);
    }
}
Now we can order the tuples by their length
var max = pointCollection.OrderByDescending( t => t.Length() )
.FirstOrDefault();
Note: it is faster to just iterate over the collection and keep track of the maximum rather than sorting/selecting with LINQ.
Tuple<Point,Point> max = null;
foreach (var tuple in pointCollection)
{
if (max == null || tuple.Length() > max.Length())
{
max = tuple;
}
}
Obviously, this could be refactored to an IEnumerable extension if you used it in more than one place.
You'll need a function probably using the pythagorean theorem to calculate the distances
a^2 + b^2 = c^2
Where a would be the difference in Point.X, b would be the difference in Point.Y, and c would be your distance. And once that function has been written, then you can go to LINQ and order on the results.
Here's what I did. (Note: I do not have C# 4, so it's not apples to apples.)
private double GetDistance(Point a, Point b)
{
return Math.Pow(Math.Pow(Math.Abs(a.X - b.X), 2) + Math.Pow(Math.Abs(a.Y - b.Y), 2), 0.5);
}
You can turn that into an anonymous method or Func if you prefer, obviously.
var query = pointlistCollection.OrderByDescending(pair => GetDistance(pair[0], pair[1])).First();
Where pointlistCollection is a List<List<Point>>, each inner list having two items. Quick example, but it works.
List<List<Point>> pointlistCollection
= new List<List<Point>>()
{
new List<Point>() { new Point(0,0), new Point(3,4)},
new List<Point>() { new Point(5,5), new Point (3,7)}
};
Here is my GetDistance function in Func form.
Func<Point, Point, double> getDistance
= (a, b)
=> Math.Pow(Math.Pow(Math.Abs(a.X - b.X), 2) + Math.Pow(Math.Abs(a.Y - b.Y), 2), 0.5);
var query = pointlistCollection.OrderByDescending(pair => getDistance(pair[0], pair[1])).First();
As commented above: Don't sort the list in order to get a maximum.
public static double Norm(Point x, Point y)
{
return Math.Sqrt(Math.Pow(x.X - y.X, 2) + Math.Pow(x.Y - y.Y, 2));
}
Max() needs only O(n) instead of O(n*log n)
pointsCollection.Max(t => Norm(t.Item1, t.Item2));
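Note that Max() here returns the longest length itself, not the winning pair; on .NET 6 or later, MaxBy returns the pair directly (same Norm helper as above):
Tuple<Point, Point> longest = pointsCollection.MaxBy(t => Norm(t.Item1, t.Item2));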
tvanfosson's answer is good; however, I would like to suggest a slight improvement: you don't actually need to sort the collection to find the max, you just have to enumerate the collection and keep track of the maximum value. Since it's a very common scenario, I wrote an extension method to handle it:
public static class EnumerableExtensions
{
public static T WithMax<T, TValue>(this IEnumerable<T> source, Func<T, TValue> selector)
{
var max = default(TValue);
var withMax = default(T);
bool first = true;
foreach (var item in source)
{
var value = selector(item);
int compare = Comparer<TValue>.Default.Compare(value, max);
if (compare > 0 || first)
{
max = value;
withMax = item;
}
first = false;
}
return withMax;
}
}
You can then do something like that :
Tuple<Point, Point> max = pointCollection.WithMax(t => t.Length());
