Take a custom IComparer that treats two doubles as equal if their difference is less than a given epsilon.
What would happen if this IComparer were used in an OrderBy().ThenBy() clause?
Specifically I am thinking of the following implementation:
public class EpsilonComparer : IComparer<double>
{
private readonly double epsilon;
public EpsilonComparer(double epsilon)
{
this.epsilon = epsilon;
}
public int Compare(double d1, double d2)
{
if (Math.Abs(d1-d2)<=epsilon) return 0;
return d1.CompareTo(d2);
}
}
Now this comparer's equality relation is clearly not transitive (transitivity would require: if a ~ b and b ~ c, then a ~ c).
With epsilon == 0.6:
Compare(1, 1.5) == 0
Compare(1.5, 2) == 0
yet
Compare(1, 2) == -1
What would happen if this IComparer was used in an OrderBy query, like this:
List<Item> itemList;
itemList = itemList.OrderBy(item => item.X, new EpsilonComparer(0.352))
.ThenBy(item => item.Y, new EpsilonComparer(1.743)).ToList();
Would the sort behave as one would expect, sorting the list first by X, then by Y, while treating roughly equal values as exactly equal?
Would it blow up under certain circumstances?
Or is this whole sort ill-defined?
What exactly are the consequences of using an IComparer without transitivity?
(I know that this is most likely undefined behavior of the C# language. I am still very much interested in an answer.)
And is there an alternative way to get this sorting behaviour?
(besides rounding the values, which would introduce artifacts when for two close doubles one is rounded up and the other down)
An online fiddle of the code in this question is available here:
The problem is that the first sorting level (on X) can result in different orders already. Imagine that all items are within one epsilon of each other. Then all sort orders are consistent with your comparer because it will always return 0. The sort algorithm could flip coins and still provide a "right" answer. Not useful.
If the first level is arbitrarily sorted, you cannot expect the 2nd sorting level to work.
Of course, all of this discussion is moot because you are violating the precondition of the sorting API. Even if it happened to work, you couldn't be sure that it would work a) on all data, b) on all future releases of .NET.
How can you still achieve your goal? Your problem is just ill-defined because many solutions are possible. I get what you want to achieve but it is not possible with your current definition of the problem.
I propose this: sort all items by X (without epsilon). Then traverse the sorted items left-to-right and merge them into groups that are at most one epsilon wide. That gives you groups of items whose X-values are at most epsilon apart.
You can then use the group number as the first sorting level. It is just a simple integer, so no trouble sorting on it. For the Y field, you can then use a normal comparer without epsilon (or repeat the same trick).
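A rough sketch of this grouping idea, reusing the Item, X and Y names from the question (the epsilon value and the helper variables are only illustrative):
double epsilonX = 0.352;
var byX = itemList.OrderBy(item => item.X).ToList();
var grouped = new List<Tuple<int, Item>>();   // (group number, item)
int group = 0;
double groupStart = double.NegativeInfinity;
foreach (var item in byX)
{
    if (item.X - groupStart > epsilonX)   // this item starts a new group
    {
        group++;
        groupStart = item.X;
    }
    grouped.Add(Tuple.Create(group, item));
}
itemList = grouped.OrderBy(t => t.Item1)    // the group number replaces the epsilon comparison on X
                  .ThenBy(t => t.Item2.Y)   // plain comparison, or repeat the grouping trick for Y
                  .Select(t => t.Item2)
                  .ToList();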
View my code snippet below; it covers only the first sorting level and is not optimized.
OrderBy and ThenBy use a general-purpose algorithm. You would need to re-implement OrderBy and ThenBy with a special algorithm like mine; then it could work as OrderBy().ThenBy().
Details of the algorithm:
In a sequence (x1, x2, x3, ...) sorted under your EpsilonComparer: if x4 > x1, then x5 > x1; if x4 = x1, then x3 = x1 and either x5 > x1 or x5 = x1.
With epsilon = 0.4, input the following numbers: 0.1, 0.6, 1, 1.1, 1.6, 2, 2, 2.6, 3, 3.1, 3.6, 4, 4.1, 4.6, 5, 5.1, 5.6, 6, 6.1, 6.6
Result: 0.1 0.6 1 1.1 (1.6 2 2) 2.6 3 3.1 3.6 4 4.1 4.6 (5 5.1) 5.6 (6 6.1) 6.6
(a, b, c) indicates that these numbers are equal and their order is not fixed; they can appear as (a, b, c), (c, a, b) or in any other order.
a b indicates a < b and the order is fixed.
using System;
using System.Collections.Generic;
using System.Linq;
namespace Rextester
{
class Program
{
public static void Main(string[] args)
{
new EpsilonSort(new EpsilonComparer(0.4), 0.1, 0.6, 1, 1.1, 1.6, 2, 2, 2.6, 3, 3.1, 3.6, 4, 4.1, 4.6, 5, 5.1, 5.6, 6, 6.1, 6.6).Sort();
}
}
public class EpsilonSort
{
private readonly IComparer<double> m_comparer;
private readonly double[] m_nums;
public EpsilonSort(IComparer<double> comparer, params double[] nums)
{
this.m_comparer = comparer;
this.m_nums = nums;
}
public void Sort()
{
Node root = new Node();
root.Datas = new List<double>(this.m_nums);
foreach (double i in (double[])this.m_nums.Clone())
{
this.ProcessNode(i, root);
}
this.OutputNodes(root);
}
private void OutputNodes(Node root)
{
if (root.Datas == null)
{
foreach (var i in root.Nodes)
{
this.OutputNodes(i);
}
}
else
{
if (root.Datas.Count == 1)
{
Console.WriteLine(root.Datas[0]);
}
else
{
Console.Write('(');
foreach (var i in root.Datas)
{
Console.Write(i);
Console.Write(' ');
}
Console.WriteLine(')');
}
}
}
private void ProcessNode(double value, Node one)
{
if (one.Datas == null)
{
foreach (var i in one.Nodes)
{
this.ProcessNode(value, i);
}
}
else
{
Node[] childrennodes = new Node[3];
foreach (var i in one.Datas)
{
int direction = this.m_comparer.Compare(i, value);
if (direction == 0)
{
this.AddData(ref childrennodes[1], i);
}
else
{
if (direction < 0)
{
this.AddData(ref childrennodes[0], i);
}
else
{
this.AddData(ref childrennodes[2], i);
}
}
}
childrennodes = childrennodes.Where(x => x != null).ToArray();
if (childrennodes.Length >= 2)
{
one.Datas = null;
one.Nodes = childrennodes;
}
}
}
private void AddData(ref Node node, double value)
{
node = node ?? new Node();
node.Datas = node.Datas ?? new List<double>();
node.Datas.Add(value);
}
private class Node
{
public Node[] Nodes;
public List<double> Datas;
}
}
public class EpsilonComparer : IComparer<double>
{
private readonly double epsilon;
public EpsilonComparer(double epsilon)
{
this.epsilon = epsilon;
}
public int Compare(double d1, double d2)
{
if (Math.Abs(d1 - d2) <= epsilon) return 0;
return d1.CompareTo(d2);
}
}
}
Related
I need to return a value that corresponds to some weighting for a calculation based on age.
Here's the age ranges and weights:-
21-30: 1.2
31-40: 1.8
41-50: 1.9
and so on (there's no real pattern)
The program needs to take an age as input and then return the weighting (e.g. if age = 35, the return value would be 1.8).
How would this be best achieved?
I could use switch but I'm not sure if it's the best way around this. Is there some other construct or technique I could apply in C# to achieve this that would be more effective and portable/scalable should the weightings change?
One other thing - I can't use a database to store any weightings data - just adding this for info.
Create a Dictionary to define your age ranges and weights. The key should be a Tuple with the min-age and max-age for this range and the value should be your weight:
Dictionary<Tuple<int,int>, double> // minAge, MaxAge -> weight
Then you may loop through all keys to find your weight.
You may create this dictionary from the contents of a table, an XML file, a database, whatever.
We have done something similar in a system here, and we use the concept of storing the weighting and the lower threshold in a database table. Thus all we need to do is find the record with the highest lower threshold that does not exceed the value entered and read the weight.
This simplifies the process and allows for editing, adding and removing the values.
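The same lookup can be done in memory; here is a rough sketch with a hypothetical list of (lower threshold, weight) pairs:
// (lower threshold, weight) pairs, as they might come back from such a table
var thresholds = new List<Tuple<int, double>>
{
    Tuple.Create(21, 1.2),
    Tuple.Create(31, 1.8),
    Tuple.Create(41, 1.9)
};
int age = 35;
// Highest lower threshold that does not exceed the entered value -> its weight
double weight = thresholds.Where(t => t.Item1 <= age)
                          .OrderByDescending(t => t.Item1)
                          .First()
                          .Item2;   // 1.8 for age 35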
No, as far as I know, there is nothing like a range-structure.
You could use a switch, either in this way, if the ranges always run from a value ending in 1 to a value ending in 0 (11-20, 21-30, ...):
switch((value-1) / 10)
{
case 1: ... break;
case 2: ... break;
}
or, if needed:
switch(value)
{
case 11:
case 12:
case 20: ... break;
case 21: ...
}
Depending on the number of groups you need, you could do checks like
if(value > 10 && value <= 20) ...
I don't know any more elegant approach.
If the ranges do not overlap then the best thing to use would be a SortedList where the key is the upper value of the range, and the value is the weight. Additionally you can make the weight nullable to distinguish the case of not finding an entry. I've added the entry of {20, null} so that if the age is <= 20 you'll get null instead of 1.2.
var rangedWeights = new SortedList<int, float?>
{
{ 20, null },
{ 30, 1.2f },
{ 40, 1.8f },
{ 50, 1.9f }
};
int age = 44;
float? weight = null;
foreach (var kvp in rangedWeights)
{
if (age <= kvp.Key)
{
weight = kvp.Value;
break;
}
}
You can dynamically add new entries and still be sure they are sorted.
rangedWeights.Add(60, 2.1f);
You can use a dictionary with a composite key, which you will use to check the user input and get the corresponding value for the matching key.
Here is an example:
Dictionary<Tuple<int, int>, double> t = new Dictionary<Tuple<int, int>, double>();
t.Add(new Tuple<int,int>(11,20),1f);
t.Add(new Tuple<int, int>(21, 30), 2f);
t.Add(new Tuple<int, int>(31, 40), 3f);
int age = 34;
double rr = (from d in t where d.Key.Item1 <= age && d.Key.Item2 >= age select d.Value).FirstOrDefault();
Console.WriteLine(rr); // rr will print 3.0
Hope it helps..!!
If the ranges are consecutive as they appear to be in your example, you only need the upper value and the ranges sorted in order to be able to query it, so you can do something like this:
public class RangeEntry
{
public RangeEntry(int upperValue, float weight)
{
UpperValue = upperValue;
Weight = weight;
}
public int UpperValue { get; set; }
public float Weight { get; set; }
}
public class RangeWeights
{
private List<RangeEntry> _ranges = new List<RangeEntry>
{
new RangeEntry(30, 1.2f),
new RangeEntry(40, 1.8f),
new RangeEntry(50, 1.9f),
};
public float GetWeight(int value)
{
// If you pre-sort the ranges then you won't need the below OrderBy
foreach (var r in _ranges.OrderBy(o => o.UpperValue))
{
if (value <= r.UpperValue)
return r.Weight;
}
// Range not found, do whatever you want here
throw new InvalidOperationException("value not within in any valid range");
}
}
The value of this approach is that adding a new range means adding just 1 line of code in the instantiation of the ranges.
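For example, a hypothetical usage of the class above:
var weights = new RangeWeights();
float weight = weights.GetWeight(35);   // 1.8f, since 35 falls under the 40 upper value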
If you are looking for a shorter way of writing it, I suggest you use the ?: operator.
double getWeight(int age){
return (age <= 30) ? 1.2 :
(age <= 40) ? 1.8 : 1.9;
}
It is the same as using a switch, only a shorter way of putting it. You can replace the numbers and weights with variables if you don't want them to be hard-coded.
It's not very clear what the structure of the age ranges and weights data is,
but I would probably do something like this:
class AgeRangeAndWeight {
    public int FromAge { get; set; }
    public int ToAge { get; set; }
    public double Weight { get; set; }
}
class AgeRangeAndWeightCollection : List<AgeRangeAndWeight> {
    public AgeRangeAndWeight FindByAge(int age) {
        foreach(AgeRangeAndWeight araw in this) {
            if(age >= araw.FromAge && age <= araw.ToAge) {
                return araw;
            }
        }
        return null;
    }
}
Then all you have to do is call the FindByAge method. Remember to check that it doesn't return null.
Update
Five years after I posted this answer, it was upvoted.
Today I wouldn't recommend inheriting List<T> - but simply use its Find method like this:
var list = new List<AgeRangeAndWeight>() { /* populate here */ };
var age = 35;
var ageRangeAndWeight = list.Find(a => age >= a.FromAge && age <= a.ToAge);
I know the iterative solution:
given a set of n elements
save an int v = 2^n and generate all binary numbers up to this v.
But what if n > 32?
I know that's already 2^32 subsets, but still - what's the way to bypass the 32-element limitation?
If you're happy with a 64 item limit, you can simply use long.
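For illustration, a rough sketch of the bitmask idea using an unsigned 64-bit mask (the items array is made up, and the simple for loop as written only works for fewer than 64 elements):
var items = new[] { "a", "b", "c" };
int n = items.Length;
for (ulong mask = 0; mask < (1UL << n); mask++)
{
    var subset = new List<string>();
    for (int i = 0; i < n; i++)
        if ((mask & (1UL << i)) != 0)   // bit i decides whether items[i] is in the subset
            subset.Add(items[i]);
    Console.WriteLine("{" + string.Join(",", subset) + "}");
}
// For exactly 64 elements the termination test needs a do/while with an overflow check instead.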
Array / ArrayList of ints / longs. Have a next function something like:
bool Next(uint[] arr)
{
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] == uint.MaxValue) // 11111 -> 00000 (or 2^n-1 for a partially used last word)
        {
            arr[i] = 0; // carry into the next word
        }
        else
        {
            arr[i]++;
            return true;
        }
    }
    return false; // reached the end -> there is no next
}
BitArray. Probably not a very efficient option compared to the above.
You can have a next function which sets the least significant 0 bit to 1 and clears all the bits below it, e.g.:
10010 -> 10011
10011 -> 10100
Note - this will probably take forever, simply because there are so many subsets, but that's not the question.
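A sketch of such a next step on a BitArray, treating index 0 as the least significant bit:
// Advances the BitArray to the next subset; returns false once it wraps around to all zeros.
static bool Next(System.Collections.BitArray bits)
{
    for (int i = 0; i < bits.Count; i++)
    {
        if (!bits[i]) { bits[i] = true; return true; }   // least significant 0 becomes 1
        bits[i] = false;                                  // trailing 1s are cleared (the carry)
    }
    return false;
}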
You can use biziclop's approach, propagating the carry bit in the following way: store your number as a vector of 32-bit "digits" of length K. That way you can generate 2^(K*32) subsets, and every increment operation takes at most O(K) operations.
The other thing I can think of is a recursive backtracking approach that generates all subsets in one array.
You could write an analog of this concise Haskell implementation:
powerSet = filterM (const [True, False])
Except there is no built-in filterM in C#. That's no problem, you can implement it yourself.
Here is my attempt at it:
public static IEnumerable<IEnumerable<T>> PowerSet<T>(IEnumerable<T> els)
{
return FilterM(_ => new[] {true, false}, els);
}
public static IEnumerable<IEnumerable<T>> FilterM<T>(
Func<T, IEnumerable<bool>> p,
IEnumerable<T> els)
{
var en = els.GetEnumerator();
if (!en.MoveNext())
{
yield return Enumerable.Empty<T>();
yield break;
}
T el = en.Current;
IEnumerable<T> tail = els.Skip(1);
foreach (var x in
from flg in p(el)
from ys in FilterM(p, tail)
select flg ? new[] { el }.Concat(ys) : ys)
{
yield return x;
}
}
And then you can use it like this:
foreach (IEnumerable<int> subset in PowerSet(new [] { 1, 2, 3, 4 }))
{
Console.WriteLine("'{0}'", string.Join(",", subset));
}
As you can see, neither int nor long are explicitly used anywhere in the implementation, so the real limit here is the maximum recursion depth reachable with the current stack size limit.
UPD: Rosetta Code gives a non-recursive implementation:
public static IEnumerable<IEnumerable<T>> GetPowerSet<T>(IEnumerable<T> input)
{
var seed = new List<IEnumerable<T>>() { Enumerable.Empty<T>() }
as IEnumerable<IEnumerable<T>>;
return input.Aggregate(seed, (a, b) =>
a.Concat(a.Select(x => x.Concat(new List<T> { b }))));
}
I have an IEnumerable<Point> collection. Let's say it contains 5 points (in reality it is more like 2000).
I want to order this collection so that a specific point in the collection becomes the first element, so it's basically chopping the collection at a specific point and rejoining the two parts.
So my list of 5 points:
{0,0}, {10,0}, {10,10}, {5,5}, {0,10}
Reordered with respect to element at index 3 would become:
{5,5}, {0,10}, {0,0}, {10,0}, {10,10}
What is the most computationally efficient way of resolving this problem, or is there an inbuilt method that already exists... If so I can't seem to find one!
var list = new[] { 1, 2, 3, 4, 5 };
var rotated = list.Skip(3).Concat(list.Take(3));
// rotated is now {4, 5, 1, 2, 3}
A simple array copy is O(n) in this case, which should be good enough for almost all real-world purposes. However, I will grant you that in certain cases - if this is a part deep inside a multi-level algorithm - this may be relevant. Also, do you simply need to iterate through this collection in an ordered fashion or create a copy?
Linked lists are very easy to reorganize like this, although accessing random elements will be more costly. Overall, the computational efficiency will also depend on how exactly you access this collection of items (and also, what sort of items they are - value types or reference types?).
The standard .NET linked list does not seem to support such manual manipulation but in general, if you have a linked list, you can easily move around sections of the list in the way you describe, just by assigning new "next" and "previous" pointers to the endpoints.
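With the built-in LinkedList<T> you can still rotate cheaply by detaching nodes from the front and re-attaching them at the back one at a time; a sketch (not a single splice of the whole section):
var list = new LinkedList<int>(new[] { 1, 2, 3, 4, 5 });
// Make the element at index 3 the first element.
for (int i = 0; i < 3; i++)
{
    var node = list.First;
    list.RemoveFirst();
    list.AddLast(node);   // the detached node object is reused, no new allocation
}
// list now contains 4, 5, 1, 2, 3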
The collection library available here supports this functionality: http://www.itu.dk/research/c5/.
Specifically, you are looking for the LinkedList<T>.Slide() method, which you can use on the object returned by LinkedList<T>.View().
A version that enumerates the list only once, at the cost of higher memory consumption because of the T[] buffer:
public static IEnumerable<T> Rotate<T>(IEnumerable<T> source, int count)
{
int i = 0;
T[] temp = new T[count];
foreach (var item in source)
{
if (i < count)
{
temp[i] = item;
}
else
{
yield return item;
}
i++;
}
foreach (var item in temp)
{
yield return item;
}
}
[Test]
public void TestRotate()
{
var list = new[] { 1, 2, 3, 4, 5 };
var rotated = Rotate(list, 3);
Assert.That(rotated, Is.EqualTo(new[] { 4, 5, 1, 2, 3 }));
}
Note: Add argument checks.
Another alternative to the LINQ method shown by ulrichb would be to use the Queue class (a FIFO collection): dequeue up to your index, and enqueue the items you have taken out.
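For example, a sketch of that idea, rotating so that the element at index 3 comes first:
var queue = new Queue<int>(new[] { 1, 2, 3, 4, 5 });
for (int i = 0; i < 3; i++)
    queue.Enqueue(queue.Dequeue());   // take from the front, put back at the end
// queue now yields 4, 5, 1, 2, 3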
The naive implementation using LINQ would be:
IEnumerable<int> x = new[] { 1, 2, 3, 4 };
var tail = x.TakeWhile(i => i != 3);
var head = x.SkipWhile(i => i != 3);
var combined = head.Concat(tail); // is now 3, 4, 1, 2
What happens here is that you perform twice the comparisons needed to get to your first element in the combined sequence.
The solution is readable and compact but not very efficient.
The solutions described by the other contributors may be more efficient since they use special data structures as arrays or lists.
You can write a user-defined extension of List that does the rotation by using List.Reverse(). I took the basic idea from the C++ Standard Template Library, which basically rotates using Reverse in three steps: Reverse(first, mid), Reverse(mid, last), Reverse(first, last).
As far as I know, this is the most efficient and fastest way. I tested with 1 billion elements and the rotation Rotate(0, 50000, 800000) takes 0.00097 seconds. (By the way: adding 1 billion ints to the List already takes 7.3 seconds)
Here's the extension you can use:
public static class Extensions
{
public static void Rotate(this List<int> me, int first, int mid, int last)
{
//indexes are zero based!
if (first >= mid || mid >= last)
return;
me.Reverse(first, mid - first + 1);
me.Reverse(mid + 1, last - mid);
me.Reverse(first, last - first + 1);
}
}
The usage is like:
static void Main(string[] args)
{
List<int> iList = new List<int>{0,1,2,3,4,5};
Console.WriteLine("Before rotate:");
foreach (var item in iList)
{
Console.Write(item + " ");
}
Console.WriteLine();
int firstIndex = 0, midIndex = 2, lastIndex = 4;
iList.Rotate(firstIndex, midIndex, lastIndex);
Console.WriteLine($"After rotate {firstIndex}, {midIndex}, {lastIndex}:");
foreach (var item in iList)
{
Console.Write(item + " ");
}
Console.ReadKey();
}
I have an ArrayList of MCommand objects (cmdList) and I want to sort it so that shapes with closest points are next to each other in the ArrayList. For example, say I have three lines in the ArrayList:
line(xs, ys, zs, xe, ye, ze)
cmdList[0] = line1(1.3, 2.5, 3, 4, 5, 6)
cmdList[1] = line2(1, 5, 6.77, 7, 8, 2)
cmdList[2] = line3(1, 6, 3, 1, 1.1, 1)
The points that need to be close are the LastPosition of one line and the BeginPosition of the next line.
The LastPosition of a line is (xe, ye, ze) and its BeginPosition is (xs, ys, zs).
I now do my sorting by executing the built-in sort:
cmdList.Sort(new MCommandComparer());
This is what my MCommand looks like and how I calculate the distance between two points:
public abstract class MCommand
{
//...
public abstract Point3 LastPosition { get; }
public abstract Point3 BeginPosition { get; }
public double CompareTo(Object obj)
{
Point3 p1, p2;
p1 = this.BeginPosition;
p2 = ((MCommand)obj).LastPosition;
return Math.Sqrt(Math.Pow((p2.x - p1.x), 2) +
Math.Pow((p2.y - p1.y), 2) +
Math.Pow((p2.z - p1.z), 2));
}
}
This is the comparer I use:
public class MCommandComparer : IComparer
{
private MCommand prev;
double distanceFromPrev = 0;
double distanceFromCurr = 0;
public int Compare(object o1, object o2)
{
if ((MCommand)o2 == prev)
return 0;
if (prev != null)
distanceFromPrev = ((MCommand)o1).CompareTo(prev);
distanceFromCurr = ((MCommand)o1).CompareTo(o2);
prev = (MCommand)o2;
return (int)(distanceFromCurr - distanceFromPrev);
}
}
I've tried many ways and got lost... This doesn't sort the shapes the way I want. My question is, what could I be doing wrong? Should I try writing the sort from scratch? My ArrayList can contain a couple of thousand elements, and I need an efficient sort.
What could I be doing wrong?
You're assuming the elements will be presented to you in a particular order - you're remembering the "previous" element, which is a huge red flag.
The way various sorts work won't do this at all. Basically your comparer should be stateless. It sounds like you don't really have a total ordering here - there's no way of taking any two arbitrary elements and saying which should be before or after the other one.
I don't know exactly how you'd do whatever you need, but I don't think the standard sorting built into .NET is going to help you much.
You could make your MCommand class implement IComparable. In doing this you would allow your list to sort your shapes without the need for an additional comparer object. All the sorting functionality would be handled by the list and the objects within it.
I have the following code:
foreach (Tuple<Point, Point> pair in pointsCollection)
{
var points = new List<Point>()
{
pair.Item1,
pair.Item2
};
}
Within this foreach, I would like to be able to determine which pair of points has the most significant length between the coordinates for each point within the pair.
So, let's say that points are made up of the following pairs:
(1) var points = new List<Point>()
{
new Point(0,100),
new Point(100,100)
};
(2) var points = new List<Point>()
{
new Point(150,100),
new Point(200,100)
};
So I have two sets of pairs, mentioned above. They both will plot a horizontal line. I am interested in knowing the best approach to find the pair of points that has the greatest distance between them, whether vertically or horizontally. In the two examples above, the first pair of points has a difference of 100 between the X coordinates, so that would be the pair with the most significant difference. But if I have a collection of pairs of points, where some points will plot a vertical line and some will plot a horizontal line, what would be the best approach for retrieving the pair from the set whose difference, again vertically or horizontally, is the greatest among all of the pairs in the collection?
Thanks!
Chris
Use OrderBy to create an ordering based on your criteria, then select the first one. In this case order by the maximum absolute difference between the horizontal and vertical components in descending order.
EDIT: Actually, I think you should be doing this on the Tuples themselves, right? I'll work on adapting the example to that.
First, let's add an extension for Tuple<Point,Point> to calculate its length.
public static class TupleExtensions
{
public static double Length( this Tuple<Point,Point> tuple )
{
var first = tuple.Item1;
var second = tuple.Item2;
double deltaX = first.X - second.X;
double deltaY = first.Y - second.Y;
return Math.Sqrt( deltaX * deltaX + deltaY * deltaY );
}
}
Now we can order the tuples by their length
var max = pointCollection.OrderByDescending( t => t.Length() )
.FirstOrDefault();
Note: it is faster to just iterate over the collection and keep track of the maximum rather than sorting/selecting with LINQ.
Tuple<Point,Point> max = null;
foreach (var tuple in pointCollection)
{
if (max == null || tuple.Length() > max.Length())
{
max = tuple;
}
}
Obviously, this could be refactored to an IEnumerable extension if you used it in more than one place.
You'll need a function, probably using the Pythagorean theorem, to calculate the distances:
a^2 + b^2 = c^2
Where a would be the difference in Point.X, b would be the difference in Point.Y, and c would be your distance. And once that function has been written, then you can go to LINQ and order on the results.
Here's what I did. (Note: I do not have C# 4, so it's not apples to apples.)
private double GetDistance(Point a, Point b)
{
return Math.Pow(Math.Pow(Math.Abs(a.X - b.X), 2) + Math.Pow(Math.Abs(a.Y - b.Y), 2), 0.5);
}
You can turn that into an anonymous method or Func if you prefer, obviously.
var query = pointlistCollection.OrderByDescending(pair => GetDistance(pair[0], pair[1])).First();
Where pointlistCollection is a List<List<Point>>, each inner list having two items. Quick example, but it works.
List<List<Point>> pointlistCollection
= new List<List<Point>>()
{
new List<Point>() { new Point(0,0), new Point(3,4)},
new List<Point>() { new Point(5,5), new Point (3,7)}
};
Here is my GetDistance function in Func form:
Func<Point, Point, double> getDistance
= (a, b)
=> Math.Pow(Math.Pow(Math.Abs(a.X - b.X), 2) + Math.Pow(Math.Abs(a.Y - b.Y), 2), 0.5);
var query = pointlistCollection.OrderByDescending(pair => getDistance(pair[0], pair[1])).First();
As commented above: Don't sort the list in order to get a maximum.
public static double Norm(Point x, Point y)
{
return Math.Sqrt(Math.Pow(x.X - y.X, 2) + Math.Pow(x.Y - y.Y, 2));
}
Max() needs only O(n) instead of O(n*log n)
pointsCollection.Max(t => Norm(t.Item1, t.Item2));
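If you need the pair itself rather than just the maximum length, the same O(n) idea works, e.g. with Aggregate (assuming the collection is not empty):
Tuple<Point, Point> longest = pointsCollection.Aggregate(
    (best, next) => Norm(next.Item1, next.Item2) > Norm(best.Item1, best.Item2) ? next : best);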
tvanfosson's answer is good; however, I would like to suggest a slight improvement: you don't actually need to sort the collection to find the max, you just have to enumerate the collection and keep track of the maximum value. Since it's a very common scenario, I wrote an extension method to handle it:
public static class EnumerableExtensions
{
public static T WithMax<T, TValue>(this IEnumerable<T> source, Func<T, TValue> selector)
{
var max = default(TValue);
var withMax = default(T);
bool first = true;
foreach (var item in source)
{
var value = selector(item);
int compare = Comparer<TValue>.Default.Compare(value, max);
if (compare > 0 || first)
{
max = value;
withMax = item;
}
first = false;
}
return withMax;
}
}
You can then do something like this:
Tuple<Point, Point> max = pointCollection.WithMax(t => t.Length());