Case insensitive group on multiple columns - c#

Is there any way to do a LINQ2SQL query doing something similar to this:
var result = source.GroupBy(a => new { a.Column1, a.Column2 });
or
var result = from s in source
group s by new { s.Column1, s.Column2 } into c
select new { Column1 = c.Key.Column1, Column2 = c.Key.Column2 };
but with ignoring the case of the contents of the grouped columns?

You can pass StringComparer.InvariantCultureIgnoreCase to the GroupBy extension method.
var result = source.GroupBy(a => new { a.Column1, a.Column2 },
StringComparer.InvariantCultureIgnoreCase);
Or you can use ToUpperInvariant on each field, as Hamlet Hakobyan suggested in a comment. I recommend ToUpperInvariant or ToUpper rather than ToLower or ToLowerInvariant, because uppercasing is optimized for programmatic comparison purposes.
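For an anonymous-type key, the normalization can go straight into the key selector instead. A minimal sketch, assuming Column1 and Column2 are non-null strings:
var result = source.GroupBy(a => new
{
    Column1 = a.Column1.ToUpperInvariant(),
    Column2 = a.Column2.ToUpperInvariant()
});
This sidesteps the comparer overload entirely, which also matters for LINQ to SQL, where a custom IEqualityComparer generally cannot be translated to SQL.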

I couldn't get NaveenBhat's solution to work, getting a compile error:
The type arguments for method 'System.Linq.Enumerable.GroupBy<TSource,TKey>(System.Collections.Generic.IEnumerable<TSource>, System.Func<TSource,TKey>, System.Collections.Generic.IEqualityComparer<TKey>)' cannot be inferred from the usage. Try specifying the type arguments explicitly.
To make it work, I found it easiest and clearest to define a new class to store my key columns (GroupKey), then a separate class that implements IEqualityComparer<GroupKey> (KeyComparer). I can then call:
var result = source.GroupBy(r => new GroupKey(r), new KeyComparer());
The KeyComparer class does compare the strings with the InvariantCultureIgnoreCase comparer, so kudos to NaveenBhat for pointing me in the right direction.
Simplified versions of my classes:
private class GroupKey
{
    public string Column1 { get; set; }
    public string Column2 { get; set; }

    public GroupKey(SourceObject r)
    {
        this.Column1 = r.Column1;
        this.Column2 = r.Column2;
    }
}

private class KeyComparer : IEqualityComparer<GroupKey>
{
    bool IEqualityComparer<GroupKey>.Equals(GroupKey x, GroupKey y)
    {
        if (!x.Column1.Equals(y.Column1, StringComparison.InvariantCultureIgnoreCase)) return false;
        if (!x.Column2.Equals(y.Column2, StringComparison.InvariantCultureIgnoreCase)) return false;
        return true;
        //my actual code is more complex than this, with more columns to compare
        //and handling for null strings, but you get the idea.
    }

    int IEqualityComparer<GroupKey>.GetHashCode(GroupKey obj)
    {
        return 0; // forces every comparison through Equals
        //Note, it would be more efficient to do something like
        //string hcode = obj.Column1.ToLower() + obj.Column2.ToLower();
        //return hcode.GetHashCode();
        //but my object is more complex than this simplified example
    }
}
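If hashing performance ever matters, the constant hash above can be replaced with one that agrees with the case-insensitive Equals. A sketch, assuming the same two string columns:
int IEqualityComparer<GroupKey>.GetHashCode(GroupKey obj)
{
    // Case-insensitive, null-safe hash consistent with the Equals above.
    var cmp = StringComparer.InvariantCultureIgnoreCase;
    int h1 = obj.Column1 != null ? cmp.GetHashCode(obj.Column1) : 0;
    int h2 = obj.Column2 != null ? cmp.GetHashCode(obj.Column2) : 0;
    return unchecked((h1 * 397) ^ h2);
}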

I had the same issue grouping by the values of DataRow objects from a Table, but I just used .ToString() on the DataRow object to get past the compiler issue, e.g.
MyTable.AsEnumerable().GroupBy(
dataRow => dataRow["Value"].ToString(),
StringComparer.InvariantCultureIgnoreCase)
instead of
MyTable.AsEnumerable().GroupBy(
dataRow => dataRow["Value"],
StringComparer.InvariantCultureIgnoreCase)

I've expanded on Bill B's answer to make things a little more dynamic and to avoid hard-coding the column properties in the GroupKey and IEqualityComparer<>.
private class GroupKey
{
    public List<string> Columns { get; } = new List<string>();

    public GroupKey(params string[] columns)
    {
        foreach (var column in columns)
        {
            // ToUpperInvariant() is used so that, if the caller runs Distinct()
            // after the grouping, strings differing only by case are collapsed
            // rather than kept as duplicates.
            Columns.Add(column.ToUpperInvariant());
        }
    }
}
private class KeyComparer : IEqualityComparer<GroupKey>
{
    bool IEqualityComparer<GroupKey>.Equals(GroupKey x, GroupKey y)
    {
        for (var i = 0; i < x.Columns.Count; i++)
        {
            if (!x.Columns[i].Equals(y.Columns[i], StringComparison.OrdinalIgnoreCase)) return false;
        }
        return true;
    }

    int IEqualityComparer<GroupKey>.GetHashCode(GroupKey obj)
    {
        var hashcode = obj.Columns[0].GetHashCode();
        for (var i = 1; i < obj.Columns.Count; i++)
        {
            var column = obj.Columns[i];
            // Multiplying by 397 is the pattern ReSharper generates to spread
            // hash values, so I added it here as well. (Keep in mind that
            // multiplying every hash code by the same prime is more prone to
            // collisions than seeding with a distinct prime.)
            hashcode = (hashcode * 397) ^ (column != null ? column.GetHashCode() : 0);
        }
        return hashcode;
    }
}
Usage:
var result = source.GroupBy(r => new GroupKey(r.Column1, r.Column2, r.Column3), new KeyComparer());
This way, you can pass any number of columns into the GroupKey constructor.
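Since the keys are upper-cased at construction, projecting them back out and calling Distinct() collapses case-variant combinations, as the constructor comment anticipates. A hypothetical follow-up:
// Hypothetical: list each distinct (case-insensitive) column combination once.
var distinctKeys = result
    .Select(g => string.Join(" | ", g.Key.Columns))
    .Distinct();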

Related

How to compare two csv files by 2 columns?

I have 2 csv files
1.csv
spain;russia;japan
italy;russia;france
2.csv
spain;russia;japan
india;iran;pakistan
I read both files and add data to lists
var lst1 = File.ReadAllLines("1.csv").ToList();
var lst2 = File.ReadAllLines("2.csv").ToList();
Then I find all unique strings from both lists and add it to result lists
var rezList = lst1.Except(lst2).Union(lst2.Except(lst1)).ToList();
rezlist contains this data
[0] = "italy;russia;france"
[1] = "india;iran;pakistan"
Now I want to compare, and take the except and union, by the second and third columns in all rows.
1.csv
spain;russia;japan
italy;russia;france
2.csv
spain;russia;japan
india;iran;pakistan
I think I need to split all rows on the ';' symbol and apply all 3 operations (except, distinct and union), but I cannot understand how.
rezList must contain:
india;iran;pakistan
I added class
class StringLengthEqualityComparer : IEqualityComparer<string>
{
    public bool Equals(string x, string y)
    {
        ...
    }

    public int GetHashCode(string obj)
    {
        ...
    }
}
StringLengthEqualityComparer stringLengthComparer = new StringLengthEqualityComparer();
var rezList = lst1.Except(lst2, stringLengthComparer)
                  .Union(lst2.Except(lst1, stringLengthComparer), stringLengthComparer)
                  .ToList();
Your question is not very clear: for instance, is india;iran;pakistan the desired result primarily because russia is at element [1]? Isn't it also included because element [2], pakistan, matches neither france nor japan? Even though that's unclear, I assume the desired result comes from either situation.
Then there is this: find all unique strings from both lists, which changes the nature dramatically. So, I take it that the desired result is included because "iran" appears in column [1] nowhere else in column [1] in either file, and even if it did, that row would still be unique due to "pakistan" in col [2].
Also note that a data sample of 2 rows leaves room for a fair amount of error.
Trying to do it in one step makes it very confusing. Since eliminating dupes found in 1.CSV is pretty easy, do that first:
// parse "1.CSV"
List<string[]> lst1 = File.ReadAllLines(@"C:\Temp\1.csv").
                        Select(line => line.Split(';')).
                        ToList();

// parse "2.CSV"
List<string[]> lst2 = File.ReadAllLines(@"C:\Temp\2.csv").
                        Select(line => line.Split(';')).
                        ToList();
// extracting once speeds things up in the next step
// and leaves open the possibility of iterating in a method
List<List<string>> tgts = new List<List<string>>();
tgts.Add(lst1.Select(z => z[1]).Distinct().ToList());
tgts.Add(lst1.Select(z => z[2]).Distinct().ToList());

var tmpLst = lst2.Where(x => !tgts[0].Contains(x[1]) ||
                             !tgts[1].Contains(x[2])).
                  ToList();
That results in the items which are not in 1.CSV (no matching text in Col[1] or in Col[2]). If that is really all you need, you are done.
Getting unique rows within 2.CSV is trickier because you have to actually count the number of times each Col[1] item occurs to see if it is unique; then repeat for Col[2]. This uses GroupBy:
var unique = tmpLst.
GroupBy(g => g[1], (key, values) =>
new GroupItem(key,
values.ToArray()[0],
values.Count())
).Where(q => q.Count == 1).
GroupBy(g => g.Data[2], (key, values) => new
{
Item = string.Join(";", values.ToArray()[0]),
Count = values.Count()
}
).Where(q => q.Count == 1).Select(s => s.Item).
ToList();
The GroupItem class is trivial:
class GroupItem
{
public string Item { set; get; } // debug aide
public string[] Data { set; get; }
public int Count { set; get; }
public GroupItem(string n, string[] d, int c)
{
Item = n;
Data = d;
Count = c;
}
public override string ToString()
{
return string.Join(";", Data);
}
}
It starts with tmpLst and gets the rows with a unique element at [1]. It uses a class for storage since at this point we need the array data for further review.
The second GroupBy acts on those results, this time looking at col[2]. Finally, it selects the joined string data.
Results
Using 50,000 random items in File1 (1.3 MB) and 15,000 in File2 (390 kb): there were no naturally occurring unique items, so I manually made 8 unique rows in 2.CSV and copied 2 of them into 1.CSV. The copies in 1.CSV should eliminate 2 of the 8 unique rows in 2.CSV, making the expected result 6 unique rows:
NepalX and ItalyX were the repeats in both files and they correctly eliminated each other.
With each step it is scanning and working with less and less data, which seems to make it pretty fast for 65,000 rows / 130,000 data elements.
Your GetHashCode() method in the EqualityComparer is buggy. Fixed version:
public int GetHashCode(string obj)
{
    return obj.Split(';')[1].GetHashCode();
}
Now the result is correct:
// one result: "india;iran;pakistan"
By the way, "StringLengthEqualityComparer" is not a good name for it ;-)
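For completeness: the question elides the Equals body, so here is a hypothetical one consistent with the fixed hash (equal rows hash equally, because only column [1] feeds the hash):
// Hypothetical Equals: two rows are equal when their second and third
// ';'-separated columns match.
public bool Equals(string x, string y)
{
    var xs = x.Split(';');
    var ys = y.Split(';');
    return xs[1] == ys[1] && xs[2] == ys[2];
}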
private void GetUnion(List<string> lst1, List<string> lst2)
{
    List<string> lstUnion = new List<string>();
    foreach (string value in lst1)
    {
        // split once instead of once per column
        string[] columns = value.Split(';');
        string valueColumn2 = columns[1];
        string valueColumn3 = columns[2];

        string result = lst2.FirstOrDefault(s => s.Contains(";" + valueColumn2 + ";" + valueColumn3));
        if (result != null && !lstUnion.Contains(result))
        {
            lstUnion.Add(result);
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        var lst1 = File.ReadLines(@"D:\test\1.csv").Select(x => new StringWrapper(x)).ToList();
        var lst2 = File.ReadLines(@"D:\test\2.csv").Select(x => new StringWrapper(x));

        var set = new HashSet<StringWrapper>(lst1);
        set.SymmetricExceptWith(lst2);
        foreach (var x in set)
        {
            Console.WriteLine(x.Value);
        }
    }
}
struct StringWrapper : IEquatable<StringWrapper>
{
public string Value { get; }
private readonly string _comparand0;
private readonly string _comparand14;
public StringWrapper(string value)
{
Value = value;
var split = value.Split(';');
_comparand0 = split[0];
_comparand14 = split[14];
}
public bool Equals(StringWrapper other)
{
return string.Equals(_comparand0, other._comparand0, StringComparison.OrdinalIgnoreCase)
&& string.Equals(_comparand14, other._comparand14, StringComparison.OrdinalIgnoreCase);
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj)) return false;
return obj is StringWrapper && Equals((StringWrapper) obj);
}
public override int GetHashCode()
{
unchecked
{
return ((_comparand0 != null ? StringComparer.OrdinalIgnoreCase.GetHashCode(_comparand0) : 0)*397)
^ (_comparand14 != null ? StringComparer.OrdinalIgnoreCase.GetHashCode(_comparand14) : 0);
}
}
}

How to dynamically GroupBy using Linq

There are several similar sounding posts, but none that do exactly what I want.
Okay, so imagine that I have the following data structure (simplified for this LinqPad example)
public class Row
{
public List<string> Columns { get; set; }
}
public List<Row> Data
=> new List<Row>
{
new Row { Columns = new List<string>{ "A","C","Field3"}},
new Row { Columns = new List<string>{ "A","D","Field3"}},
new Row { Columns = new List<string>{ "A","C","Field3"}},
new Row { Columns = new List<string>{ "B","D","Field3"}},
new Row { Columns = new List<string>{ "B","C","Field3"}},
new Row { Columns = new List<string>{ "B","D","Field3"}},
};
For the property "Data", the user will tell me which column ordinals to GroupBy; they may say "don't group by anything", or they may say "group by Column[1]" or "group by Column[0] and Column[1]".
If I want to group by a single column, I can use:
var groups = Data.GroupBy(d => d.Columns[i]);
And if I want to group by 2 columns, I can use:
var groups = Data.GroupBy(d => new { A = d.Columns[i1], B = d.Columns[i2] });
However, the number of columns is variable (zero -> many); Data could contain hundreds of columns and the user may want to GroupBy dozens of columns.
So the question is, how can I create this GroupBy at runtime (dynamically)?
Thanks
Griff
With that Row data structure, what you are asking for is relatively easy.
Start by implementing a custom IEqualityComparer<IEnumerable<string>>:
public class ColumnEqualityComparer : EqualityComparer<IEnumerable<string>>
{
public static readonly ColumnEqualityComparer Instance = new ColumnEqualityComparer();
private ColumnEqualityComparer() { }
public override int GetHashCode(IEnumerable<string> obj)
{
if (obj == null) return 0;
// You can implement a better hash function
int hashCode = 0;
foreach (var item in obj)
hashCode ^= item != null ? item.GetHashCode() : 0;
return hashCode;
}
public override bool Equals(IEnumerable<string> x, IEnumerable<string> y)
{
if (x == y) return true;
if (x == null || y == null) return false;
return x.SequenceEqual(y);
}
}
Now you can have a method like this:
public IEnumerable<IGrouping<IEnumerable<string>, Row>> GroupData(IEnumerable<int> columnIndexes = null)
{
if (columnIndexes == null) columnIndexes = Enumerable.Empty<int>();
return Data.GroupBy(r => columnIndexes.Select(c => r.Columns[c]), ColumnEqualityComparer.Instance);
}
Note the grouping Key type is IEnumerable<string> and contains the selected row values specified by the columnIndexes parameter, that's why we needed a custom equality comparer (otherwise they will be compared by reference, which doesn't produce the required behavior).
For instance, to group by columns 0 and 2 you could use something like this:
var result = GroupData(new [] { 0, 2 });
Passing null or empty columnIndexes will effectively produce a single group, i.e. no grouping.
You can use a recursive function to create a dynamic lambda expression, but you have to hard-code the columns inside the function.
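Short of building expression trees, one alternative sketch is to compose a plain delegate that concatenates the chosen ordinals into a composite string key (this assumes the column values never contain the separator character):
// Build a key selector from user-chosen column ordinals at runtime.
Func<Row, string> BuildKeySelector(int[] ordinals)
{
    return row => string.Join("\u0001", ordinals.Select(i => row.Columns[i]));
}

var groups = Data.GroupBy(BuildKeySelector(new[] { 0, 2 }));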

Compare a set of three strings with another

I am making a list of unique "sets of 3 strings" from some data, in such a way that if the 3 strings come together they become a set, and I can only have unique sets in my list.
A,B,C
B,C,D
D,E,F and so on
And I keep adding sets to the list if they do not already exist in it, so that if I encounter these three strings together {A,B,C} I won't put them in the list again. So I have 2 questions, and the answer to the second one actually depends on the answer to the first.
How should I store this set of 3 strings: a List, an array, a concatenation of them, or anything else? (I may add it to a Dictionary to record their counts as well, but that's for later.)
How do I compare one set of 3 strings with another, irrespective of their order (this obviously depends on the structure used)? I want a proper solution rather than doing everything naively!
I am using C# by the way.
Either an array or a list is your best bet for storing the data since, as wentimo mentioned in a comment, concatenating them means you lose data that you may need. To steal his example, "ab", "cd", "ef" concatenated together is the same as "abcd", "e" and "f" concatenated, but they shouldn't be treated as equivalent sets.
To compare them, I would order the list alphabetically, then compare each value in order. That takes care of the fact that the order of the values doesn't matter.
A pseudocode example might look like this:
bool Compare(List<string> a, List<string> b)
{
    a.Sort();
    b.Sort();
    if (a.Count == b.Count)
    {
        for (int i = 0; i < a.Count; i++)
        {
            if (a[i] != b[i])
            {
                return false;
            }
        }
        return true;
    }
    else
    {
        return false;
    }
}
Update
Now that you stated in a comment that performance is an important consideration, since you may have millions of these sets to compare, and that you won't have duplicate elements in a set, here is a more optimized version of my code. Note that I no longer have to sort the two lists, which saves quite a bit of time in executing this function.
bool Compare(List<string> a, List<string> b)
{
    if (a.Count == b.Count)
    {
        for (int i = 0; i < a.Count; i++)
        {
            if (!b.Contains(a[i]))
            {
                return false;
            }
        }
        return true;
    }
    else
    {
        return false;
    }
}
DrewJordan's approach of using a hashtable is still probably faster than my approach, since it just has to sort each set of three and can then do the comparison to your existing sets much faster than my approach can.
Probably the best way is to use a HashSet, if you don't need to have duplicate elements in your sets. It sounds like each set of 3 has 3 unique elements; if that is actually the case, I would combine a HashSet approach with the concatenation that you already worked out, i.e. order the elements, combine with some separator, and then add the concatenated elements to a HashSet which will prevent duplicates from ever occuring in the first place.
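A minimal sketch of that sort-then-concatenate idea, assuming each set holds three unique elements that never contain the separator character:
// Build an order-insensitive key for a set of three strings.
static string MakeKey(string a, string b, string c)
{
    var parts = new[] { a, b, c };
    Array.Sort(parts, StringComparer.Ordinal);
    return string.Join("\u0001", parts);
}

var seen = new HashSet<string>();
seen.Add(MakeKey("A", "B", "C"));             // first time: added
bool dup = !seen.Add(MakeKey("C", "B", "A")); // true: same set, different order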
If your sets of three could have duplicate elements, then Kevin's approach is what you're going to have to do for each. You might get some better performance from using a list of HashSets for each set of three, but with only three elements the overhead of creating a hash for each element of potentially millions of sets seems like it would perform worse than just iterating over them once.
Here is a simple string wrapper for you:
/// The wrapper for three strings
public class StringTriplet
{
private List<string> Store;
// accessors to three source strings:
public string A { get; private set; }
public string B { get; private set; }
public string C { get; private set; }
// constructor (needs to fill the internal storage)
public StringTriplet(string a, string b, string c)
{
this.Store = new List<string>();
this.Store.Add(a);
this.Store.Add(b);
this.Store.Add(c);
// the sort is required because later we don't want to compare all the strings against each other
this.Store.Sort();
this.A = a;
this.B = b;
this.C = c;
}
// additional method. You could add an IComparable declaration to the entire class, but it is not necessary for your task...
public int CompareTo(StringTriplet obj)
{
if (null == obj)
return -1;
int cmp;
cmp = this.Store.Count.CompareTo(obj.Store.Count);
if (0 != cmp)
return cmp;
for (int i = 0; i < this.Store.Count; i++)
{
if (null == this.Store[i])
return 1;
cmp = this.Store[i].CompareTo(obj.Store[i]);
if ( 0 != cmp )
return cmp;
}
return 0;
}
// additional method. It is good practice to override both 'Equals' and 'GetHashCode'. See below..
override public bool Equals(object obj)
{
if (! (obj is StringTriplet))
return false;
var t = obj as StringTriplet;
return ( 0 == this.CompareTo(t));
}
// necessary method. It will be used implicitly when adding values to the HashSet
public override int GetHashCode()
{
int res = 0;
for (int i = 0; i < this.Store.Count; i++)
res = res ^ (null == this.Store[i] ? 0 : this.Store[i].GetHashCode()) ^ i;
return res;
}
}
Now you could just create hashset and add values:
var t = new HashSet<StringTriplet> ();
t.Add (new StringTriplet ("a", "b", "c"));
t.Add (new StringTriplet ("a", "b1", "c"));
t.Add (new StringTriplet ("a", "b", "c")); // dup
t.Add (new StringTriplet ("a", "c", "b")); // dup
t.Add (new StringTriplet ("1", "2", "3"));
t.Add (new StringTriplet ("1", "2", "4"));
t.Add (new StringTriplet ("3", "2", "1"));
foreach (var s in t) {
Console.WriteLine (s.A + " " + s.B + " " + s.C);
}
return 0;
You can inherit from List<String> and override the Equals() and GetHashCode() methods:
public class StringList : List<String>
{
    public override bool Equals(object obj)
    {
        StringList other = obj as StringList;
        if (other == null) return false;
        // Compare counts too; otherwise a subset would be reported as equal.
        return this.Count == other.Count && this.All(x => other.Contains(x));
    }

    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 19;
            foreach (String s in this)
            {
                hash = hash + s.GetHashCode() * 31; // order-independent sum
            }
            return hash;
        }
    }
}
Now you can use a HashSet<StringList> to store only unique sets.
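A brief usage sketch with hypothetical values:
var sets = new HashSet<StringList>();
sets.Add(new StringList { "A", "B", "C" });
bool isDuplicate = !sets.Add(new StringList { "C", "A", "B" }); // true: same set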

LINQ (or something else) to compare a pair of values from two lists (in any order)?

Basically, I have two IEnumerable<FooClass>s where each FooClass instance contains 2 properties: FirstName, LastName.
The instances in the two enumerables are NOT the same. Instead, I need to check the properties on each of the instances. I'm not sure of the most efficient way to do this, but basically I need to make sure that both lists contain similar data (not the same instances, but the same values in the properties). I don't have access to FooClass itself to modify it.
I should say that FooClass is a type of Attribute class, which has access to the Attribute.Match() method, so I don't need to check each property individually.
Based on the comments, I've updated the question to be more specific and changed it slightly... This is what I have so far:
public void Foo()
{
    var info = typeof(MyClass);
    var attributes = info.GetCustomAttributes(typeof(FooAttribute), false) as IEnumerable<FooAttribute>;
    var validateAttributeList = new Collection<FooAttribute>
    {
        new FooAttribute(typeof(int), typeof(double)),
        new FooAttribute(typeof(int), typeof(float))
    };

    //Make sure that each item in validateAttributeList is contained in
    //the attributes list (additional items in the attributes list don't matter).
    //I know I can use Attribute.Match(obj) to compare.
}
Enumerable.SequenceEqual will tell you if the two sequences are identical.
If FooClass has an overridden Equals method that compares the FirstName and LastName, then you should be able to write:
bool equal = List1.SequenceEqual(List2);
If FooClass doesn't have an overridden Equals method, then you need to create an IEqualityComparer<FooClass>:
class FooComparer : IEqualityComparer<FooClass>
{
    public bool Equals(FooClass f1, FooClass f2)
    {
        return (f1.FirstName == f2.FirstName) && (f1.LastName == f2.LastName);
    }

    public int GetHashCode(FooClass f)
    {
        return f.FirstName.GetHashCode() ^ f.LastName.GetHashCode();
    }
}
and then you write:
var comparer = new FooComparer();
bool identical = List1.SequenceEqual(List2, comparer);
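Note that SequenceEqual is order-sensitive. Since the question allows the lists to be in any order, one workaround (a sketch, assuming non-null names) is to sort both sides by the same key before comparing:
Func<FooClass, string> key = f => f.FirstName + "\u0001" + f.LastName;
bool identical = List1.OrderBy(key).SequenceEqual(List2.OrderBy(key), comparer);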
You can do it this way:
Define a custom IEqualityComparer<FooAttribute>:
class FooAttributeComparer : IEqualityComparer<FooAttribute>
{
public bool Equals(FooAttribute x, FooAttribute y)
{
return x.Match(y);
}
public int GetHashCode(FooAttribute obj)
{
return 0;
// This makes lookups complexity O(n) but it could be reasonable for small lists
// or if you're not sure about GetHashCode() implementation to do.
// If you want more speed you could return e.g. :
// return obj.Field1.GetHashCode() ^ (17 * obj.Field2.GetHashCode());
}
}
Define an extension method that compares two lists in any order and checks that they contain the same number of equal elements:
public static bool ListContentIsEqualInAnyOrder<T>(
this IEnumerable<T> list1, IEnumerable<T> list2, IEqualityComparer<T> comparer)
{
var lookup1 = list1.ToLookup(x => x, comparer);
var lookup2 = list2.ToLookup(x => x, comparer);
if (lookup1.Count != lookup2.Count)
return false;
return lookup1.All(el1 => lookup2.Contains(el1.Key) &&
lookup2[el1.Key].Count() == el1.Count());
}
Usage example:
static void Main(string[] args)
{
List<FooAttribute> attrs = new List<FooAttribute>
{
new FooAttribute(typeof(int), typeof(double)),
new FooAttribute(typeof(int), typeof(double)),
new FooAttribute(typeof(bool), typeof(float)),
new FooAttribute(typeof(uint), typeof(string)),
};
List<FooAttribute> attrs2 = new List<FooAttribute>
{
new FooAttribute(typeof(uint), typeof(string)),
new FooAttribute(typeof(int), typeof(double)),
new FooAttribute(typeof(int), typeof(double)),
new FooAttribute(typeof(bool), typeof(float)),
};
// this returns true
var listEqual1 = attrs.ListContentIsEqualInAnyOrder(attrs2, new FooAttributeComparer());
// this returns false
attrs2.RemoveAt(1);
var listEqual2 = attrs.ListContentIsEqualInAnyOrder(attrs2, new FooAttributeComparer());
}
Assuming that:
- the lists both fit in memory and are unsorted,
- case doesn't matter,
- names don't contain the character "!",
- names do not contain duplicates,
then:
var setA = new HashSet<String>(
firstEnumerable.Select(i => i.FirstName.ToUpper() + "!" + i.LastName.ToUpper()));
var setB = new HashSet<String>(
secondEnumerable.Select(i => i.FirstName.ToUpper() + "!" + i.LastName.ToUpper()));
return setA.SetEquals(setB);

Decorate-Sort-Undecorate, how to sort an alphabetic field in descending order

I've got a large set of data for which computing the sort key is fairly expensive. What I'd like to do is use the DSU pattern where I take the rows and compute a sort key. An example:
Qty Name Supplier
Row 1: 50 Widgets IBM
Row 2: 48 Thingies Dell
Row 3: 99 Googaws IBM
To sort by Quantity and Supplier I could have the sort keys: 0050 IBM, 0048 Dell, 0099 IBM. The numbers are right-aligned and the text is left-aligned, everything is padded as needed.
If I need to sort by the Quantity in descending order I can just subtract the value from a constant (say, 10000) to build the sort keys: 9950 IBM, 9952 Dell, 9901 IBM.
How do I quickly/cheaply build a descending key for the alphabetic fields in C#?
[My data is all 8-bit ASCII w/ISO 8859 extension characters.]
Note: In Perl, this could be done by bit-complementing the strings:
$subkey = $string ^ ( "\xFF" x length $string );
Porting this solution straight into C# doesn't work:
subkey = encoding.GetString(encoding.GetBytes(stringval).
Select(x => (byte)(x ^ 0xff)).ToArray());
I suspect because of the differences in the way that strings are handled in C#/Perl. Maybe Perl is sorting in ASCII order and C# is trying to be smart?
Here's a sample piece of code that tries to accomplish this:
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
List<List<string>> sample = new List<List<string>>() {
new List<string>() { "", "apple", "table" },
new List<string>() { "", "apple", "chair" },
new List<string>() { "", "apple", "davenport" },
new List<string>() { "", "orange", "sofa" },
new List<string>() { "", "peach", "bed" },
};
foreach(List<string> line in sample)
{
StringBuilder sb = new StringBuilder();
string key1 = line[1].PadRight(10, ' ');
string key2 = line[2].PadRight(10, ' ');
// Comment the next line to sort desc, desc
key2 = encoding.GetString(encoding.GetBytes(key2).
Select(x => (byte)(x ^ 0xff)).ToArray());
sb.Append(key2);
sb.Append(key1);
line[0] = sb.ToString();
}
List<List<string>> output = sample.OrderBy(p => p[0]).ToList();
return;
You can get to where you want, although I'll admit I don't know whether there's a better overall way.
The problem you have with the straight translation of the Perl method is that .NET simply will not allow you to be so laissez-faire with encoding. However, if, as you say, your data is all printable ASCII (i.e. consists of characters with Unicode codepoints in the range 32..127) - note that there is no such thing as '8-bit ASCII' - then you can do this:
key2 = encoding.GetString(encoding.GetBytes(key2).
Select(x => (byte)(32+95-(x-32))).ToArray());
In this expression I have been explicit about what I'm doing:
Take x (which I assume to be in 32..127)
Map the range to 0..95 to make it zero-based
Reverse by subtracting from 95
Add 32 to map back to the printable range
It's not very nice but it does work.
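A quick sanity check of the mapping, under the answer's 32..127 assumption:
// using System.Diagnostics;
// 'A' (65) maps to 32 + 95 - (65 - 32) = 94 ('^'), and ' ' (32) maps to 127,
// so a larger code point always maps to a smaller one, reversing the order.
for (int x = 33; x <= 127; x++)
{
    int prev = 32 + 95 - ((x - 1) - 32);
    int curr = 32 + 95 - (x - 32);
    Debug.Assert(curr < prev); // strictly decreasing
}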
Just write an IComparer that works as a chain of comparers.
In case of equality at each stage, it should pass evaluation on to the next key part. If the result is less than or greater than zero, just return it.
You need something like this:
int comparison = 0;
for (int i = 0; i < n; i++)
{
    comparison = a[i].CompareTo(b[i]) * comparisonSign[i];
    if (comparison != 0)
        return comparison;
}
return comparison;
Or even simpler, you can go with:
list.OrderBy(i=>i.ID).ThenBy(i=>i.Name).ThenByDescending(i=>i.Supplier);
The first call returns an IOrderedEnumerable<>, which can then sort by additional fields.
Answering my own question (but not satisfactorily). To construct a descending alphabetic key I used this code and then appended this subkey to the search key for the object:
if ( reverse )
subkey = encoding.GetString(encoding.GetBytes(subkey)
.Select(x => (byte)(0x80 - x)).ToArray());
rowobj.sortKey.Append(subkey);
Once I had the keys built, I couldn't just do this:
rowobjList.Sort();
Because the default comparer doesn't sort in ASCII order (which my 0x80 - x trick relies on). So then I had to write an IComparable<RowObject> implementation that uses ordinal comparison:
public int CompareTo(RowObject other)
{
return String.Compare(this.sortKey, other.sortKey,
StringComparison.Ordinal);
}
This seems to work. I'm a little dissatisfied because it feels clunky in C# with the encoding/decoding of the string.
If a key computation is expensive, why compute a key at all? String comparison is itself not free: it is an expensive loop through the characters, and it is not going to perform any better than a custom comparison loop.
In this test, the custom comparison sort performs about 3 times better than DSU.
Note that the DSU key computation is not measured in this test; it is precomputed.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace DSUPatternTest
{
[TestClass]
public class DSUPatternPerformanceTest
{
public class Row
{
public int Qty;
public string Name;
public string Supplier;
public string PrecomputedKey;
public void ComputeKey()
{
// Do not need StringBuilder here, String.Concat does better job internally.
PrecomputedKey =
Qty.ToString().PadLeft(4, '0') + " "
+ Name.PadRight(12, ' ') + " "
+ Supplier.PadRight(12, ' ');
}
public bool Equals(Row other)
{
if (ReferenceEquals(null, other)) return false;
if (ReferenceEquals(this, other)) return true;
return other.Qty == Qty && Equals(other.Name, Name) && Equals(other.Supplier, Supplier);
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj)) return false;
if (ReferenceEquals(this, obj)) return true;
if (obj.GetType() != typeof (Row)) return false;
return Equals((Row) obj);
}
public override int GetHashCode()
{
unchecked
{
int result = Qty;
result = (result*397) ^ (Name != null ? Name.GetHashCode() : 0);
result = (result*397) ^ (Supplier != null ? Supplier.GetHashCode() : 0);
return result;
}
}
}
public class RowComparer : IComparer<Row>
{
    public int Compare(Row x, Row y)
    {
        int comparison = x.Qty.CompareTo(y.Qty);
        if (comparison != 0) return comparison;
        comparison = x.Name.CompareTo(y.Name);
        if (comparison != 0) return comparison;
        return x.Supplier.CompareTo(y.Supplier);
    }
}
[TestMethod]
public void CustomLoopIsFaster()
{
var random = new Random();
var rows = Enumerable.Range(0, 5000).Select(i =>
new Row
{
Qty = (int) (random.NextDouble()*9999),
Name = random.Next().ToString(),
Supplier = random.Next().ToString()
}).ToList();
foreach (var row in rows)
{
row.ComputeKey();
}
var dsuSw = Stopwatch.StartNew();
var sortedByDSU = rows.OrderBy(i => i.PrecomputedKey).ToList();
var dsuTime = dsuSw.ElapsedMilliseconds;
var customSw = Stopwatch.StartNew();
var sortedByCustom = rows.OrderBy(i => i, new RowComparer()).ToList();
var customTime = customSw.ElapsedMilliseconds;
Trace.WriteLine(dsuTime);
Trace.WriteLine(customTime);
CollectionAssert.AreEqual(sortedByDSU, sortedByCustom);
Assert.IsTrue(dsuTime > customTime * 2.5);
}
}
}
If you need to build a sorter dynamically you can use something like this:
var comparerChain = new ComparerChain<Row>()
.By(r => r.Qty, false)
.By(r => r.Name, false)
.By(r => r.Supplier, false);
var sortedByCustom = rows.OrderBy(i => i, comparerChain).ToList();
Here is a sample implementation of comparer chain builder:
public class ComparerChain<T> : IComparer<T>
{
private List<PropComparer<T>> Comparers = new List<PropComparer<T>>();
public int Compare(T x, T y)
{
foreach (var comparer in Comparers)
{
var result = comparer._f(x, y);
if (result != 0)
return result;
}
return 0;
}
public ComparerChain<T> By<Tp>(Func<T,Tp> property, bool descending) where Tp:IComparable<Tp>
{
Comparers.Add(PropComparer<T>.By(property, descending));
return this;
}
}
public class PropComparer<T>
{
public Func<T, T, int> _f;
public static PropComparer<T> By<Tp>(Func<T,Tp> property, bool descending) where Tp:IComparable<Tp>
{
Func<T, T, int> ascendingCompare = (a, b) => property(a).CompareTo(property(b));
Func<T, T, int> descendingCompare = (a, b) => property(b).CompareTo(property(a));
return new PropComparer<T>(descending ? descendingCompare : ascendingCompare);
}
public PropComparer(Func<T, T, int> f)
{
_f = f;
}
}
It works a little more slowly, maybe because of the property-binding delegate calls.
