I have an Entity Framework 'TrackableCollection' and I want to sort it by an attribute. I have tried treating it like an IEnumerable and calling
TrackableCollection<Something>.OrderBy(q=>q.SomeValue);
but it throws an exception: "Cannot implicitly convert type IOrderedEnumerable to TrackableCollection".
Anyone know how to sort a TrackableCollection?
The code example Shiv Kumar refers to does not work - it doesn't compile, and even after you fix it up (for example by adding generics in several places), it is still buggy: it calls collection.Move, which causes an "Index must be within the bounds of the List" exception in certain cases.
The code below works correctly. The authors of STE (Self-Tracking Entities) should have implemented this themselves... This is the correct code:
public static class Extensions
{
    public static void Sort<T>(this TrackableCollection<T> collection, Comparison<T> comparison)
    {
        var comparer = new Comparer<T>(comparison);
        List<T> sorted = collection.OrderBy(x => x, comparer).ToList();

        collection.Clear();
        for (int i = 0; i < sorted.Count; i++)
            collection.Add(sorted[i]);
    }
}
class Comparer<T> : IComparer<T>
{
    private Comparison<T> comparison;

    public Comparer(Comparison<T> comparison)
    {
        this.comparison = comparison;
    }

    public int Compare(T x, T y)
    {
        return comparison.Invoke(x, y);
    }
}
You use this code as in the previous example:
YourTrackableCollectionName.Sort((x, y) => x.YourFieldName.CompareTo(y.YourFieldName));
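If you need a descending sort, the same extension works by flipping the two sides of the comparison (same illustrative field name as above):
YourTrackableCollectionName.Sort((x, y) => y.YourFieldName.CompareTo(x.YourFieldName));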
This is a simplified version of Ofer Zelig's code that uses a keySelector func (like LINQ's OrderBy) instead of an explicit Comparison delegate. Since it only uses Clear() and Add(), it can be called on any Collection<T>-derived object (such as an ObservableCollection or a TrackableCollection).
public static void Sort<TSource, TKey>(this Collection<TSource> source, Func<TSource, TKey> keySelector)
{
    var sorted = source.OrderBy(keySelector).ToList();

    source.Clear();
    foreach (var item in sorted)
        source.Add(item);
}
It's used like this:
list.Sort(person => person.Name);
If you're using this on a TrackableCollection (STE) you might want to make sure you haven't started tracking changes before sorting the list.
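Since the method only needs Clear() and Add(), the same call works on any Collection<T>-derived type, for example an ObservableCollection (from System.Collections.ObjectModel). A minimal sketch, assuming a made-up Person class with a Name property:
var people = new ObservableCollection<Person>
{
    new Person { Name = "Zoe" },
    new Person { Name = "Adam" }
};

// Reorders the existing collection instance in place, so any bindings to it stay attached.
people.Sort(p => p.Name);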
Assume I have a collection of the form:
List<Member> myMembers = new List<Member>()
{
    new Member() { ID = 1, FirstName = "John",   LastName = "Doe" },
    new Member() { ID = 3, FirstName = "Allan",  LastName = "Jones" },
    new Member() { ID = 2, FirstName = "Martin", LastName = "Moe" },
    new Member() { ID = 4, FirstName = "Ludwig", LastName = "Issac" }
};
I can sort this list via FirstName by using:
myMembers.Sort((x, y) => x.FirstName.CompareTo(y.FirstName));
I would like to do this inside of a function, so that I can pass the desired search parameter. Something like:
public void sortCollection( parameter SearchTerm, List<Member> myCollection )
{
    myCollection.Sort((x, y) => x.SearchTerm.CompareTo(y.SearchTerm));
}
Obviously here, I cannot pass in the desired search field this way, but is it possible to do what I am asking?
You can create a generic extension method for that and pass a Func<T, TValue> delegate to it, which is used as the key selector for the built-in Sort method:
public static void SortExt<T, TValue>(this List<T> collection, Func<T, TValue> selector)
{
    var comparer = Comparer<TValue>.Default;
    collection.Sort((x, y) => comparer.Compare(selector(x), selector(y)));
}
Keep in mind that the selector should return the same field for both objects being compared.
Example usage:
myMembers.SortExt(member => member.FirstName);
If you only need this for the myMembers collection, the declaration can be simplified:
public static void SortExt<TValue>(this List<Member> members, Func<Member, TValue> selector)
{
    var comparer = Comparer<TValue>.Default;
    members.Sort((x, y) => comparer.Compare(selector(x), selector(y)));
}
Another option is to introduce a Comparison<T> delegate instance that holds the field-comparison logic and pass it to the Sort method. Here you can specify any custom comparison you want:
Comparison<Member> comparison = (x, y) => x.FirstName.CompareTo(y.FirstName);
myMembers.Sort(comparison);
Demo on dotnet fiddle
It seems to me that you can use a Func<> to get a dynamic order-by.
var result = sortCollection(p => p.ID, myMembers);
public static IEnumerable<Member> sortCollection(Func<Member, object> keySelector, List<Member> myCollection)
{
    return myCollection.OrderBy(keySelector);
}
Read the following post for a better understanding:
Dynamic Order By in Linq
Instead of writing one more wrapper around the LINQ OrderBy() or List<T>.Sort() method and calling that wherever you want to sort, use OrderBy() or Sort() directly.
Use Sort() like this:
// If you want to sort the collection by ID:
myMembers.Sort((x, y) => x.ID.CompareTo(y.ID));
// If you want to sort the collection by FirstName:
myMembers.Sort((x, y) => x.FirstName.CompareTo(y.FirstName));
...
To decide between OrderBy()/Sort() read below Q&A thread
C# Sort and OrderBy comparison
Why not simply use the built in function?
myMembers.OrderBy(m => m.FirstName)
If you really want to, you can write your own wrapper around this function, but the call would look more or less identical and would kind of be reinventing the wheel.
Something like this extension method:
public static IEnumerable<T> OrderByWrapper<T, TKey>(this IEnumerable<T> source, Func<T, TKey> keySelector)
{
    return source.OrderBy(keySelector);
}
you would call it with:
myMembers.OrderByWrapper(m => m.FirstName)
Wasn't really sure how to phrase the title.
What I am trying to achieve is a deep clone system for IEnumerable<T>s where T:ICloneable.
I have written the, as-yet untested, method below which I believe should work:
public static IEnumerable<T> DeepClone<T>(this IEnumerable<T> source) where T:ICloneable
{
return source.Select(s => (T) s.Clone());
}
However, this returns an IEnumerable<T> (as one would expect) and I am curious as to whether or not it is possible (without causing an unacceptable overhead) to return the base type of the IEnumerable<T> instead.
For example, running List<int>.DeepClone() would return a new, cloned List<int> and running int[].DeepClone() would return a new, cloned int[].
I know that I can quite easily just cast my IEnumerables after calling this method, but I'm hoping to be able to avoid this.
There is also the option of creating a whole load of overloads, one for each IEnumerable type, but I'd like to avoid that if possible.
You will need to build explicit methods for the concrete types you want to support (List<T>, arrays, etc.).
An example:
public static List<T> DeepClone<T>(this List<T> source) where T : ICloneable
{
    return source.Select(s => (T)s.Clone()).ToList();
}
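For arrays, a similar overload could look like this (my sketch following the same pattern, not from the original answer):
public static T[] DeepClone<T>(this T[] source) where T : ICloneable
{
    // Clone each element and materialize the result back into an array.
    return source.Select(s => (T)s.Clone()).ToArray();
}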
Alternatively, use an approach like:
public static IEnumerable<T> DeepClone<T>(this IEnumerable<T> source) where T : ICloneable
{
    var result = source.Select(s => (T)s.Clone());

    if (source is List<T>)
    {
        return result.ToList();
    }

    return result;
}
I'm rewriting an old C# project from scratch trying to figure out how it can be improved with the use of functional design. So far I've stuck with a couple of principles (except where the GUI is concerned):
Set every variable in every class as readonly and assign it a value only once.
Use immutable collections
Don't write code with side effects.
Now I'm trying to create a function which, given a folder, using yield return, enumerates a list of objects, one for each file in the given folder. Each object contains a unique ID, starting from firstAssignedID, and a filename.
Thing is, I'm not sure how to approach the problem at all. Is what I've just described even the right way of thinking about it? My code so far is a half-baked, incomplete mess. Is it possible to use a lambda here? Would that help, or is there a better way?
The FileObject class simply contains a string fileName and an int id, and the FileObject constructor simply and naively creates an instance given those two values.
public IEnumerable<FileObject> EnumerateImagesInPath(string folderPath, int firstAssignedID)
{
    foreach (string path in Directory.EnumerateFiles(folderPath))
    {
        yield return new FileObject(Path.GetFileName(path), /* how do I assign the ID here? */);
    }
}
The most functional way of doing what you want is this:
IEnumerable<FileObject> EnumerateImagesInPath(string path, int firstAssignedID) =>
    Enumerable.Zip(
        // Count is chosen so that start + count stays within Int32.MaxValue.
        Enumerable.Range(firstAssignedID, Int32.MaxValue - firstAssignedID),
        Directory.EnumerateFiles(path),
        FileObject.New);
With a FileObject type defined like so:
public class FileObject
{
    public readonly int Id;
    public readonly string Filename;

    FileObject(int id, string fileName)
    {
        Id = id;
        Filename = fileName;
    }

    public static FileObject New(int id, string fileName) =>
        new FileObject(id, fileName);
}
It doesn't use yield, but that doesn't matter because Enumerable.Range and Enumerable.Zip do, so it's a lazy function just like your original example.
I use Enumerable.Range to create a lazy sequence of integers from firstAssignedID up to Int32.MaxValue. This is zipped together with the enumerable of files in the directory, and FileObject.New(id, path) is invoked as part of the zip computation.
There is no in-place state modification like the accepted answer (firstAssignedID++), and the whole function can be represented as an expression.
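For illustration, a hypothetical call could look like this (the folder path and starting ID are made up):
foreach (var file in EnumerateImagesInPath(@"C:\Images", 100))
{
    // Each FileObject pairs a sequential ID with a file path.
    Console.WriteLine("{0}: {1}", file.Id, file.Filename);
}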
The other way of achieving your goal is to use the fold pattern. It is the most common way of aggregating state in functional programming. This is how to define it for IEnumerable
public static class EnumerableExt
{
    public static S Fold<S, T>(this IEnumerable<T> self, S state, Func<S, T, S> folder) =>
        self.Any()
            ? Fold(self.Skip(1), folder(state, self.First()), folder)
            : state;
}
You should be able to see that it's a recursive function: it runs a delegate (folder) on the head of the list if there is one, then uses the result as the new state when recursively calling Fold on the rest of the list. If it reaches the end of the list, the aggregate state is returned.
You may notice that the implementation of EnumerableExt.Fold can blow up the stack in C# (because of a lack of tail-call optimisation). So a better way of implementing the Fold function is to do so imperatively:
public static S Fold<S, T>(this IEnumerable<T> self, S state, Func<S, T, S> folder)
{
    foreach (var x in self)
    {
        state = folder(state, x);
    }
    return state;
}
There is a dual to Fold known as FoldBack (sometimes they're called 'fold left' and 'fold right'). FoldBack essentially aggregates from the tail of the list to the head, where Fold is from the head to the tail.
public static S FoldBack<S, T>(this IEnumerable<T> self, S state, Func<S, T, S> folder)
{
    foreach (var x in self.Reverse()) // Note the Reverse()
    {
        state = folder(state, x);
    }
    return state;
}
Fold is very flexible; for example, you could implement Count for an enumerable in terms of Fold like so:
static int Count<T>(this IEnumerable<T> self) =>
    self.Fold(0, (state, item) => state + 1);
Or Sum like so:
static int Sum(this IEnumerable<int> self) =>
    self.Fold(0, (state, item) => state + item);
Or most of the IEnumerable API!
public static bool Any<T>(this IEnumerable<T> self) =>
    self.Fold(false, (state, item) => true);

public static bool Exists<T>(this IEnumerable<T> self, Func<T, bool> predicate) =>
    self.Fold(false, (state, item) => state || predicate(item));

public static bool ForAll<T>(this IEnumerable<T> self, Func<T, bool> predicate) =>
    self.Fold(true, (state, item) => state && predicate(item));

public static IEnumerable<R> Select<T, R>(this IEnumerable<T> self, Func<T, R> map) =>
    self.FoldBack(Enumerable.Empty<R>(), (state, item) => map(item).Cons(state));

public static IEnumerable<T> Where<T>(this IEnumerable<T> self, Func<T, bool> predicate) =>
    self.FoldBack(Enumerable.Empty<T>(), (state, item) =>
        predicate(item)
            ? item.Cons(state)
            : state);
It's very powerful, and it allows state to be aggregated over a collection (which gives us the effect of firstAssignedID++ without an imperative in-place state modification).
Our FileObject example is a little more complex than Count or Sum, because we need to maintain two pieces of state: the aggregate ID and the resulting IEnumerable<FileObject>. So our state is a Tuple<int, IEnumerable<FileObject>>:
IEnumerable<FileObject> FoldImagesInPath(string folderPath, int firstAssignedID) =>
    Directory.EnumerateFiles(folderPath)
        .Fold(
            Tuple.Create(firstAssignedID, Enumerable.Empty<FileObject>()),
            (state, path) => Tuple.Create(state.Item1 + 1, FileObject.New(state.Item1, path).Cons(state.Item2)))
        .Item2;
You can make this even more declarative by providing some extension and static methods for Tuple<int, IEnumerable<FileObject>>:
public static class FileObjectsState
{
    // A tuple with a state ID of zero (Item1) and an empty FileObject enumerable (Item2)
    public static readonly Tuple<int, IEnumerable<FileObject>> Zero =
        Tuple.Create(0, Enumerable.Empty<FileObject>());

    // Returns a new tuple with the ID (Item1) set to the supplied argument
    public static Tuple<int, IEnumerable<FileObject>> SetId(this Tuple<int, IEnumerable<FileObject>> self, int id) =>
        Tuple.Create(id, self.Item2);

    // Returns the important part of the result, the enumerable of FileObjects
    public static IEnumerable<FileObject> Result(this Tuple<int, IEnumerable<FileObject>> self) =>
        self.Item2;

    // Adds a new path to the aggregate state and increases the ID by one.
    public static Tuple<int, IEnumerable<FileObject>> Add(this Tuple<int, IEnumerable<FileObject>> self, string path) =>
        Tuple.Create(self.Item1 + 1, FileObject.New(self.Item1, path).Cons(self.Item2));
}
The extension methods capture the operations on the aggregate state and make the resulting fold computation very clear:
IEnumerable<FileObject> FoldImagesInPath(string folderPath, int firstAssignedID) =>
    Directory.EnumerateFiles(folderPath)
        .Fold(
            FileObjectsState.Zero.SetId(firstAssignedID),
            FileObjectsState.Add)
        .Result();
Obviously using Fold for the use-case you provided is overkill, and that's why I used Zip instead. But the more general problem you were struggling with (functional aggregate state) is what Fold is for.
There is one more extension method I used in the example above: Cons:
public static IEnumerable<T> Cons<T>(this T x, IEnumerable<T> xs)
{
    yield return x;
    foreach (var a in xs)
    {
        yield return a;
    }
}
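As a tiny usage sketch (values made up), Cons prepends a single head element onto an existing sequence:
var tail = new[] { 2, 3, 4 };
var list = 1.Cons(tail); // enumerates as 1, 2, 3, 4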
More info on cons can be found here
If you want to learn more about using functional technique in C#, please check my library: language-ext. It will give you a ton of stuff that the C# BCL is missing.
Using yield seems unnecessary:
public IEnumerable<FileObject> EnumerateImagesInPath(string folderPath, int firstAssignedID)
{
    foreach (FileObject file in Directory.EnumerateFiles(folderPath)
        .Select(fileName => new FileObject(fileName, firstAssignedID++)))
    {
        yield return file;
    }
}
I would like something like this:
public int NumberStudent()
{
    int i = 0;
    if (db.Tbl_Student.ToList().Count() > 0)
        i = db.Tbl_Student.Max(d => d.id);
    return i;
}
However, I would like to use it on any table:
public int FindMaxId(string TableName)
{
    int i = 0;
    if ('db.'+TableName+'.ToList().Count() > 0')
        i = db.TableName.Max(d => d.id);
    return i;
}
I know it is wrong, but I'm not sure how to do it.
You can use the IEnumerable/IQueryable extension method DefaultIfEmpty for this.
var maxId = db.Tbl_Student.Select(x => x.Id).DefaultIfEmpty(0).Max();
In general, if you do Q.DefaultIfEmpty(D), it means:
If Q isn't empty, give me Q; otherwise, give me [ D ].
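A quick in-memory illustration of that rule, using made-up arrays rather than an EF table:
var some = new[] { 3, 7, 5 }.DefaultIfEmpty(0).Max(); // 7  -- Q isn't empty, so Q is used
var none = new int[0].DefaultIfEmpty(0).Max();        // 0  -- Q is empty, so [ 0 ] is used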
Below I have written a simple wrapper around the existing Max extension method that allows you to provide an empty source (the table you were talking about).
Instead of throwing an exception, it will just return the default value of zero.
Original
public static class Extensions
{
    public static int MaxId<TSource>(this IEnumerable<TSource> source, Func<TSource, int> selector)
    {
        if (source.Any())
        {
            return source.Max(selector);
        }
        return 0;
    }
}
This was my attempt, which as noted by Timothy is actually quite inferior. This is because the sequence will be enumerated twice. Once when calling Any to check if the source sequence has any elements, and again when calling Max.
Improved
public static class Extensions
{
    public static int MaxId<TSource>(this IQueryable<TSource> source, Func<TSource, int> selector)
    {
        return source.Select(selector).DefaultIfEmpty(0).Max();
    }
}
This implementation uses Timothy's approach. By calling DefaultIfEmpty, we are making use of deferred execution, and the sequence will only be enumerated when calling Max. In addition, we are now using IQueryable instead of IEnumerable, which means we don't have to enumerate the source before calling this method. As Scott said, should you need it, you can create an overload that uses IEnumerable too.
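Such an overload could look like this (my sketch following the same pattern; not part of the original answer):
public static int MaxId<TSource>(this IEnumerable<TSource> source, Func<TSource, int> selector)
{
    // Same trick: project to the ids, substitute 0 if the sequence is empty, then take the max.
    return source.Select(selector).DefaultIfEmpty(0).Max();
}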
In order to use the extension method, you just need to provide a delegate that returns the id of the source type, exactly the same way you would for Max.
public class Program
{
    static YourContext context = new YourContext();

    public static int MaxStudentId()
    {
        return context.Student.MaxId(s => s.Id);
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("Max student id: {0}", MaxStudentId());
    }
}

public static class Extensions
{
    public static int MaxId<TSource>(this IQueryable<TSource> source, Func<TSource, int> selector)
    {
        return source.Select(selector).DefaultIfEmpty(0).Max();
    }
}
db.Tbl_Student.Aggregate(0, (maxId, s) => Math.Max(maxId, s.Id))
or
db.Tbl_Student.Max(s => (int?)s.Id) ?? 0
(The cast to int? makes Max return null for an empty table instead of throwing, and the ?? operator then turns that null into 0.)
Just a little niggle about LINQ syntax. I'm flattening an IEnumerable<IEnumerable<T>> with SelectMany(x => x).
My problem is with the lambda expression x => x. It looks a bit ugly. Is there some static 'identity function' object that I can use instead of x => x? Something like SelectMany(IdentityFunction)?
Unless I misunderstand the question, the following seems to work fine for me in C# 4:
public static class Defines
{
    public static T Identity<T>(T pValue)
    {
        return pValue;
    }

    ...
}
You can then do the following in your example:
var result =
    enumerableOfEnumerables
        .SelectMany(Defines.Identity);
As well as use Defines.Identity anywhere you would use a lambda that looks like x => x.
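As a further illustration (my example, assuming words is some IEnumerable<string>), the same method group works anywhere a Func<T, T> is expected, e.g. as a key selector:
// Group duplicate strings together, using the string itself as the key.
var groups = words.GroupBy(Defines.Identity);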
Note: this answer was correct for C# 3, but at some point (C# 4? C# 5?) type inference improved so that the IdentityFunction method shown below can be used easily.
No, there isn't. It would have to be generic, to start with:
public static Func<T, T> IdentityFunction<T>()
{
    return x => x;
}
But then type inference wouldn't work, so you'd have to do:
SelectMany(Helpers.IdentityFunction<Foo>())
which is a lot uglier than x => x.
Another possibility is that you wrap this in an extension method:
public static IEnumerable<T> Flatten<T>
    (this IEnumerable<IEnumerable<T>> source)
{
    return source.SelectMany(x => x);
}
Unfortunately with generic variance the way it is, that may well fall foul of various cases in C# 3... it wouldn't be applicable to List<List<string>> for example. You could make it more generic:
public static IEnumerable<TElement> Flatten<TElement, TWrapper>
    (this IEnumerable<TWrapper> source) where TWrapper : IEnumerable<TElement>
{
    return source.SelectMany(x => x);
}
But again, you've then got type inference problems, I suspect...
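For instance (my illustration, not from the original answer), the compiler cannot infer TElement from the constraint alone, so the call ends up needing explicit type arguments:
var nested = new List<List<string>> { new List<string> { "a", "b" } };

// nested.Flatten() alone fails type inference here; both type arguments must be spelled out.
var flat = nested.Flatten<string, List<string>>();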
EDIT: To respond to the comments... yes, C# 4 makes this easier. Or rather, it makes the first Flatten method more useful than it is in C# 3. Here's an example which works in C# 4, but doesn't work in C# 3 because the compiler can't convert from List<List<string>> to IEnumerable<IEnumerable<string>>:
using System;
using System.Collections.Generic;
using System.Linq;

public static class Extensions
{
    public static IEnumerable<T> Flatten<T>
        (this IEnumerable<IEnumerable<T>> source)
    {
        return source.SelectMany(x => x);
    }
}

class Test
{
    static void Main()
    {
        List<List<string>> strings = new List<List<string>>
        {
            new List<string> { "x", "y", "z" },
            new List<string> { "0", "1", "2" }
        };

        foreach (string x in strings.Flatten())
        {
            Console.WriteLine(x);
        }
    }
}
With C# 6.0 and if you reference FSharp.Core you can do:
using static Microsoft.FSharp.Core.Operators;
And then you're free to do:
SelectMany(Identity)
With C# 6.0 things are getting better. We can define the identity function in the way suggested by #Sahuagin:
static class Functions
{
    public static T It<T>(T item) => item;
}
And then use it in SelectMany with the using static directive:
using static Functions;
...
var result = enumerableOfEnumerables.SelectMany(It);
I think it looks very laconic that way. I also find the identity function useful when building dictionaries:
class P
{
    P(int id, string name) // Sad. We are not getting primary constructors in C# 6.0
    {
        ID = id;
        Name = name;
    }

    int ID { get; }
    string Name { get; }

    static void Main(string[] args)
    {
        var items = new[] { new P(1, "Jack"), new P(2, "Jill"), new P(3, "Peter") };
        var dict = items.ToDictionary(x => x.ID, It);
    }
}
This may work in the way you want. I realize Jon posted a version of this solution, but he has a second type parameter which is only necessary if the resulting sequence type is different from the source sequence type.
public static IEnumerable<T> Flatten<T>(this IEnumerable<T> source)
    where T : IEnumerable<T>
{
    return source.SelectMany(item => item);
}
You can get close to what you need. Instead of a regular static function, consider an extension method for your IEnumerable<T>, as if the identity function is of the collection, not the type (a collection can generate the identity function of its items):
public static Func<T, T> IdentityFunction<T>(this IEnumerable<T> enumerable)
{
    return x => x;
}
with this, you don't have to specify the type again, and write:
IEnumerable<IEnumerable<T>> deepList = ... ;
var flat = deepList.SelectMany(deepList.IdentityFunction());
This does feel a bit abusive though, and I'd probably go with x=>x. Also, you cannot use it fluently (in chaining), so it will not always be useful.
I'd go with a simple class with a single static property and add as many as required down the line
internal class IdentityFunction<TSource>
{
    public static Func<TSource, TSource> Instance
    {
        get { return x => x; }
    }
}
SelectMany(IdentityFunction<Foo>.Instance)