I have a list of Func defining an ordering:
var ordering = new List<Func<Person, IComparable>>
{ x => x.Surname, x => x.FirstName };
I can order the results with something like...
people = people.OrderBy(ordering[0]).ThenBy(ordering[1]);
I'm trying to figure out how to do the above when the list can contain any number of sequential orderings. Is it possible?
people = people.OrderBy(ordering[0]).ThenBy(ordering[1]).ThenBy(ordering[2]);
is the same as
var orderedPeople = people.OrderBy(ordering[0]);
orderedPeople = orderedPeople.ThenBy(ordering[1]);
orderedPeople = orderedPeople.ThenBy(ordering[2]);
people = orderedPeople;
so you simply write a loop like this:
if (ordering.Count != 0)
{
    var orderedPeople = people.OrderBy(ordering[0]);
    for (int i = 1; i < ordering.Count; i++)
    {
        orderedPeople = orderedPeople.ThenBy(ordering[i]);
    }
    people = orderedPeople;
}
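If you need this in more than one place, the loop can be tucked into a small extension method. A minimal sketch - the name OrderByMany and the IReadOnlyList parameter are just illustrative:
using System;
using System.Collections.Generic;
using System.Linq;

public static class OrderingExtensions
{
    // Applies OrderBy to the first key selector and ThenBy to the rest;
    // returns the source unchanged when no key selectors are supplied.
    public static IEnumerable<T> OrderByMany<T>(
        this IEnumerable<T> source,
        IReadOnlyList<Func<T, IComparable>> ordering)
    {
        if (ordering == null || ordering.Count == 0)
            return source;

        var ordered = source.OrderBy(ordering[0]);
        for (int i = 1; i < ordering.Count; i++)
            ordered = ordered.ThenBy(ordering[i]);
        return ordered;
    }
}

// Usage: people = people.OrderByMany(ordering);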
As others have mentioned, you can use a loop to do this.
If you prefer, you can also use the Aggregate operator:
// Requires a non-empty ordering sequence.
var result2 = ordering.Skip(1)
.Aggregate(people.OrderBy(ordering.First()), Enumerable.ThenBy);
(or)
// Shorter and more "symmetric", but potentially less efficient.
// x => true should work because OrderBy is a stable sort.
var result = ordering.Aggregate(people.OrderBy(x => true), Enumerable.ThenBy);
You should be able to do something similar to this (keep the result in an IOrderedEnumerable so that ThenBy stays available):
var orderedPeople = people.OrderBy(ordering[0]);
foreach (var order in ordering.Skip(1))
{
    orderedPeople = orderedPeople.ThenBy(order);
}
people = orderedPeople;
Alternatively
IOrderedEnumerable<Person> ordered = null;
for (int i = 0; i < ordering.Count; i++)
{
    ordered = i == 0 ? people.OrderBy(ordering[i]) : ordered.ThenBy(ordering[i]);
}
people = ordered ?? people;
Remember that LINQ execution is deferred. You can build up the expression sequentially before accessing the results, doing something like:
var ordered = unordered.OrderBy(ordering.First());
foreach (var orderingItem in ordering.Skip(1))
{
ordered = ordered.ThenBy(orderingItem);
}
You might want to do this by dynamically building up your expression. More info here: Dynamic LINQ and Dynamic Lambda expressions?
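If the orderings arrive as property names rather than compiled delegates, you can also build the key selectors yourself with expression trees and feed them into the loop above. A rough sketch, assuming Person is the element type and the named property is of a comparable type (requires System.Linq.Expressions):
// Builds x => (IComparable)x.PropertyName for a given property name.
static Func<Person, IComparable> KeySelectorFor(string propertyName)
{
    var parameter = Expression.Parameter(typeof(Person), "x");
    var property = Expression.Property(parameter, propertyName);
    var body = Expression.Convert(property, typeof(IComparable));
    return Expression.Lambda<Func<Person, IComparable>>(body, parameter).Compile();
}

// Usage (property names here are just an example):
// var ordering = new List<Func<Person, IComparable>>
//     { KeySelectorFor("Surname"), KeySelectorFor("FirstName") };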
How can I improve the performance of the code block below? The distinctUniqueIDs list has more than 10000 records. I have tried parallel execution, but it's not yielding much difference. Is there any other scope for optimization?
for (int i = 0; i < distinctUniqueIDs.Count; i++)
{
    long distinctUniqueID = distinctUniqueIDs[i];
    try
    {
        if (!SmartAppConfigMapDict.ContainsKey(distinctUniqueID))
        {
            appconfigTemp = new SmartAppconfigRootElementMap();
            filteredList = validAppConfig.Where(m => m.UniqueID == distinctUniqueID);
            if (filteredList?.Count() > 0)
            {
                appconfigTemp.AppConfig = filteredList.First();
                appconfigTemp.RootElements = filteredList.ToDictionary(m =>
                    m.RootElementVersionID, m => m.RootElementName);
                appconfigTemp.RootElementPrimaryKeyMaps = filteredList.ToDictionary(m =>
                    m.RootElementVersionID, m => new RootElementPrimaryKeyMap
                    {
                        RootElementName = m.RootElementName,
                        RootElementVersionID = m.RootElementVersionID,
                        ChildElementDesc = m.ChildElementDesc,
                        ChildElementName = m.ChildElementName,
                        ChildElementPath = m.ChildElementPath,
                        ActualRootElementVersionID = m.ActualRootElementVersionID,
                        ActualRootElementName = m.ActualRootElementName
                    });
                SmartAppConfigMapDict.Add(distinctUniqueID, appconfigTemp);
                if (queryRootElementMap.ContainsKey(appconfigTemp.AppConfig.QueryID))
                {
                    queryRootElementMap[appconfigTemp.AppConfig.QueryID]
                        .Add(appconfigTemp);
                }
                else
                {
                    appConfigRootMapSubList = new List<SmartAppconfigRootElementMap>();
                    appConfigRootMapSubList.Add(appconfigTemp);
                    queryRootElementMap.Add(appconfigTemp.AppConfig.QueryID,
                        appConfigRootMapSubList);
                }
            }
        }
    }
    catch (Exception)
    {
        continue;
    }
}
The culprit should be
filteredList = validAppConfig.Where(m => m.UniqueID == distinctUniqueID);
First, LINQ Where performs a linear search with time complexity O(N). Using such methods inside loops is not recommended.
Second, due to LINQ deferred execution, the aforementioned linear search is executed several times - basically once for every operator applied to filteredList - filteredList?.Count(), filteredList.First() and the two filteredList.ToDictionary(…) calls.
What you can do is prepare a fast hash-based lookup data structure (for instance, a Lookup) in advance, outside the loop, and use it inside.
e.g. add something like this outside the loop:
var validAppConfigsByUniqueID = validAppConfig.ToLookup(m => m.UniqueID);
and then inside the loop replace
filteredList = validAppConfig.Where(m => m.UniqueID == distinctUniqueID);
if (filteredList?.Count() > 0)
with
var filteredList = validAppConfigsByUniqueID[distinctUniqueID];
if (filteredList.Any())
Note that the validAppConfigsByUniqueID[distinctUniqueID] operation has constant time complexity O(1). And the returned enumerable is already buffered, so iterating it several times is not an issue.
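Putting it together, the reworked loop might look roughly like this (a sketch reusing your variable names; the try/catch, RootElementPrimaryKeyMaps and queryRootElementMap bookkeeping stay exactly as in the original and are elided here):
var validAppConfigsByUniqueID = validAppConfig.ToLookup(m => m.UniqueID);

foreach (long distinctUniqueID in distinctUniqueIDs)
{
    if (SmartAppConfigMapDict.ContainsKey(distinctUniqueID))
        continue;

    // O(1) lookup; the grouping is already buffered, so it can be enumerated repeatedly.
    var filteredList = validAppConfigsByUniqueID[distinctUniqueID];
    if (!filteredList.Any())
        continue;

    var appconfigTemp = new SmartAppconfigRootElementMap();
    appconfigTemp.AppConfig = filteredList.First();
    appconfigTemp.RootElements = filteredList.ToDictionary(
        m => m.RootElementVersionID, m => m.RootElementName);
    // ... build RootElementPrimaryKeyMaps and update queryRootElementMap as before ...
    SmartAppConfigMapDict.Add(distinctUniqueID, appconfigTemp);
}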
I'm trying to loop through a list in a lambda expression.
Here is my code - I thought I could do something like this:
var table = shipment.ShipmentItems.ToList();
for (int i = 0; i < table.Count; i++)
{
    shippedItems = shipment.Order.OrderItems.Where(x => x.Id != table[0].OrderItemId);
}
I need to use each index in table: table[0].OrderItemId, table[1].OrderItemId, etc.
What's the best way to do this?
Cheers
Is the end result all of the order items that are not in the shipment items list? If so:
var orderItemIds = shipment.ShipmentItems.Select(si => si.OrderItemId).ToList();
var shippedItems = shipment.Order.OrderItems.Where(oi => !orderItemIds.Contains(oi.Id));
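If the shipment can contain many items, a HashSet makes each Contains check O(1) instead of O(n). A small variation on the same idea (assuming the ids are ints - use whatever the actual key type is):
var orderItemIds = new HashSet<int>(shipment.ShipmentItems.Select(si => si.OrderItemId));
var shippedItems = shipment.Order.OrderItems.Where(oi => !orderItemIds.Contains(oi.Id));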
I have a List<foo> that has two properties:
start_time and end_time
Assume that we have 100 records in that list. How can I check whether all intervals are of equal length? In other words, I'd like to know whether the difference between end and start times is the same for all foo objects.
Where (value = end_time-start_time).
Is it possible to achieve this in a single LINQ line?
Thanks, appreciate it.
Sure, you can write something like this:
var list = new List<foo>();
var areAllEqual = list.GroupBy(l => (l.end_time - l.start_time)).Count() == 1;
Alternatively, if you want to do more with that information:
var differences = list.GroupBy(l => (l.end_time - l.start_time)).ToList();
var numDifferences = differences.Count();
var areAllEqual = numDifferences == 1;
var firstDifference = differences.First().Key;
var allDifferences = differences.Select(g => g.Key);
Something like this should work.
var first = items.First();
var duration = first.end_time - first.start_time;
var allEqual = items.All(i => duration == (i.end_time - i.start_time));
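Note that First() throws if the list is empty. If that can happen, a variation along these lines (just a sketch) also covers the empty case:
var allEqual = items.Select(i => i.end_time - i.start_time)
                    .Distinct()
                    .Count() <= 1; // an empty list counts as "all equal" here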
I make one database trip to get a list of entities.
I then would like to separate this list into two lists: one for the entities that have not expired (based on a start and end date), which I call TopListings, and another for regular listings - those that have expired or have a null start/end date (the ones that are not TopListings).
I am not entirely sure which filtering is fastest for separating into two lists. Should I get the top list first, then filter the second list based on what is NOT in the top list?
var listings = ListingAdapter.GetMapListings(criteria);
var topListings = listings.Where(x => x.TopStartDate >= DateTime.Now && x.TopExpireDate >= DateTime.Now);
//I AM NOT SURE WHAT THIS LINE SHOULD BE
var regularListings = listings.Where(x => x.TopStartDate < DateTime.Now || x.TopExpireDate < DateTime.Now || x.TopStartDate == null || x.TopExpireDate == null );
Thank you
You might want to use a Lookup,
like this:
var lookup = listings.ToLookup(x => x.TopStartDate >= DateTime.Now && x.TopExpireDate >= DateTime.Now);
var topListings = lookup[true];
var regularListings = lookup[false]; // I assume everything not a topListing is a regular listing.
If this isn't enough, you could create an enum:
enum ListingType { Top, Regular, WhatEver };
...
var lookup = listings.ToLookup(determineListingType); // pass a method delegate that determines the listing type for an element (a sketch of such a method follows below).
...
var topListings = lookup[ListingType.Top];
var regularListings = lookup[ListingType.Regular];
var whateverListings = lookup[ListingType.WhatEver];
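determineListingType is whatever classification method you supply. A sketch mirroring the condition from the question (Listing stands in for the actual element type):
static ListingType determineListingType(Listing x)
{
    if (x.TopStartDate >= DateTime.Now && x.TopExpireDate >= DateTime.Now)
        return ListingType.Top;

    return ListingType.Regular;
}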
In this case, it would probably be easier to use a loop, instead of Linq operators:
var topListings = new List<Listing>();
var regularListings = new List<Listing>();
foreach (var x in listings)
{
    if (x.TopStartDate >= DateTime.Now && x.TopExpireDate >= DateTime.Now)
        topListings.Add(x);
    else
        regularListings.Add(x);
}
This is also more efficient, because the list is enumerated only once.
Take a look at the 'Except' operator to make things a little easier. You might have to add a .ToList() on topListings first though.
var regularListings = listings.Except(topListings);
http://blogs.msdn.com/b/charlie/archive/2008/07/12/the-linq-set-operators.aspx
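Put together, that might look roughly like this (note that Except uses the default equality comparer, so it compares references unless the listing type overrides equality):
var topListings = listings
    .Where(x => x.TopStartDate >= DateTime.Now && x.TopExpireDate >= DateTime.Now)
    .ToList();
var regularListings = listings.Except(topListings).ToList();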
Make use of a regular foreach loop; that's straightforward. You can iterate through the listings in one go and add items to the appropriate collections. If you are a LINQ kind of guy, the ForEach method is what you are looking for:
var topListings = new List<Listing>();
var regularListings = new List<Listing>();
listings.ForEach(item =>
{
    // The condition is inverted relative to the question: often only one of the
    // two comparisons has to be evaluated, instead of always both.
    if (item.TopStartDate < DateTime.Now || item.TopExpireDate < DateTime.Now)
        regularListings.Add(item);
    else
        topListings.Add(item);
});
Suppose I have a collection of strings.
List<string> lines = new List<string>(File.ReadAllLines("Test.txt"));
And a regular expression to search for a match in that collection:
Regex r = new Regex(#"some regular expression");
How can I get the indices of the elements matching the regex?
I have three ideas.
1st:
var indeces1 = lines.Where(l => r.IsMatch(l))
.Select(l => lines.IndexOf(l));
foreach (int i in indeces1)
{
Console.WriteLine(i);//Do the index-based task instead...
}
What I don't like about it is using IndexOf over the original collection. Am I wrong, and is it OK?
2nd:
var indeces2 = lines.Select((l, i) => new { Line = l, Index = i })
.Where(o => r.IsMatch(o.Line));
foreach (var o in indeces2)
{
Console.WriteLine(o.Index);//Do the index-based task instead...
}
It seems to be better than the 1st one, but is there a way to do the same thing without creating an anonymous object?
And the last one:
for (int i = 0; i < lines.Count; i++)
{
    if (r.IsMatch(lines[i]))
    {
        Console.WriteLine(i); // Do the index-based task instead...
    }
}
Actually I have this one working now. But as I do love LINQ, I wanted to have a LINQ way to do the same.
So is there a better LINQ way to do this simple task?
If you love LINQ, you can use Enumerable.Range for a simpler version:
var indexes = Enumerable.Range(0, lines.Count)
.Where(i => r.IsMatch(lines[i]));
Edit:
Instead of using File.ReadAllLines to get all lines into memory:
List<string> lines = new List<string>(File.ReadAllLines("Test.txt"));
If your file is large, you should consider using ReadLines, which uses deferred execution and is more memory-efficient:
var lines = File.ReadLines("C:\\Test.txt");
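Since ReadLines returns a plain IEnumerable<string> with no indexer, the index-aware overload of Select pairs well with it. Roughly:
var indexes = File.ReadLines("Test.txt")
                  .Select((line, i) => new { line, i })
                  .Where(x => r.IsMatch(x.line))
                  .Select(x => x.i);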
If you just need the indices, then why not append .Select(t => t.Index) at the end of your option 2 to get an IEnumerable of the indices only? The anonymous objects then never leave the query.
So your query would be:
var indeces2 = lines.Select((l, i) => new { Line = l, Index = i })
.Where(o => r.IsMatch(o.Line))
.Select(t => t.Index);
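If you want to drop the anonymous type entirely, one possible variant (a sketch) projects straight to the index and filters out a sentinel value:
var indeces2 = lines.Select((l, i) => r.IsMatch(l) ? i : -1)
                    .Where(i => i >= 0);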
In this case I would go with your humble iterator version:
for (int i = 0; i < lines.Count; i++)
{
    if (r.IsMatch(lines[i]))
    {
        Console.WriteLine(i); // Do the index-based task instead...
    }
}
For this scenario, LINQ does not really reduce the line count or increase readability in comparison to option 3, so I would go for the simplest version in this case.