I am trying to build up a Mongo query. I very nearly have it working in LINQ (95% there), but it looks like the LINQ provider is unfortunately missing the ability to do the equivalent of an Intersect(coll).Any(), or in Mongo parlance AnyIn().
I understand that if I use the fluent library with some builders I can build up a filter that will give me an AnyIn() - but what I am missing is how I could build up the whole query: the initial filter, the aggregation, the projection and the filter at the end.
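For reference, the kind of builder-based filter I mean is roughly this (just a sketch; it assumes SomeOtherCollectionTagsMaybe is a collection of strings and requiredTags is an in-memory list of values to intersect with):
// Builders-based AnyIn filter (renders as $in against the array field).
var requiredTags = new List<string> { "tag1", "tag2" };
var anyInFilter = Builders<Entity>.Filter.AnyIn(e => e.SomeOtherCollectionTagsMaybe, requiredTags);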
This is a very close approximation of my LINQ query. It works exactly as required, as long as I am not trying to compare collection membership in the filter:
public List<MyResult> ListMyResults(Expression<Func<MyResult, bool>> filter, int skip, int take)
{
    // Limit to tenants on the main Entity being queried
    var ents = ApplyDataRestrictions(db.GetCollection<Entity>("entities").AsQueryable());
    var children = db.GetCollection<Child>("children").AsQueryable();
    var chickens = db.GetCollection<Chicken>("chickens").AsQueryable();

    // Join a couple of collections for their counts
    var result = from ent in ents
                 join c in children on ent.Id equals c.EntityId into kids
                 join ck in chickens on ent.Id equals ck.EntityId into birds
                 // Project the results into a MyResult
                 select new MyResult
                 {
                     Id = ent.Id,
                     AProperty = ent.AProperty,
                     SomeCollection = ent.SomeCollection,
                     SomeOtherCollectionTagsMaybe = ent.SomeOtherCollectionTagsMaybe,
                     TotalKids = kids.Count(),
                     TotalChickens = birds.Count()
                 };

    if (filter != null)
    {
        // Apply the filter that was built up from criteria on the MyResult data shape
        result = result.Where(filter);
    }

    result = result.Skip(skip).Take(take);
    return result.ToList();
}
public IQueryable<Entity> ApplyDataRestrictions(IQueryable<Entity> query)
{
    ... restrict results to only those with my tenant id ...
}
Actually, it turns out that the same behaviour can be achieved without the query builders and AnyIn(). With the IQueryable interface you can use .Any() with Contains() as the inner predicate:
var inMemoryList = new List<int>() { 3, 4, 5 };
var q = from doc in Col.AsQueryable()
        where doc.Collection.Any(x => inMemoryList.Contains(x))
        select doc;
or
var q2 = Col.AsQueryable().Where(x => x.Collection.Any(y => inMemoryList.Contains(y)));
I have a list which I get from a database. The structure looks like this (I'm representing it as JSON because it's easier for me to visualise):
{ id: 1, value: "a" },
{ id: 1, value: "b" },
{ id: 1, value: "c" },
{ id: 2, value: "t" }
As you can see, I have 2 unique IDs: 1 and 2. I want to group by the ID. The end result I'd like is:
{ id: 1, values: ["a", "b", "c"] },
{ id: 2, values: ["t"] }
Is this possible with LINQ? At the moment I have a massive, complex foreach which first sorts the list (by ID) and then detects whether the ID has already been added, etc., but this monstrous loop made me realise I'm doing it wrong and, honestly, it's too embarrassing to share.
You can group by the item Id and have the resulting type be a Dictionary<int, List<string>>
var result = myList.GroupBy(item => item.Id)
                   .ToDictionary(g => g.Key,
                                 g => g.Select(i => i.Value).ToList());
You can either use the GroupBy method on IEnumerable to create IGrouping objects that contain a key and the grouped objects, or you can use ToLookup to create exactly what you want as a result:
yourList.ToLookup(m => m.id, m => m.value);
This creates a hashed collection of keys with their values.
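A minimal usage sketch of reading from the result (assuming the list elements expose the id and value members used above):
var lookup = yourList.ToLookup(m => m.id, m => m.value);
// Indexing by key returns the grouped values; a missing key yields an empty sequence.
var valuesForId1 = lookup[1].ToList();   // ["a", "b", "c"]
var valuesForId2 = lookup[2].ToList();   // ["t"]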
For more information, please see the post below:
https://www.c-sharpcorner.com/UploadFile/d3e4b1/practical-usage-of-using-tolookup-method-in-linq-C-Sharp/
Just a little more detail to emphasize the difference between the ToLookup approach and the GroupBy approach:
// class definition
public class Item
{
    public long Id { get; set; }
    public string Value { get; set; }
}

// create your list
var items = new List<Item>
{
    new Item { Id = 0, Value = "value0a" },
    new Item { Id = 0, Value = "value0b" },
    new Item { Id = 1, Value = "value1" }
};

// this approach results in a List<string> (a collection of the values)
var lookup = items.ToLookup(i => i.Id, i => i.Value);
var groupOfValues = lookup[0].ToList();

// this approach results in a List<Item> (a collection of the objects)
var itemsGroupedById = items.GroupBy(i => i.Id).ToList();
var groupOfItems = itemsGroupedById[0].ToList();
So, if you want to work with only the values after grouping, take the first approach; if you want to work with the objects, take the second. These are just a couple of example implementations; there are plenty of ways to accomplish your goal.
First convert to a Lookup then select into a list, like so:
var groups = list
.ToLookup
(
item => item.ID,
item => item.Value
)
.Select
(
item => new
{
ID = item.Key,
Values = item.ToList()
}
)
.ToList();
The resulting JSON looks like this:
[{"ID":1,"Values":["a","b","c"]},{"ID":2,"Values":["t"]}]
Link to working example on DotNetFiddle.
Given a list of ids, I can query all relevant rows with:
context.Table.Where(q => listOfIds.Contains(q.Id));
But how do you achieve the same functionality when the Table has a composite key?
This is a nasty problem for which I don't know any elegant solution.
Suppose you have these key combinations, and you only want to select the marked ones (*).
Id1  Id2
---  ---
 1    2   *
 1    3
 1    6
 2    2   *
 2    3   *
...  (many more)
How do we do this in a way that Entity Framework is happy with? Let's look at some possible solutions and see if they're any good.
Solution 1: Join (or Contains) with pairs
The best solution would be to create a list of the pairs you want, for instance as Tuples (a List<Tuple<int,int>>), and join the database data with this list:
from entity in db.Table // db is a DbContext
join pair in Tuples on new { entity.Id1, entity.Id2 }
    equals new { Id1 = pair.Item1, Id2 = pair.Item2 }
select entity
In LINQ to objects this would be perfect, but, too bad, EF will throw an exception like
Unable to create a constant value of type 'System.Tuple`2 (...) Only primitive types or enumeration types are supported in this context.
which is a rather clumsy way of telling you that it can't translate this statement into SQL, because Tuples is not a list of primitive values (like int or string). For the same reason, a similar statement using Contains (or any other LINQ operator) would fail.
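For illustration, a Contains-based attempt hits the same wall; this is only a sketch of the failing shape, not working EF code:
// The pairs we want, as tuples - EF6 cannot translate these into SQL.
var wanted = new List<Tuple<int, int>>
{
    Tuple.Create(1, 2),
    Tuple.Create(2, 2),
    Tuple.Create(2, 3)
};
// Throws a NotSupportedException when the query executes, for the same reason as the join above.
var entities = db.Table
    .Where(e => wanted.Contains(Tuple.Create(e.Id1, e.Id2)))
    .ToList();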
Solution 2: In-memory
Of course we could turn the problem into simple LINQ to objects like so:
from entity in db.Table.AsEnumerable() // fetch db.Table into memory first
join pair in Tuples on new { entity.Id1, entity.Id2 }
    equals new { Id1 = pair.Item1, Id2 = pair.Item2 }
select entity
Needless to say, this is not a good solution: db.Table could contain millions of records.
Solution 3: Two Contains statements (incorrect)
So let's offer EF two lists of primitive values: [1,2] for Id1 and [2,3] for Id2. We don't want to use a join, so let's use Contains:
from entity in db.Table
where ids1.Contains(entity.Id1) && ids2.Contains(entity.Id2)
select entity
But now the results also contain entity {1,3}! Well, of course: this entity perfectly matches the two predicates. Still, let's keep in mind that we're getting closer. Instead of pulling millions of entities into memory, we now only get four of them.
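A quick in-memory illustration of the over-selection (plain LINQ to Objects, just to show the effect):
var ids1 = new[] { 1, 2 };   // wanted Id1 values
var ids2 = new[] { 2, 3 };   // wanted Id2 values
var rows = new[] { (Id1: 1, Id2: 2), (Id1: 1, Id2: 3), (Id1: 1, Id2: 6), (Id1: 2, Id2: 2), (Id1: 2, Id2: 3) };
var selected = rows
    .Where(r => ids1.Contains(r.Id1) && ids2.Contains(r.Id2))
    .ToList();
// selected: (1,2), (1,3), (2,2), (2,3) - the unwanted (1,3) matches both predicates.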
Solution 4: One Contains with computed values
Solution 3 failed because the two separate Contains statements don't filter on the combinations of their values. What if we create a list of combinations first and try to match those combinations? We know from solution 1 that this list should contain primitive values. For instance:
var computed = ids1.Zip(ids2, (i1,i2) => i1 * i2); // [2,6]
and the LINQ statement:
from entity in db.Table
where computed.Contains(entity.Id1 * entity.Id2)
select entity
There are some problems with this approach. First, you'll see that it also returns entity {1,6}: the combination function (a*b) does not produce values that uniquely identify a pair in the database. Alternatively, we could create a list of strings like ["Id1=1,Id2=2","Id1=2,Id2=3"] and do
from entity in db.Table
where computed.Contains("Id1=" + entity.Id1 + "," + "Id2=" + entity.Id2)
select entity
(This would work in EF6, but not in earlier versions.)
This is getting pretty messy. But a more important problem is that this solution is not sargable, meaning that it bypasses any database indexes on Id1 and Id2 that could otherwise have been used. It will perform very, very poorly.
Solution 5: Best of 2 and 3
So the most viable solution I can think of is a combination of Contains and a join in memory: first do the Contains statement as in solution 3. Remember, it got us very close to what we wanted. Then refine the query result by joining it, as an in-memory list, with the exact pairs:
var rawSelection = from entity in db.Table
                   where ids1.Contains(entity.Id1) && ids2.Contains(entity.Id2)
                   select entity;

var refined = from entity in rawSelection.AsEnumerable()
              join pair in Tuples on new { entity.Id1, entity.Id2 }
                  equals new { Id1 = pair.Item1, Id2 = pair.Item2 }
              select entity;
It's not elegant, and maybe just as messy, but so far it's the only scalable¹ solution to this problem that I've found and applied in my own code.
Solution 6: Build a query with OR clauses
Using a predicate builder like LinqKit or alternatives, you can build a query that contains an OR clause for each element in the list of combinations. This could be a viable option for really short lists; with a couple of hundred elements, the query will start performing very poorly. So I don't consider this a good solution unless you can be 100% sure that the number of elements will always be small. One elaboration of this option can be found here.
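As a rough sketch of what such a predicate-built query could look like with LinqKit (MyEntity is a placeholder for the entity type with Id1/Id2 used above, and db.Table is assumed to be its DbSet):
using LinqKit; // for PredicateBuilder and AsExpandable

var pairs = new List<Tuple<int, int>> { Tuple.Create(1, 2), Tuple.Create(2, 2), Tuple.Create(2, 3) };

var predicate = PredicateBuilder.New<MyEntity>(false);   // start from "false", then OR each pair
foreach (var pair in pairs)
{
    var id1 = pair.Item1;   // capture locals so the provider parameterizes them
    var id2 = pair.Item2;
    predicate = predicate.Or(e => e.Id1 == id1 && e.Id2 == id2);
}

var entities = db.Table.AsExpandable().Where(predicate).ToList();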
Solution 7: Unions
There's also a solution using UNIONs that I posted later here.
¹ As far as the Contains statement is scalable: Scalable Contains method for LINQ against a SQL backend
Solution for Entity Framework Core with SQL Server
🎉 NEW! QueryableValues EF6 Edition has arrived!
The following solution makes use of QueryableValues. This is a library that I wrote primarily to solve the problem of query plan cache pollution in SQL Server caused by queries that compose local values using the Contains LINQ method. It also allows you to compose values of complex types in your queries in a performant way, which achieves what's being asked in this question.
First you will need to install and set up the library; after doing that, you can use either of the following patterns to query your entities by a composite key:
// Required to make the AsQueryableValues method available on the DbContext.
using BlazarTech.QueryableValues;

// Local data that will be used to query by the composite key
// of the fictitious OrderProduct table.
var values = new[]
{
    new { OrderId = 1, ProductId = 10 },
    new { OrderId = 2, ProductId = 20 },
    new { OrderId = 3, ProductId = 30 }
};

// Optional helper variable (needed by the second example due to CS0854).
var queryableValues = dbContext.AsQueryableValues(values);

// Example 1 - Using a Join (preferred).
var example1Results = dbContext
    .OrderProduct
    .Join(
        queryableValues,
        e => new { e.OrderId, e.ProductId },
        v => new { v.OrderId, v.ProductId },
        (e, v) => e
    )
    .ToList();

// Example 2 - Using Any (similar behavior as Contains).
var example2Results = dbContext
    .OrderProduct
    .Where(e => queryableValues
        .Where(v =>
            v.OrderId == e.OrderId &&
            v.ProductId == e.ProductId
        )
        .Any()
    )
    .ToList();
Useful Links
Nuget Package
GitHub Repository
Benchmarks
QueryableValues is distributed under the MIT license.
You can use Union for each composite primary key:
var compositeKeys = new List<CK>
{
    new CK { id1 = 1, id2 = 2 },
    new CK { id1 = 1, id2 = 3 },
    new CK { id1 = 2, id2 = 4 }
};

IQueryable<CK> query = null;
foreach (var ck in compositeKeys)
{
    var temp = context.Table.Where(x => x.id1 == ck.id1 && x.id2 == ck.id2);
    query = query == null ? temp : query.Union(temp);
}

var result = query.ToList();
You can create a collection of strings containing both keys, like this (I am assuming that your keys are of type int):
var id1id2Strings = listOfIds.Select(p => p.Id1 + "-" + p.Id2);
Then you can just use Contains on your db:
using (dbEntities context = new dbEntities())
{
    var rec = await context.Table1
        .Where(entity => id1id2Strings.Contains(entity.Id1 + "-" + entity.Id2))
        .ToListAsync();
    return rec;
}
You need a set of objects representing the keys you want to query:
class Key
{
    public int Id1 { get; set; }
    public int Id2 { get; set; }
}
If you have two lists and you simply check that each value appears in its respective list, then you are getting the Cartesian product of the lists, which is likely not what you want. Instead, you need to query for the specific combinations required:
List<Key> keys = // get keys;
context.Table.Where(q => keys.Any(k => k.Id1 == q.Id1 && k.Id2 == q.Id2));
I'm not completely sure that this is a valid use of Entity Framework; you may have issues with sending the Key type to the database. If that happens, you can get creative:
var composites = keys.Select(k => p1 * k.Id1 + p2 * k.Id2).ToList();
context.Table.Where(q => composites.Contains(p1 * q.Id1 + p2 * q.Id2));
You can create an isomorphic function (prime numbers are good for this), something like a hash code, which you can use to compare the pairs of values. As long as the multiplicative factors are co-prime and the key values stay within a bounded range, this pattern is isomorphic (one-to-one): the result of p1*Id1 + p2*Id2 will uniquely identify the values of Id1 and Id2, provided the primes are correctly chosen.
But then you end up in a situation where you're implementing complex concepts that someone will have to support. It's probably better to write a stored procedure that takes the valid key objects.
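As a hypothetical sketch of that stored-procedure route, assuming a SQL Server user-defined table type dbo.KeyPairType (Id1 int, Id2 int), a procedure dbo.GetByCompositeKeys that joins the parameter to the table, and connectionString being whatever connection string you use:
using System.Data;
using System.Data.SqlClient;

// Put the composite keys into a DataTable matching dbo.KeyPairType.
var table = new DataTable();
table.Columns.Add("Id1", typeof(int));
table.Columns.Add("Id2", typeof(int));
foreach (var key in keys)
{
    table.Rows.Add(key.Id1, key.Id2);
}

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetByCompositeKeys", connection) { CommandType = CommandType.StoredProcedure })
{
    // Table-valued parameter carrying the composite keys.
    var parameter = command.Parameters.AddWithValue("@Keys", table);
    parameter.SqlDbType = SqlDbType.Structured;
    parameter.TypeName = "dbo.KeyPairType";

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // Materialize rows as needed.
    }
}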
I ran into this issue as well and needed a solution that both avoided a table scan and provided exact matches.
This can be achieved by combining Solution 3 and Solution 4 from Gert Arnold's answer:
var firstIds = results.Select(r => r.FirstId);
var secondIds = results.Select(r => r.SecondId);
var compositeIds = results.Select(r => $"{r.FirstId}:{r.SecondId}");

var query = from e in dbContext.Table
            // first check the indexes to avoid a table scan
            where firstIds.Contains(e.FirstId) && secondIds.Contains(e.SecondId)
            // then compare the composite id for an exact match
            // ToString() must be called unless using EF Core 5+
            where compositeIds.Contains(e.FirstId.ToString() + ":" + e.SecondId.ToString())
            select e;

var entities = await query.ToListAsync();
For EF Core I use a slightly modified version of the bucketized IN method by EricEJ to map composite keys as tuples. It performs pretty well for small sets of data.
Sample usage
List<(int Id, int Id2)> listOfIds = ...
context.Table.In(listOfIds, q => q.Id, q => q.Id2);
Implementation
public static IQueryable<TQuery> In<TKey1, TKey2, TQuery>(
    this IQueryable<TQuery> queryable,
    IEnumerable<(TKey1, TKey2)> values,
    Expression<Func<TQuery, TKey1>> key1Selector,
    Expression<Func<TQuery, TKey2>> key2Selector)
{
    if (values is null)
    {
        throw new ArgumentNullException(nameof(values));
    }

    if (key1Selector is null)
    {
        throw new ArgumentNullException(nameof(key1Selector));
    }

    if (key2Selector is null)
    {
        throw new ArgumentNullException(nameof(key2Selector));
    }

    if (!values.Any())
    {
        return queryable.Take(0);
    }

    var distinctValues = Bucketize(values);

    if (distinctValues.Length > 1024)
    {
        throw new ArgumentException("Too many parameters for SQL Server, reduce the number of parameters", nameof(values));
    }

    var predicates = distinctValues
        .Select(v =>
        {
            // Create an expression that captures the variable so EF can turn this into a parameterized SQL query
            Expression<Func<TKey1>> value1AsExpression = () => v.Item1;
            Expression<Func<TKey2>> value2AsExpression = () => v.Item2;

            var firstEqual = Expression.Equal(key1Selector.Body, value1AsExpression.Body);

            var visitor = new ReplaceParameterVisitor(key2Selector.Parameters[0], key1Selector.Parameters[0]);
            var secondEqual = Expression.Equal(visitor.Visit(key2Selector.Body), value2AsExpression.Body);

            return Expression.AndAlso(firstEqual, secondEqual);
        })
        .ToList();

    while (predicates.Count > 1)
    {
        predicates = PairWise(predicates).Select(p => Expression.OrElse(p.Item1, p.Item2)).ToList();
    }

    var body = predicates.Single();
    var clause = Expression.Lambda<Func<TQuery, bool>>(body, key1Selector.Parameters[0]);

    return queryable.Where(clause);
}
class ReplaceParameterVisitor : ExpressionVisitor
{
    private readonly ParameterExpression _oldParameter;
    private readonly ParameterExpression _newParameter;

    public ReplaceParameterVisitor(ParameterExpression oldParameter, ParameterExpression newParameter)
    {
        _oldParameter = oldParameter;
        _newParameter = newParameter;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        if (ReferenceEquals(node, _oldParameter))
            return _newParameter;

        return base.VisitParameter(node);
    }
}
/// <summary>
/// Break a list of items into pairs of tuples.
/// </summary>
private static IEnumerable<(T, T)> PairWise<T>(this IEnumerable<T> source)
{
    var sourceEnumerator = source.GetEnumerator();
    while (sourceEnumerator.MoveNext())
    {
        var a = sourceEnumerator.Current;
        sourceEnumerator.MoveNext();
        var b = sourceEnumerator.Current;

        yield return (a, b);
    }
}
private static TKey[] Bucketize<TKey>(IEnumerable<TKey> values)
{
    var distinctValueList = values.Distinct().ToList();

    // Calculate the bucket size as 1, 2, 4, 8, 16, 32, 64, ...
    var bucket = 1;
    while (distinctValueList.Count > bucket)
    {
        bucket *= 2;
    }

    // Fill all slots.
    var lastValue = distinctValueList.Last();
    for (var index = distinctValueList.Count; index < bucket; index++)
    {
        distinctValueList.Add(lastValue);
    }

    var distinctValues = distinctValueList.ToArray();
    return distinctValues;
}
In the absence of a general solution, I think there are two things to consider:
Avoid multi-column primary keys (this will make unit testing easier too).
But if you have to, chances are that one of the key columns will reduce the query result size to O(n), where n is the size of the ideal query result. From there, it's Solution 5 from Gert Arnold above.
For example, the problem that led me to this question was querying order lines, where the key is order id + order line number + order type, and in the source the order type was implicit. That is, the order type was a constant, the order ID would reduce the query set to the order lines of the relevant orders, and there would usually be 5 or fewer of these per order.
To rephrase: if you have a composite key, chances are that one of its columns has very few duplicates. Apply Solution 5 from above with that.
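A hypothetical sketch of that, using the order-line example above (it assumes an OrderLine entity with OrderId, LineNumber and OrderType, an orderType constant, and wantedKeys as an in-memory list of key objects exposing OrderId and LineNumber with matching types):
// 1. Contains on the selective key part keeps the server-side result small; the order type is a constant here.
var orderIds = wantedKeys.Select(k => k.OrderId).Distinct().ToList();
var candidates = db.OrderLines
    .Where(l => l.OrderType == orderType && orderIds.Contains(l.OrderId))
    .AsEnumerable();

// 2. Refine in memory to the exact (OrderId, LineNumber) combinations, as in Solution 5.
var result = candidates
    .Join(wantedKeys,
          l => new { l.OrderId, l.LineNumber },
          k => new { k.OrderId, k.LineNumber },
          (l, k) => l)
    .ToList();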
I tried this solution and it worked for me; the output query was perfect, without any parameters:
using LinqKit; // nuget

var customField_Ids = customFields?.Select(t => new CustomFieldKey { Id = t.Id, TicketId = t.TicketId }).ToList();

var uniqueIds1 = customField_Ids.Select(cf => cf.Id).Distinct().ToList();
var uniqueIds2 = customField_Ids.Select(cf => cf.TicketId).Distinct().ToList();

var predicate = PredicateBuilder.New<CustomFieldKey>(false); // LinqKit
var lambdas = new List<Expression<Func<CustomFieldKey, bool>>>();

foreach (var cfKey in customField_Ids)
{
    var id = uniqueIds1.Where(uid => uid == cfKey.Id).Take(1).ToList();
    var ticketId = uniqueIds2.Where(uid => uid == cfKey.TicketId).Take(1).ToList();
    lambdas.Add(t => id.Contains(t.Id) && ticketId.Contains(t.TicketId));
}

predicate = AggregateExtensions.AggregateBalanced(lambdas.ToArray(), (expr1, expr2) =>
{
    var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
    return Expression.Lambda<Func<CustomFieldKey, bool>>(
        Expression.OrElse(expr1.Body, invokedExpr), expr1.Parameters);
});

var modifiedCustomField_Ids = repository.GetTable<CustomFieldLocal>()
    .Select(cf => new CustomFieldKey { Id = cf.Id, TicketId = cf.TicketId })
    .Where(predicate)
    .ToArray();
I ended up writing a helper for this problem that relies on System.Linq.Dynamic.Core.
It's a lot of code and I don't have time to refactor it at the moment, but input and suggestions are appreciated.
public static IQueryable<TEntity> WhereIsOneOf<TEntity, TSource>(this IQueryable<TEntity> dbSet,
    IEnumerable<TSource> source,
    Expression<Func<TEntity, TSource, bool>> predicate) where TEntity : class
{
    var (where, pDict) = GetEntityPredicate(predicate, source);
    return dbSet.Where(where, pDict);

    (string WhereStr, IDictionary<string, object> paramDict) GetEntityPredicate(Expression<Func<TEntity, TSource, bool>> func, IEnumerable<TSource> sourceItems)
    {
        var firstP = func.Parameters[0];
        var binaryExpressions = RecurseBinaryExpressions((BinaryExpression)func.Body);

        var i = 0;
        var paramDict = new Dictionary<string, object>();
        var res = new List<string>();

        foreach (var sourceItem in sourceItems)
        {
            var innerRes = new List<string>();
            foreach (var bExp in binaryExpressions)
            {
                var emp = ToEMemberPredicate(firstP, bExp);
                var val = emp.GetKeyValue(sourceItem);
                var pName = $"#{i++}";
                paramDict.Add(pName, val);

                var str = $"{emp.EntityMemberName} {emp.SQLOperator} {pName}";
                innerRes.Add(str);
            }

            res.Add("(" + string.Join(" and ", innerRes) + ")");
        }

        var sRes = string.Join(" || ", res);
        return (sRes, paramDict);
    }

    EMemberPredicate ToEMemberPredicate(ParameterExpression firstP, BinaryExpression bExp)
    {
        var lMember = (MemberExpression)bExp.Left;
        var rMember = (MemberExpression)bExp.Right;

        var entityMember = lMember.Expression == firstP ? lMember : rMember;
        var keyMember = entityMember == lMember ? rMember : lMember;

        return new EMemberPredicate(entityMember, keyMember, bExp.NodeType);
    }

    List<BinaryExpression> RecurseBinaryExpressions(BinaryExpression e, List<BinaryExpression> runningList = null)
    {
        if (runningList == null) runningList = new List<BinaryExpression>();

        if (e.Left is BinaryExpression lbe)
        {
            var additions = RecurseBinaryExpressions(lbe);
            runningList.AddRange(additions);
        }

        if (e.Right is BinaryExpression rbe)
        {
            var additions = RecurseBinaryExpressions(rbe);
            runningList.AddRange(additions);
        }

        if (e.Left is MemberExpression && e.Right is MemberExpression)
        {
            runningList.Add(e);
        }

        return runningList;
    }
}
Helper class:
public class EMemberPredicate
{
    public readonly MemberExpression EntityMember;
    public readonly MemberExpression KeyMember;
    public readonly PropertyInfo KeyMemberPropInfo;
    public readonly string EntityMemberName;
    public readonly string SQLOperator;

    public EMemberPredicate(MemberExpression entityMember, MemberExpression keyMember, ExpressionType eType)
    {
        EntityMember = entityMember;
        KeyMember = keyMember;
        KeyMemberPropInfo = (PropertyInfo)keyMember.Member;
        EntityMemberName = entityMember.Member.Name;
        SQLOperator = BinaryExpressionToMSSQLOperator(eType);
    }

    public object GetKeyValue(object o)
    {
        return KeyMemberPropInfo.GetValue(o, null);
    }

    private string BinaryExpressionToMSSQLOperator(ExpressionType eType)
    {
        switch (eType)
        {
            case ExpressionType.Equal:
                return "==";
            case ExpressionType.GreaterThan:
                return ">";
            case ExpressionType.GreaterThanOrEqual:
                return ">=";
            case ExpressionType.LessThan:
                return "<";
            case ExpressionType.LessThanOrEqual:
                return "<=";
            case ExpressionType.NotEqual:
                return "<>";
            default:
                throw new ArgumentException($"{eType} is not a handled Expression Type.");
        }
    }
}
Use it like so:
// This can be a Tuple or whatever. If a Tuple, then y below would be .Item1, etc.
// This data structure is up to you, but it is what I use.
[FromBody] List<CustomerAddressPk> cKeys

var res = await dbCtx.CustomerAddress
    .WhereIsOneOf(cKeys, (x, y) => y.CustomerId == x.CustomerId
                                && x.AddressId == y.AddressId)
    .ToListAsync();
Hope this helps others.
In the case of a composite key, you can use another id list and add a condition for it in your code:
context.Table.Where(q => listOfIds.Contains(q.Id) && listOfIds2.Contains(q.Id2));
Or you can use another trick: create a list of your keys by adding them together:
listOfIds.Add(id + id1 + ......);
context.Table.Where(q => listOfIds.Contains(q.Id + q.id1 + .......));
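Note that plain numeric addition can collide (for example, the key pairs (1, 2) and (2, 1) both sum to 3), so a delimited string version of the same trick is safer; a small sketch, where sourceKeys is a hypothetical list of key objects with the same Id/id1 members as the entity:
var listOfKeys = sourceKeys.Select(k => k.Id + "|" + k.id1).ToList();
context.Table.Where(q => listOfKeys.Contains(q.Id + "|" + q.id1));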
I tried this on EF Core 5.0.3 with the Postgres provider.
context.Table
.Select(entity => new
{
Entity = entity,
CompositeKey = entity.Id1 + entity.Id2,
})
.Where(x => compositeKeys.Contains(x.CompositeKey))
.Select(x => x.Entity);
This produced SQL like:
SELECT *
FROM table AS t
WHERE t.Id1 + t.Id2 IN (@__compositeKeys_0)
Caveats
this should only be used where the combination of Id1 and Id2 will always produce a unique result (e.g., they're both UUIDs)
this cannot use indexes, though you could save the composite key to the db with an index
I have around 200K records in a list, and I'm looping through them to form another collection. This works fine on my local 64-bit Windows 7 machine, but when I move it to Windows Server 2008 R2 it takes a lot of time; the difference is almost an hour!
I tried looking at compiled queries and am still figuring them out.
For various reasons, we can't do a database join and retrieve the child values.
Here is the code:
// listOfDetails is another collection
List<SomeDetails> myDetails = null;

foreach (CustomerDetails myItem in customerDetails)
{
    var myList = from ss in listOfDetails
                 where ss.CustomerNumber == myItem.CustomerNum
                       && ss.ID == myItem.ID
                 select ss;

    myDetails = (List<SomeDetails>)(myList.ToList());
    myItem.SomeDetails = myDetails;
}
I would do this differently:
var lookup = listOfDetails.ToLookup(x => new { x.CustomerNumber, x.ID });

foreach (var item in customerDetails)
{
    var key = new { CustomerNumber = item.CustomerNum, item.ID };
    item.SomeDetails = lookup[key].ToList();
}
The big benefit of this code is that it only has to loop through listOfDetails once to build the lookup, which is nothing more than a hash map. After that, we just get the values by key, which is very fast, as that is what hash maps are built for.
I don't know why you're seeing the difference in performance, but you should be able to make that code perform better:
// listOfDetails is another collection
List<SomeDetails> myDetails = ...;
var detailsGrouped = myDetails.ToLookup(x => new { x.CustomerNumber, x.ID });

foreach (CustomerDetails myItem in customerDetails)
{
    var myList = detailsGrouped[new { CustomerNumber = myItem.CustomerNum, myItem.ID }];
    myItem.SomeDetails = myList.ToList();
}
The idea here is to avoid the repeated looping over myDetails and to build a hash-based lookup instead. Once that is built, each lookup is very cheap.
The inner ToList() forces an evaluation on every loop iteration, which has got to hurt. SelectMany might let you avoid the ToList, something like this:
var details = customerDetails.SelectMany(item => listOfDetails
    .Where(detail => detail.CustomerNumber == item.CustomerNum)
    .Where(detail => detail.ID == item.ID));
If you first get all the SomeDetails and then assign them to the items, it might speed things up. Or it might not; you should really profile to see where the time is being taken.
I think you'd probably benefit from a join here, so:
var mods = customerDetails
    .Join(
        listOfDetails,
        x => Tuple.Create(x.ID, x.CustomerNum),
        x => Tuple.Create(x.ID, x.CustomerNumber),
        (a, b) => new { custDet = a, listDet = b })
    .GroupBy(x => x.custDet)
    .Select(g => new { custDet = g.Key, items = g.Select(x => x.listDet).ToList() });

foreach (var mod in mods)
{
    mod.custDet.SomeDetails = mod.items;
}
I didn't compile this code...
With a join, the matching of items from one list against another is done by building a hashtable-like collection (a Lookup) of the second list in O(n) time. Then it's a matter of iterating the first list and pulling items from the Lookup. As pulling data from a hashtable is O(1), the iterate/match phase also takes only O(n), as does the subsequent GroupBy. So in all, the operation should take ~O(3n), which is equivalent to O(n), where n is the length of the longer list.