I have a list of lists for which I want to find the intersection, like this:
var list1 = new List<int>() { 1, 2, 3 };
var list2 = new List<int>() { 2, 3, 4 };
var list3 = new List<int>() { 3, 4, 5 };
var listOfLists = new List<List<int>>() { list1, list2, list3 };
// expected intersection is List<int>() { 3 };
Is there some way to do this with IEnumerable.Intersect()?
EDIT:
I should have been clearer on this: I really do have a list of lists and I don't know how many there will be; the three lists above were just an example. What I actually have is an IEnumerable<IEnumerable<SomeClass>>.
SOLUTION
Thanks for all the great answers. It turned out there were four options for solving this: List+Aggregate (@Marcel Gosselin), List+foreach (@JaredPar, @Gabe Moothart), HashSet+Aggregate (@jesperll) and HashSet+foreach (@Tony the Pony). I did some performance testing on these solutions (varying the number of lists, the number of elements in each list, and the maximum size of the random numbers).
It turns out that for most situations the HashSet performs better than the List (except with large lists and a small random number range, because of the nature of HashSet, I guess).
I couldn't find any real difference between the foreach method and the Aggregate method (the foreach method performs slightly better).
To me, the Aggregate method is really appealing (and I'm going with that as the accepted answer), but I wouldn't say it's the most readable solution. Thanks again, all!
How about:
var intersection = listOfLists
.Skip(1)
.Aggregate(
new HashSet<int>(listOfLists.First()),
(h, e) => { h.IntersectWith(e); return h; }
);
That way it's optimized by reusing the same HashSet throughout, and it's still a single statement. Just make sure that listOfLists always contains at least one list.
You can indeed use Intersect twice. However, I believe this will be more efficient:
HashSet<int> hashSet = new HashSet<int>(list1);
hashSet.IntersectWith(list2);
hashSet.IntersectWith(list3);
List<int> intersection = hashSet.ToList();
Not an issue with small sets of course, but if you have a lot of large sets it could be significant.
Basically Enumerable.Intersect needs to create a set on each call - if you know that you're going to be doing more set operations, you might as well keep that set around.
As ever, keep a close eye on performance vs readability - the method chaining of calling Intersect twice is very appealing.
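For reference, the chained form being compared against is the same one-liner that appears in another answer below:
var intersection = list1.Intersect(list2).Intersect(list3).ToList();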
EDIT: For the updated question:
public List<T> IntersectAll<T>(IEnumerable<IEnumerable<T>> lists)
{
HashSet<T> hashSet = null;
foreach (var list in lists)
{
if (hashSet == null)
{
hashSet = new HashSet<T>(list);
}
else
{
hashSet.IntersectWith(list);
}
}
return hashSet == null ? new List<T>() : hashSet.ToList();
}
Or if you know it won't be empty, and that Skip will be relatively cheap:
public List<T> IntersectAll<T>(IEnumerable<IEnumerable<T>> lists)
{
HashSet<T> hashSet = new HashSet<T>(lists.First());
foreach (var list in lists.Skip(1))
{
hashSet.IntersectWith(list);
}
return hashSet.ToList();
}
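A usage sketch against the question's listOfLists (assuming the method is called from within the same class, or is made static):
var common = IntersectAll(listOfLists); // { 3 } for the sample lists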
Try this. It works, but I'd really like to get rid of the .ToList() inside the Aggregate.
var list1 = new List<int>() { 1, 2, 3 };
var list2 = new List<int>() { 2, 3, 4 };
var list3 = new List<int>() { 3, 4, 5 };
var listOfLists = new List<List<int>>() { list1, list2, list3 };
var intersection = listOfLists.Aggregate((previousList, nextList) => previousList.Intersect(nextList).ToList());
Update:
Following a comment from @pomber, it is possible to get rid of the ToList() inside the Aggregate call and move it outside so it executes only once. I did not test whether the previous code is faster than the new one. The change needed is to specify the generic type parameter of the Aggregate method on the last line, like below:
var intersection = listOfLists.Aggregate<IEnumerable<int>>(
(previousList, nextList) => previousList.Intersect(nextList)
).ToList();
You could do the following:
var result = list1.Intersect(list2).Intersect(list3).ToList();
This is my version of the solution with an extension method that I called IntersectMany.
public static IEnumerable<TResult> IntersectMany<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, IEnumerable<TResult>> selector)
{
using (var enumerator = source.GetEnumerator())
{
if(!enumerator.MoveNext())
return new TResult[0];
var ret = selector(enumerator.Current);
while (enumerator.MoveNext())
{
ret = ret.Intersect(selector(enumerator.Current));
}
return ret;
}
}
So the usage would be something like this:
var intersection = (new[] { list1, list2, list3 }).IntersectMany(l => l).ToList();
This is my one-line solution for a list of lists (ListOfLists) without the Intersect function:
var intersect = ListOfLists.SelectMany(x => x).Distinct().Where(w => ListOfLists.TrueForAll(t => t.Contains(w))).ToList();
This should work for .NET 4 (or later).
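Note that TrueForAll is defined on List<T>, so this relies on ListOfLists being a List<List<int>>. If what you actually have is an IEnumerable<IEnumerable<T>> (as in the edited question), a rough equivalent for the question's listOfLists using Enumerable.All would be something like this sketch (it enumerates the outer sequence more than once):
var intersect = listOfLists
    .SelectMany(x => x)
    .Distinct()
    .Where(item => listOfLists.All(l => l.Contains(item)))
    .ToList();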
After searching the 'net and not really coming up with something I liked (or that worked), I slept on it and came up with this. Mine uses a class (SearchResult) which has an EmployeeId in it and that's the thing I need to be common across lists. I return all records that have an EmployeeId in every list. It's not fancy, but it's simple and easy to understand, just what I like. For small lists (my case) it should perform just fine—and anyone can understand it!
private List<SearchResult> GetFinalSearchResults(IEnumerable<IEnumerable<SearchResult>> lists)
{
Dictionary<int, SearchResult> oldList = new Dictionary<int, SearchResult>();
Dictionary<int, SearchResult> newList = new Dictionary<int, SearchResult>();
oldList = lists.First().ToDictionary(x => x.EmployeeId, x => x);
foreach (var list in lists.Skip(1)) // avoid casting to List<SearchResult>; the elements are only known to be IEnumerable<SearchResult>
{
foreach (SearchResult emp in list)
{
if (oldList.Keys.Contains(emp.EmployeeId))
{
newList.Add(emp.EmployeeId, emp);
}
}
oldList = new Dictionary<int, SearchResult>(newList);
newList.Clear();
}
return oldList.Values.ToList();
}
Here's an example just using a list of ints, not a class (this was my original implementation).
static List<int> FindCommon(List<List<int>> items)
{
Dictionary<int, int> oldList = new Dictionary<int, int>();
Dictionary<int, int> newList = new Dictionary<int, int>();
oldList = items[0].ToDictionary(x => x, x => x);
foreach (List<int> list in items.Skip(1))
{
foreach (int i in list)
{
if (oldList.Keys.Contains(i))
{
newList.Add(i, i);
}
}
oldList = new Dictionary<int, int>(newList);
newList.Clear();
}
return oldList.Values.ToList();
}
This is a simple solution if your lists are all small. If you have larger lists, it's not as performant as a HashSet:
public static IEnumerable<T> IntersectMany<T>(this IEnumerable<IEnumerable<T>> input)
{
if (!input.Any())
return new List<T>();
return input.Aggregate(Enumerable.Intersect);
}
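Usage would be something like this (a sketch, assuming the listOfLists from the question):
var common = listOfLists.IntersectMany().ToList(); // { 3 }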
Quick question. See this code:
List<int> result = new List<int>();
var list = new List<int> { 1, 2, 3, 4 };
list.Select(value =>
{
result.Add(value);//Does not work??
return value;
});
And:
result.Count == 0 // true
Why is result.Add(value) not executed?
Regardless of why it isn't executed, another question: is there a way to do a foreach on an IEnumerable with an extension method?
Other than this way: IEnumerable.ToList().ForEach(p => ...)
Why is result.Add(value) not executed?
This is because LINQ uses deferred execution. Until you actually enumerate the results (the return of Select), the delegates will not execute.
To demonstrate, try the following:
List<int> result = new List<int>();
var list = new List<int> { 1, 2, 3, 4 };
var results = list.Select(value =>
{
result.Add(value);//Does not work??
return value;
});
foreach(var item in results)
{
// Just iterating through this will cause the above to execute...
}
That being said, this is a bad idea. LINQ queries should not have side effects if you can avoid it. Think of Select as a way to transform your data, not execute code.
Is there a way to do a foreach on an IEnumerable with an extension method?
You could write your own extension method:
public static void ForEach<T>(this IEnumerable<T> items, Action<T> action)
{
foreach(var item in items)
action(item);
}
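A usage sketch (reusing list and result from the question, purely to show the shape):
list.Where(v => v > 2).ForEach(v => result.Add(v)); // result now holds 3 and 4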
However, I would recommend not doing this. For details, refer to Eric Lippert's post on the subject.
Select is lazy and execution is deferred until you start enumerating over the results. You need to consume the result set, for example by calling .ToArray() or by looping over the result:
list.Select(value =>
{
result.Add(value);//Does not work??
return value;
}).ToArray();
List<int> result = new List<int>();
var list = new List<int> { 1, 2, 3, 4 };
list.ForEach(delegate(int sValue)
{
result.Add(sValue);
});
This works fine and adds 1, 2, 3, 4 into result. Test it out. I just did.
I want to compare two lists with the same number of elements, and find the number of differences between them. Right now, I have this code (which works):
public static int CountDifferences<T> (this IList<T> list1, IList<T> list2)
{
if (list1.Count != list2.Count)
throw new ArgumentException ("Lists must have the same number of elements", "list2");
int count = 0;
for (int i = 0; i < list1.Count; i++) {
if (!EqualityComparer<T>.Default.Equals (list1[i], list2[i]))
count++;
}
return count;
}
This feels messy to me, and it seems like there must be a more elegant way to achieve it. Is there a way, perhaps, to combine the two lists into a single list of tuples, then simply examine each element of the new list to see if both elements are equal?
Since order in the list does count this would be my approach:
public static int CountDifferences<T>(this IList<T> list1, IList<T> list2)
{
if (list1.Count != list2.Count)
throw new ArgumentException("Lists must have the same number of elements", "list2");
int count = list1.Zip(list2, (a, b) => a.Equals(b) ? 0 : 1).Sum();
return count;
}
Simply merging the lists using Enumerable.Zip() and then summing up the differences; still O(n), but this only enumerates the lists once.
Also, this approach works on any two IEnumerables of the same type, since we do not use the list indexer (apart, obviously, from the count comparison in the guard check).
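A quick usage sketch:
var a = new List<int> { 1, 2, 3, 4 };
var b = new List<int> { 1, 0, 3, 9 };
int diff = a.CountDifferences(b); // 2: positions 1 and 3 differ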
I think your approach is fine, but you could use LINQ to simplify your function:
public static int CountDifferences<T>(this IList<T> list1, IList<T> list2)
{
if(list1.Count != list2.Count)
throw new ArgumentException("Lists must have same # elements", "list2");
return list1.Where((t, i) => !Equals(t, list2[i])).Count();
}
The way you have it written in the question, I don't think Intersect does what you're looking for. For example, say you have:
var list1 = new List<int> { 1, 2, 3, 4, 6, 8 };
var list2 = new List<int> { 1, 2, 4, 5, 6, 8 };
If you run list1.CountDifferences(list2), I'm assuming that you want to get back 2 since elements 2 and 3 are different. Intersect in this case will return 5 since the lists have 5 elements in common. So, if you're looking for 5 then Intersect is the way to go. If you're looking to return 2 then you could use the LINQ statement above.
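To make the distinction concrete, here is a small sketch using those two lists:
var list1 = new List<int> { 1, 2, 3, 4, 6, 8 };
var list2 = new List<int> { 1, 2, 4, 5, 6, 8 };
int commonCount = list1.Intersect(list2).Count();                          // 5 values appear in both lists
int positionalDiffs = list1.Where((t, i) => !Equals(t, list2[i])).Count(); // 2 positions hold different values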
Try something like this:
var result = list1.Intersect(list2);
var differences = list1.Count - result.Count();
If order counts:
var result = a.Where((x, i) => x != b[i]);
var differences = result.Count();
You want the Intersect extension method of Enumerable.
public static int CountDifferences<T> (this IList<T> list1, IList<T> list2)
{
if (list1.Count != list2.Count)
throw new ArgumentException ("Lists must have the same number of elements", "list2");
return list1.Count - list1.Intersect(list2).Count();
}
You can use the Zip extension method (defined on IEnumerable).
List<int> lst1 = new List<int> { 1, 2, 3, 4, 5 };
List<int> lst2 = new List<int> { 6, 2, 9, 4, 5 };
int cntDiff = lst1.Zip(lst2, (a, b) => a != b).Count(a => a);
// Output is 2
I have IEnumerable<string> which looks like {"First", "1", "Second", "2", ... }.
I need to iterate through the list and create IEnumerable<Tuple<string, string>> where Tuples will look like:
"First", "1"
"Second", "2"
So I need to create pairs from the list I have, as shown above.
A lazy extension method to achieve this is:
public static IEnumerable<Tuple<T, T>> Tupelize<T>(this IEnumerable<T> source)
{
using (var enumerator = source.GetEnumerator())
while (enumerator.MoveNext())
{
var item1 = enumerator.Current;
if (!enumerator.MoveNext())
throw new ArgumentException();
var item2 = enumerator.Current;
yield return new Tuple<T, T>(item1, item2);
}
}
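Usage would look something like this:
var items = new[] { "First", "1", "Second", "2" };
var tuples = items.Tupelize().ToList(); // ("First", "1"), ("Second", "2")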
Note that if the number of elements happens to be odd, this will throw. Another way would be to use this extension method to split the source collection into chunks of 2:
public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int batchSize)
{
var batch = new List<T>(batchSize);
foreach (var item in list)
{
batch.Add(item);
if (batch.Count == batchSize)
{
yield return batch;
batch = new List<T>(batchSize);
}
}
if (batch.Count > 0)
yield return batch;
}
Then you can do:
var tuples = items.Chunk(2)
.Select(x => new Tuple<string, string>(x.First(), x.Skip(1).First()))
.ToArray();
Finally, to use only existing extension methods:
var tuples = items.Where((x, i) => i % 2 == 0)
.Zip(items.Where((x, i) => i % 2 == 1),
(a, b) => new Tuple<string, string>(a, b))
.ToArray();
morelinq contains a Batch extension method which can do what you want:
var str = new string[] { "First", "1", "Second", "2", "Third", "3" };
var tuples = str.Batch(2, r => new Tuple<string, string>(r.FirstOrDefault(), r.LastOrDefault()));
You could do something like:
var pairs = source.Select((value, index) => new {Index = index, Value = value})
.GroupBy(x => x.Index / 2)
.Select(g => new Tuple<string, string>(g.ElementAt(0).Value,
g.ElementAt(1).Value));
This will get you an IEnumerable<Tuple<string, string>>. It works by grouping the elements by their odd/even positions and then expanding each group into a Tuple. The benefit of this approach over the Zip approach suggested by BrokenGlass is that it only enumerates the original enumerable once.
It is, however, hard for someone to understand at first glance, so I would either do it another way (i.e. not using LINQ) or document its intention next to where it is used.
You can make this work using the LINQ .Zip() extension method:
IEnumerable<string> source = new List<string> { "First", "1", "Second", "2" };
var tupleList = source.Zip(source.Skip(1),
(a, b) => new Tuple<string, string>(a, b))
.Where((x, i) => i % 2 == 0)
.ToList();
Basically the approach is zipping up the source Enumerable with itself, skipping the first element so the second enumeration is one off - that will give you the pairs ("First", "1"), ("1", "Second"), ("Second", "2").
Then we are filtering out the odd tuples, since we don't want those, and end up with the right tuple pairs ("First", "1"), ("Second", "2") and so on.
Edit:
I actually agree with the sentiment of the comments - this is what I would consider "clever" code - looks smart, but has obvious (and not so obvious) downsides:
1. Performance: the Enumerable has to be traversed twice - for the same reason it cannot be used on Enumerables that consume their source, i.e. data from network streams.
2. Maintenance: it's not obvious what the code does - if someone else is tasked to maintain the code there might be trouble ahead, especially given point 1.
Having said that, given the choice I'd probably use a good old foreach loop myself, or, with a list as the source collection, a for loop so I can use the index directly.
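For instance, the for-loop version might look like this (a sketch assuming an even-length List<string> named items):
var tuples = new List<Tuple<string, string>>();
for (int i = 0; i + 1 < items.Count; i += 2)
{
    // Pair each element with the one that follows it.
    tuples.Add(new Tuple<string, string>(items[i], items[i + 1]));
}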
IEnumerable<T> items = ...;
using (var enumerator = items.GetEnumerator())
{
while (enumerator.MoveNext())
{
T first = enumerator.Current;
bool hasSecond = enumerator.MoveNext();
Trace.Assert(hasSecond, "Collection must have even number of elements.");
T second = enumerator.Current;
var tuple = new Tuple<T, T>(first, second);
//Now you have the tuple
}
}
Starting from .NET 6.0, you can use Enumerable.Chunk<TSource>(IEnumerable<TSource>, Int32):
var tuples = new[] {"First", "1", "Second", "2", "Incomplete" }
.Chunk(2)
.Where(chunk => chunk.Length == 2)
.Select(chunk => (chunk[0], chunk[1]));
If you are using .NET 4.0, then you can use the Tuple class (see http://mutelight.org/articles/finally-tuples-in-c-sharp.html). Together with LINQ it should give you what you need. If not, then you probably need to define your own tuple type, or encode those strings like, for example, "First:1", "Second:2" and then decode them (also with LINQ).
I have an interesting problem, and I can't seem to figure out the lambda expression to make this work.
I have the following code:
List<string[]> list = GetSomeData(); // Returns large number of string[]'s
List<string[]> list2 = GetSomeData2(); // similar data, but smaller subset
List<string[]> newList = list.FindAll(delegate(string[] line) {
    return (???);
});
I want to return only those records in list in which element 0 of each string[] is equal to one of the element 0's in list2.
list contains data like this:
"000", "Data", "more data", "etc..."
list2 contains data like this:
"000", "different data", "even more different data"
Fundamentally, I could write this code like this:
List<string[]> newList = new List<string[]>();
foreach(var e in list)
{
foreach(var e2 in list2)
{
if (e[0] == e2[0])
newList.Add(e);
}
}
return newList;
But I'm trying to use generics and lambdas more, so I'm looking for a nice clean solution. This one is frustrating me, though... maybe a Find inside of a Find?
EDIT:
Marc's answer below led me to experiment with a variation that looks like this:
var z = list.Where(x => list2.Select(y => y[0]).Contains(x[0])).ToList();
I'm not sure how efficient this is, but it works and is sufficiently succinct. Anyone else have any suggestions?
You could join? I'd use two steps myself, though:
var keys = new HashSet<string>(list2.Select(x => x[0]));
var data = list.Where(x => keys.Contains(x[0]));
If you only have .NET 2.0, then either install LINQBridge and use the above (or similar with a Dictionary<> if LINQBridge doesn't include HashSet<>), or perhaps use nested Find:
var data = list.FindAll(arr => list2.Find(arr2 => arr2[0] == arr[0]) != null);
Note, though, that the Find approach is O(n*m), whereas the HashSet<> approach is O(n+m)...
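The Dictionary<> fallback mentioned above could be sketched like this (hypothetical, for when HashSet<> isn't available):
// The dictionary keys stand in for a set; the bool values are ignored.
var keys = new Dictionary<string, bool>();
foreach (var arr in list2)
    keys[arr[0]] = true;
var data = list.FindAll(arr => keys.ContainsKey(arr[0]));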
You could use the Intersect extension method in System.Linq, but you would need to provide an IEqualityComparer to do the work.
static void Main(string[] args)
{
List<string[]> data1 = new List<string[]>();
List<string[]> data2 = new List<string[]>();
var result = data1.Intersect(data2, new Comparer());
}
class Comparer : IEqualityComparer<string[]>
{
#region IEqualityComparer<string[]> Members
bool IEqualityComparer<string[]>.Equals(string[] x, string[] y)
{
return x[0] == y[0];
}
int IEqualityComparer<string[]>.GetHashCode(string[] obj)
{
    // Hash the same key that Equals compares; hashing the array reference itself would make Intersect miss matches.
    return obj[0].GetHashCode();
}
#endregion
}
Intersect may work for you.
Intersect finds all the items that are in both lists.
OK, having re-read the question: Intersect doesn't take order into account.
I have written a slightly more complex LINQ expression that will return a list of the items that are in the same position (index) with the same value.
List<String> list1 = new List<String>() {"000","33", "22", "11", "111"};
List<String> list2 = new List<String>() {"000", "22", "33", "11"};
List<String> subList = list1.Select ((value, index) => new { Value = value, Index = index})
.Where(w => list2.Skip(w.Index).FirstOrDefault() == w.Value )
.Select (s => s.Value).ToList();
Result: {"000", "11"}
Explanation of the query:
Select a set of values and position of that value.
Filter that set where the item in the same position in the second list has the same value.
Select just the value (not the index as well).
Note I used:
list2.Skip(w.Index).FirstOrDefault()
//instead of
list2[w.Index]
So that it will handle lists of different lengths.
If you know the lists will be the same length, or that list1 will always be shorter, then list2[w.Index] would probably be a bit faster.