I have an ArrayList ids containing String objects that are IDs, and another ArrayList objs containing objects that have a string ID field. Right now I have code to find which of the objects' IDs don't have a match in ids, which looks like this:
var missing = new List<string>();
foreach (MyObj obj in objs)
{
if (!ids.Contains(obj.ID))
{
missing.Add(obj.ID);
}
}
This works fine. But as an exercise to better "think in LINQ", I rewrote it to this:
var missing = objs.Cast<MyObj>().Select(x => x.ID).Except(ids.Cast<string>());
I expected this LINQ to be slower than the foreach + Contains approach (especially due to the Cast calls), but the LINQ runs significantly faster. What is the LINQ approach doing differently that gives the performance benefit?
LINQ's Except uses a HashSet internally, whose Contains method is O(1), whereas ArrayList's Contains is O(n). That's why it's faster.
But as Tim pointed out in his comment, your Except approach does not actually produce any results; it just defines a query. The query is executed as soon as you need results, and it may be executed more than once. You should add a ToList() call to get a List<T> explicitly:
var missing = objs.Cast<MyObj>().Select(x => x.ID).Except(ids.Cast<string>()).ToList();
By the way, why are you using ArrayList instead of generic List<T>?
Except uses a HashSet<T> (or something similar) internally to efficiently find which objects are the same, while your code uses the less efficient, linear ArrayList.Contains (or similar) method.
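For reference, the same speedup can be grafted onto the original loop by building the HashSet yourself, which is roughly what Except does internally. A minimal sketch (MyObj is stubbed here, since its real definition isn't shown in the question):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class MyObj
{
    public string ID { get; set; }
}

class Demo
{
    static void Main()
    {
        var ids = new ArrayList { "a", "b" };          // known IDs
        var objs = new ArrayList                        // objects to check
        {
            new MyObj { ID = "a" },
            new MyObj { ID = "c" },
        };

        // One O(n) pass to build the set, then O(1) lookups per object,
        // instead of an O(n) ArrayList.Contains scan per object.
        var idSet = new HashSet<string>(ids.Cast<string>());

        var missing = new List<string>();
        foreach (MyObj obj in objs)
        {
            if (!idSet.Contains(obj.ID))
                missing.Add(obj.ID);
        }

        Console.WriteLine(string.Join(",", missing)); // c
    }
}
```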
Related
Sometimes Resharper warns about:
Possible multiple enumeration of IEnumerable
There's an SO question on how to handle this issue, and the ReSharper site also explains things here. It has some sample code that tells you to do this instead:
IEnumerable<string> names = GetNames().ToList();
My question is about this specific suggestion: won't this still result in enumerating through the collection twice in the 2 for-each loops?
GetNames() returns an IEnumerable. So if you store that result:
IEnumerable foo = GetNames();
Then every time you enumerate foo, the work inside GetNames() is performed again (not literally a second method call — I can't find a link that properly explains the details, but see IEnumerable.GetEnumerator()).
ReSharper sees this and suggests you store the result of enumerating GetNames() in a local variable, for example by materializing it in a list:
IEnumerable fooEnumerated = GetNames().ToList();
This will make sure that the GetNames() result is only enumerated once, as long as you refer to fooEnumerated.
This does matter because you usually want to enumerate only once, for example when GetNames() performs a (slow) database call.
Because you materialized the results in a list, it doesn't matter anymore that you enumerate fooEnumerated twice; you'll be iterating over an in-memory list twice.
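A minimal sketch of that difference, with a counter standing in for the slow database call (the names and values here are illustrative only):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static int calls;

    // Lazy iterator: the body (including the counter increment, which stands in
    // for expensive work like a database call) runs each time the sequence
    // is enumerated.
    static IEnumerable<string> GetNames()
    {
        calls++;
        yield return "Alice";
        yield return "Bob";
    }

    static void Main()
    {
        IEnumerable<string> lazy = GetNames();
        foreach (var n in lazy) { }
        foreach (var n in lazy) { }
        Console.WriteLine(calls); // 2 -- the work ran twice

        calls = 0;
        IEnumerable<string> materialized = GetNames().ToList();
        foreach (var n in materialized) { }
        foreach (var n in materialized) { }
        Console.WriteLine(calls); // 1 -- ToList() ran the work once, up front
    }
}
```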
I found this to be the best and easiest way to understand multiple enumeration:
C# LINQ: Possible Multiple Enumeration of IEnumerable
https://helloacm.com/c-linq-possible-multiple-enumeration-of-ienumerable-resharper/
GetNames() is not called twice. Rather, the implementation of IEnumerable.GetEnumerator() is called each time you enumerate the collection with foreach. If some expensive calculation is made during that enumeration, this might be a reason to materialize first.
Yes, you'll no doubt be enumerating it twice. But the point is: if GetNames() returns a lazy LINQ query that is very expensive to compute, then it will be computed twice without a call to ToList() or ToArray().
Just because a method returns IEnumerable doesn't mean there will be deferred execution.
E.g.
IEnumerable<string> GetNames()
{
Console.WriteLine("Yolo");
return new string[] { "Fred", "Wilma", "Betty", "Barney" };
}
var names = GetNames(); // Yolo prints out here! and only here!
foreach (var name in names)
{
// Some code...
}
foreach (var name in names)
{
// Some code...
}
Back to the question, if:
a. There is deferred execution (e.g. LINQ - .Where(), .Select(), etc.): then the method returns a "promise" that knows how to iterate over the collection. So when calling .ToList() this iteration happens and we store the list in memory.
b. There is no deferred execution (e.g. the method returns a List): then, assuming GetNames returns a list, it's basically like doing a .ToList() on that list:
var names = GetNames().ToList();
// 1. "Yolo" prints out
// 2. The list is returned
// 3. ToList() is called on the returned list
PS, I left the following comment on ReSharper's documentation:
Hi,
Can you please make it clear in the documentation that this would only be
an issue if GetNames() implements deferred execution?
For example, if GetNames() uses yield under the hood or implements a
deferred execution approach like most LINQ operators do (.Select(),
.Where(), etc.).
Otherwise, if under the hood GetNames() is not returning an
IEnumerable that implements deferred execution, then there are no
performance or data integrity issues here. E.g. if GetNames returns a
List.
I have an array of objects, where one field is a boolean field called includeInReport. In a certain case, I want to default that to always be true. I know it's as easy as doing this:
foreach (var item in awards)
{
item.IncludeInReport = true;
}
But is there an equivalent way to do this with LINQ? It's more to satisfy my curiosity than anything... My first thought was to do this:
awards.Select(a => new Award { IncludeInReport = true, SomeField = a.SomeField, ... });
But since I have a few fields in my object, I didn't want to have to type out all of the fields and it's just clutter on the screen at that point. Thanks!
ForEach is sort of linq:
awards.ForEach(item => item.IncludeInReport = true);
But LINQ is not about updating values, so you are not using the right tool.
Let me qualify "sort of LINQ": ForEach is not LINQ, but a method on List<T>. However, the syntax is similar to LINQ's.
Here's code that works (note the ToArray() call — Select is deferred, and the result has to be materialized back into the array variable):
awards = awards.Select(a => { a.IncludeInReport = true; return a; }).ToArray();
LINQ follows functional programming ideas and thus doesn't want you to change (mutate) existing variables.
So instead in the code above we generate a new list (haven't changed any existing values) and then overwrite our original list (this is outside LINQ so we no longer care about functional programming ideas).
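One caveat with this pattern: Select is deferred, so the mutation in the lambda doesn't actually run until the result is enumerated (e.g. by ToArray()). A sketch with a stubbed Award class:

```csharp
using System;
using System.Linq;

class Award
{
    public bool IncludeInReport { get; set; }
}

class Demo
{
    static void Main()
    {
        var awards = new[] { new Award(), new Award() };

        // Deferred: nothing runs yet -- the lambda only executes on enumeration.
        var query = awards.Select(a => { a.IncludeInReport = true; return a; });
        Console.WriteLine(awards[0].IncludeInReport); // False

        // Forcing enumeration actually applies the mutation.
        query.ToArray();
        Console.WriteLine(awards[0].IncludeInReport); // True
    }
}
```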
Since you are starting with an array, you can use the Array.ForEach method:
Array.ForEach(awards, a => a.IncludeInReport = true);
This isn't LINQ, but in this case you don't need LINQ. As others have mentioned, you can't mutate items via LINQ. If you have a List<T> you could use its ForEach method in a similar fashion. Eric Lippert discusses this issue in more depth here: "foreach" vs "ForEach".
There is no mutating method available in Linq. Linq is useful for querying, ordering, filtering, joining, and projecting data. If you need to mutate it, you already have a very clean, clear method of doing so: your loop.
List<T> exposes a ForEach method to write something that reminds you of Linq (but isn't). You can then provide an Action<T> or some other delegate/function that applies your mutation to each element in turn. (Ahmed Mageed's answer also mentions the slightly different Array.ForEach method.) You can write your own extension method to do the same with IEnumerable<T> (which would then be generally more applicable than either aforementioned method and also be available for your array). But I encourage you to simply keep your loop, it's not exactly dirty.
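For illustration, such an extension method might look like this (the ForEach name and eager semantics are my own choices here; IEnumerable&lt;T&gt; has no such member in the BCL):

```csharp
using System;
using System.Collections.Generic;

static class EnumerableExtensions
{
    // Applies an action to every element; deliberately eager,
    // unlike the deferred LINQ operators.
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source)
            action(item);
    }
}

class Award
{
    public bool IncludeInReport { get; set; }
}

class Demo
{
    static void Main()
    {
        var awards = new[] { new Award(), new Award() };
        awards.ForEach(a => a.IncludeInReport = true);
        Console.WriteLine(awards[0].IncludeInReport && awards[1].IncludeInReport); // True
    }
}
```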
You can do something like that:
awards.AsParallel().ForAll(item => item.IncludeInReport = true)
That makes that action parallel if possible.
Look at this:
var query = myDic.Select(x => x.Key).Except(myHashSet);
or
var query = myDic.Select(x => x.Key).Where(y => !myHashSet.Contains(y));
I guess an O(1) version of Contains will be invoked due to polymorphism in the first case.
Don't know about Except, though.
Update
Except is also O(1) in my case.
Why doesn't LINQ's `Except` extension method have an Except<TSource>(IEnumerable<TSource>, HashSet<TSource>) overload?
If your myDic is a normal .NET Dictionary then I will go with
myDic.Keys.Except(myHashSet)
for readability.
To speak of your options: the first one is O(n+m) whereas the second is O(n), and neither tells you which finishes first for your collection sizes. When in doubt, race both horses.
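A rough way to race them, sketched with Stopwatch (the collection contents below are made up; real timings depend on your data and hardware, so only the result counts are checked here):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Demo
{
    static void Main()
    {
        // Made-up data: 100k keys, half of them excluded by the set.
        var myDic = Enumerable.Range(0, 100000).ToDictionary(i => i, i => i);
        var myHashSet = new HashSet<int>(Enumerable.Range(0, 50000));

        var sw = Stopwatch.StartNew();
        var a = myDic.Select(x => x.Key).Except(myHashSet).ToList();
        sw.Stop();
        Console.WriteLine("Except: " + sw.ElapsedMilliseconds + " ms, " + a.Count + " items");

        sw.Restart();
        var b = myDic.Select(x => x.Key).Where(y => !myHashSet.Contains(y)).ToList();
        sw.Stop();
        Console.WriteLine("Where:  " + sw.ElapsedMilliseconds + " ms, " + b.Count + " items");
    }
}
```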
@sehe's answer is O(n+m) too, but most probably it will be faster than your O(n+m) solution.
var query = myDic.Select(x => x.Key).Except(myHashSet);
The Except will be the extension method on IEnumerable (the result of the Select). This is not O(1).
myHashSet.Contains(y)
is indeed calling the member function, which is O(1).
Consider
new HashSet<K>(myDic.Select(x => x.Key)).ExceptWith(myHashSet);
Also look at HashSet<>.SymmetricExceptWith()
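Note that ExceptWith and SymmetricExceptWith both mutate the set in place rather than returning a new sequence. A quick sketch of both (the sample values are made up):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var keys = new HashSet<int> { 1, 2, 3, 4 };
        var other = new HashSet<int> { 3, 4, 5 };

        // ExceptWith: removes everything that is in 'other' -> { 1, 2 }
        var except = new HashSet<int>(keys);
        except.ExceptWith(other);

        // SymmetricExceptWith: keeps elements in exactly one set -> { 1, 2, 5 }
        var symmetric = new HashSet<int>(keys);
        symmetric.SymmetricExceptWith(other);

        Console.WriteLine(except.Count);     // 2
        Console.WriteLine(symmetric.Count);  // 3
    }
}
```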
I am using some of the LINQ select stuff to create some collections, which return IEnumerable<T>.
In my case I need a List<T>, so I am passing the result to List<T>'s constructor to create one.
I am wondering about the overhead of doing this. The items in my collections are usually in the millions, so I need to consider this.
I assume, if the IEnumerable<T> contains ValueTypes, it's the worst performance.
Am I right? What about reference types? Either way there is also the cost of calling List<T>.Add a million times, right?
Any way to solve this? Like, can I "overload" methods like LINQ's Select using extension methods?
No, there's no particular penalty for the element type being value types, assuming you're using IEnumerable<T> instead of IEnumerable. You won't get any boxing going on.
If you actually know the size of the result beforehand (which the result of Select probably won't) you might want to consider creating the list with that size of buffer, then using AddRange to add the values. Otherwise the list will have to resize its buffer every time it fills it.
For instance, instead of doing:
Foo[] foo = new Foo[100];
IEnumerable<string> query = foo.Select(x => x.Name);
List<string> queryList = new List<string>(query);
you might do:
Foo[] foo = new Foo[100];
IEnumerable<string> query = foo.Select(x => x.Name);
List<string> queryList = new List<string>(foo.Length);
queryList.AddRange(query);
You know that calling Select will produce a sequence of the same length as the original query source, but nothing in the execution environment has that information as far as I'm aware.
It would be best to avoid the need for a list. If you can keep your caller using IEnumerable<T>, you will save yourself some headaches.
LINQ's ToList() will take your enumerable, and just construct a new List<T> directly from it, using the List<T>(IEnumerable<T>) constructor. This will be the same as making the list yourself, performance wise (although LINQ does a null check, as well).
If you're adding the elements yourself, use the AddRange method instead of the Add. ToList() is very similar to AddRange (since it's using the constructor which takes IEnumerable<T>), which typically will be your best bet, performance wise, in this case.
Generally speaking, a method returning IEnumerable doesn't have to evaluate any of the items before an item is actually needed. So, theoretically, when you return an IEnumerable, none of your items need to exist at that time.
So creating a list means that you will really need to evaluate items, get them and place them somewhere in memory (at least their references). There is nothing that can be done about this - if you really need to have a list.
A number of other responders have already provided ideas for how to improve the performance of copying an IEnumerable<T> into a List<T> - I don't think that much can be added on that front.
However, based on what you have described you need to do with the results, and the fact that you get rid of the list when you're done (which I presume means that the intermediate results are not interesting) - you may want to consider whether you really need to materialize a List<T>.
Rather than creating a List<T> and operating on the contents of that list - consider writing a lazy extension method for IEnumerable<T> that performs the same processing logic. I've done this myself in a number of cases, and writing such logic in C# is not so bad when using the [yield return][1] syntax supported by the compiler.
This approach works well if all you're trying to do is visit each item in the results and collect some information from it. Often, what you need to do is just visit each element in the collection on demand, do some processing with it, and then move on. This approach is generally more scalable and performant than creating a copy of the collection just to iterate over it.
Now, this advice may not work for you for other reasons, but it's worth considering as an alternative to finding the most efficient way to materialize a very large list.
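As a sketch of that approach, here's a hypothetical lazy extension method (the CleanedUp name and its filtering logic are invented for illustration) that processes items one at a time via yield return instead of materializing a list first:

```csharp
using System;
using System.Collections.Generic;

static class Extensions
{
    // Hypothetical example: trims and drops empty entries lazily,
    // one item at a time, without ever building an intermediate list.
    public static IEnumerable<string> CleanedUp(this IEnumerable<string> source)
    {
        foreach (var s in source)
        {
            var trimmed = s.Trim();
            if (trimmed.Length > 0)
                yield return trimmed;
        }
    }
}

class Demo
{
    static void Main()
    {
        var raw = new[] { "  a ", "", "b", "   " };
        foreach (var s in raw.CleanedUp())
            Console.WriteLine(s); // a, then b
    }
}
```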
Don't pass an IEnumerable to the List constructor. IEnumerable has a ToList() method, which can't possibly do worse than that, and has nicer syntax (IMHO).
That said, that only changes the answer to your question to "it depends" - in particular, it depends on what the IEnumerable actually is behind the scenes. If it happens to be a List already, then ToList will effectively be free, of course will go much faster than if it were another type. It's still not super-fast.
The best way to solve this, of course, is to try to figure out how to do your processing on an IEnumerable rather than a List. That may not be possible.
Edit: Some people in the comments are debating whether or not ToList() will actually be any faster when called on a List than if not, and whether ToList() will be any faster than the list constructor. At this point, speculating is getting pointless, so here's some code:
using System;
using System.Linq;
using System.Collections.Generic;

public static class ToListTest
{
    public static int Main(string[] args)
    {
        List<int> intlist = new List<int>();
        for (int i = 0; i < 1000000; i++)
            intlist.Add(i);
        IEnumerable<int> intenum = intlist;
        for (int i = 0; i < 1000; i++)
        {
            List<int> foo = intenum.ToList();
        }
        return 0;
    }
}
Running this code with an IEnumerable that's really a List goes about 6-10 times faster than if I replace it with a LinkedList or Stack (on my pokey 2.4 GHz P4, using Mono 1.2.6). Conceivably this could be due to some unfortunate interaction between ToList() and the particular implementations of LinkedList or Stack's enumerations, but at least the point remains: speed will depend on the underlying type of the IEnumerable. That said, even with a List as the source, it still takes 6 seconds for me to make 1000 ToList() calls, so it's far from free.
The next question is whether ToList() is any more intelligent than the List constructor. The answer to that turns out to be no: the List constructor is just as fast as ToList(). In hindsight, Jon Skeet's reasoning makes sense - I was just forgetting that ToList() was an extension method. I still (much) prefer ToList() syntactically, but there's no performance reason to use it.
So the short version is that the best answer is still "don't convert to a List if you can avoid it". Barring that, actual performance will depend drastically on what the IEnumerable actually is, but at best it'll be sluggish, as opposed to glacial. I've amended my original answer to reflect this.
From reading the various comments and the question, I get the following requirement: for a collection of data, you need to run through that collection, filter out some objects, and then perform some transformation on the remaining objects. If that's the case, you can do something like this:
var result = from item in collection
             where item.Id > 10 // or some more sensible condition
             select Operation(item);
and if you need to the perform more filtering and transformation you can nest your LINQ queries like
var result = from filteredItem in (from item in collection
                                   where item.Id > 10 // or some more sensible condition
                                   select Operation(item))
             where filteredItem.SomePropertyAvailableAfterFirstTransformation == "new"
             select SecondTransformation(filteredItem);
I've got a function that returns a Collection<string>, and that calls itself recursively to eventually return one big Collection<string>.
Now I just wonder what the best approach is to merge the lists. Collection.CopyTo() only copies to string[], and using a foreach() loop feels inefficient. However, since I also want to filter out duplicates, I feel like I'll end up with a foreach that calls Contains() on the Collection.
I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type.
The only exclusion: I'm bound to Visual Studio 2005 and .NET 3.0, so no LINQ.
Edit: To clarify: the function takes a user out of Active Directory, looks at the direct reports of the user, and then recursively looks at the direct reports of every one of those users. So the end result is a list of all users that are in the "command chain" of a given user. Since this is executed quite often and at the moment takes 20 seconds for some users, I'm looking for ways to improve it. Caching the result for 24 hours is also on my list, btw, but I want to see how to improve it before applying caching.
If you're using List<T> you can use .AddRange to add one list to the other.
Or you can use yield return to combine lists on the fly like this:
public IEnumerable<string> Combine(IEnumerable<string> col1, IEnumerable<string> col2)
{
foreach(string item in col1)
yield return item;
foreach(string item in col2)
yield return item;
}
You might want to take a look at Iesi.Collections and Extended Generic Iesi.Collections (because the first edition was made in 1.1 when there were no generics yet).
Extended Iesi has an ISet class which acts exactly as a HashSet: it enforces unique members and does not allow duplicates.
The nifty thing about Iesi is that it has set operators instead of methods for merging collections, so you have the choice between a union (|), intersection (&), XOR (^) and so forth.
I think HashSet<T> is a great help.
The HashSet<T> class provides
high performance set operations. A set
is a collection that contains no
duplicate elements, and whose elements
are in no particular order.
Just add items to it and then use CopyTo.
Update: HashSet<T> is in .Net 3.5
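A quick sketch of the add-then-CopyTo approach (the sample names are made up):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var seen = new HashSet<string>();
        foreach (var name in new[] { "alice", "bob", "alice" })
            seen.Add(name);   // Add returns false for duplicates instead of throwing

        // Copy the de-duplicated contents back out into an array.
        var result = new string[seen.Count];
        seen.CopyTo(result);
        Console.WriteLine(result.Length); // 2
    }
}
```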
Maybe you can use Dictionary<TKey, TValue>. Setting a duplicate key to a dictionary will not raise an exception.
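A sketch of that trick; the bool value type is arbitrary here, since only the keys matter. Note it's the indexer that overwrites silently (Add would still throw on a duplicate key):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var unique = new Dictionary<string, bool>();
        foreach (var name in new[] { "alice", "bob", "alice" })
            unique[name] = true;   // indexer overwrites; no duplicate-key exception

        Console.WriteLine(unique.Count); // 2
    }
}
```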
Can you pass the Collection into your method by reference so that you can just add items to it? That way you don't have to return anything. This is what it might look like if you did it in C#:
class Program
{
    static void Main(string[] args)
    {
        Collection<string> myitems = new Collection<string>();
        myMthod(ref myitems);
        Console.WriteLine(myitems.Count.ToString());
        Console.ReadLine();
    }

    static void myMthod(ref Collection<string> myitems)
    {
        myitems.Add("string");
        if (myitems.Count < 5)
            myMthod(ref myitems);
    }
}
As stated by @Zooba, passing by ref is not necessary here; if you pass by value it will also work, since Collection<string> is a reference type.
As far as merging goes:
I wonder, is there a more efficient
way to have a recursive function that
returns a list of strings without
duplicates? I don't have to use a
Collection, it can be pretty much any
suitable data type.
Your function assembles a return value, right? You're splitting the supplied list in half, invoking self again (twice) and then merging those results.
During the merge step, why not just check before you add each string to the result? If it's already there, skip it.
Assuming you're working with sorted lists of course.
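That merge step might be sketched like this, assuming both halves arrive sorted (MergeUnique is a hypothetical helper, not part of the original code):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    // Merges two sorted lists into one sorted result, skipping any string
    // that was just emitted (which is where duplicates would appear).
    static List<string> MergeUnique(List<string> left, List<string> right)
    {
        var result = new List<string>();
        int i = 0, j = 0;
        while (i < left.Count || j < right.Count)
        {
            string next;
            if (j >= right.Count ||
                (i < left.Count && string.CompareOrdinal(left[i], right[j]) <= 0))
                next = left[i++];
            else
                next = right[j++];

            // Check before adding: if it's already there, skip it.
            if (result.Count == 0 || result[result.Count - 1] != next)
                result.Add(next);
        }
        return result;
    }

    static void Main()
    {
        var merged = MergeUnique(
            new List<string> { "alice", "carol" },
            new List<string> { "alice", "bob" });
        Console.WriteLine(string.Join(",", merged)); // alice,bob,carol
    }
}
```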