I recently learned about List's .ConvertAll method. I used it a couple of times in code today at work to convert a large list of my objects to a list of some other object. It seems to work really well. However, I'm unsure how efficient or fast this is compared to just iterating the list and converting the objects. Does .ConvertAll use anything special to speed up the conversion process, or is it just a shorthand way of converting lists without having to set up a loop?
No better way to find out than to go directly to the source, literally :)
http://referencesource.microsoft.com/#mscorlib/system/collections/generic/list.cs#dbcc8a668882c0db
As you can see, there's no special magic going on. It just iterates over the list, creating each new item by applying the converter function you specify.
To be honest, I was not aware of this method. The more idiomatic .NET way to do this kind of projection is through the use of the Select extension method on IEnumerable<T> like so: source.Select(input => new Something(input.Name)). The advantage of this is threefold:
It's more idiomatic, as I said; ConvertAll is likely a remnant of the pre-C# 3.0 days. It's not a very arcane method by any means, and ConvertAll is a pretty clear name, but it might still be better to stick to what other people know, which is Select.
It's available on all IEnumerable<T>, while ConvertAll only works on instances of List<T>. It doesn't matter if it's an array, a list or a dictionary, Select works with all of them.
Select is lazy. It doesn't do anything until you iterate over it. This means that it returns an IEnumerable<TOutput> which you can then convert to a list by calling ToList() or not if you don't actually need a list. Or if you just want to convert and retrieve the first two items out of a list of a million items, you can simply do source.Select(input => new Something(input.Name)).Take(2).
But if your question is purely about the performance of converting a whole list to another list, then ConvertAll is likely to be somewhat faster as it's less generic than a Select followed by a ToList (it knows that a list has a size and can directly access elements by index from the underlying array for instance).
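To make the two options concrete, here is a minimal sketch; the Person and PersonDto types are made up purely for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class Person { public string Name; }
class PersonDto { public string Name; public PersonDto(string name) { Name = name; } }

static class ConvertDemo
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Name = "Ann" },
            new Person { Name = "Bob" }
        };

        // List<T>.ConvertAll: eager, returns the finished List<TOutput> in one call.
        List<PersonDto> viaConvertAll = people.ConvertAll(p => new PersonDto(p.Name));

        // LINQ Select: lazy, works on any IEnumerable<T>; ToList() materializes it.
        List<PersonDto> viaSelect = people.Select(p => new PersonDto(p.Name)).ToList();

        Console.WriteLine(viaConvertAll.Count + " " + viaSelect.Count); // 2 2
    }
}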
Decompiled using ILSpy:
public List<TOutput> ConvertAll<TOutput>(Converter<T, TOutput> converter)
{
    if (converter == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.converter);
    }
    List<TOutput> list = new List<TOutput>(this._size);
    for (int i = 0; i < this._size; i++)
    {
        list._items[i] = converter(this._items[i]);
    }
    list._size = this._size;
    return list;
}
Create a new list.
Populate the new list by iterating over the current instance, executing the specified delegate.
Return the new list.
Does .ConvertAll use anything special to speed up the conversion process, or is it just a shorthand way of converting lists without having to set up a loop?
It doesn't do anything special with regard to the conversion (what "special" thing could it do?). It does directly modify the private _items and _size members, so it might be trivially faster under some circumstances.
As usual, if the solution makes you more productive, code easier to read, etc. use it until profiling reveals a compelling performance reason to not use it.
It's the second way you described it - basically a shorthand way without setting up a loop.
Here's the guts of ConvertAll():
List<TOutput> list = new List<TOutput>(this._size);
for (int index = 0; index < this._size; ++index)
    list._items[index] = converter(this._items[index]);
list._size = this._size;
return list;
Where TOutput is whatever type you're converting to, and converter is a delegate indicating the method that will do the conversion.
So it loops through the List you passed in, running each element through the method you specify, and then returns a new List of the specified type.
For precise timing in your scenarios you need to measure yourself.
Do not expect any miracles - it has to be an O(n) operation, since each element needs to be converted and added to the destination list.
Consider using Enumerable.Select instead, as it does lazy evaluation, which may let you avoid a second copy of a large list, especially if you need to do any filtering of items along the way.
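For illustration, a sketch of such a lazy pipeline; the MyTarget type and the IsValid filter are hypothetical:

// Nothing below executes until 'converted' is enumerated, and no intermediate
// list of converted items is ever materialized.
IEnumerable<MyTarget> converted = source
    .Where(item => item.IsValid)              // hypothetical filter
    .Select(item => new MyTarget(item.Name)); // hypothetical conversion

foreach (MyTarget t in converted)
{
    // each element is filtered and converted only as it is consumed
}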
For example:
List<MyClass> myList = new List<MyClass>();
...
// add lots of members...
...
MyClass myClass = myList[25];
Will asking for index 25 take much longer than asking for index 1, or does it use some quick algorithm to jump straight to the 25th item?
Thanks!
Internally, List<T> is implemented as an array (which grows when you're adding new items), so accessing the n-th element is an O(1) operation. (Therefore there will be no difference in speed between getting myList[1] and myList[25].)
Excerpt from the List<T>.Item property documentation:
Retrieving the value of this property is an O(1) operation; setting the property is also an O(1) operation.
I can imagine how slow .NET applications would be if List<T> had to walk through all the preceding items before getting to the n-th...
From the Item property of List<T>
Retrieving the value of this property is an O(1) operation; setting the property is also an O(1) operation.
No, it's very fast. In fact it's not an algorithm at all*; the backing store for List<T> is just a T[] array; so all it has to do is jump to a known location in memory.
In abstract terms, think of it like this: since the elements of an array reside in a contiguous block of memory, you can imagine the array as a number line. Does it take you any longer to find "10" on a number line than "1"? No -- you know exactly how the numbers are laid out, so all you have to do is look straight at 10. You don't have to scroll your eyes through 1, 2, 3, etc., in other words.
Granted, that's a highly non-technical analogy; but it's pretty consistent with how accessing an element of an array works.
*A calculation is required, yes: the address of the first element in the array plus the product of the element size and the index. But to call this an "algorithm" would be a stretch; and anyway, it is a constant-time operation regardless.
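To make that calculation concrete, here is a conceptual sketch (a model of what the runtime does, not actual CLR code; the method and names are purely illustrative):

// One multiply and one add, no matter how large the index gets -
// which is exactly why indexed access is O(1).
static long ElementAddress(long baseAddress, long index, long elementSize)
{
    return baseAddress + index * elementSize;
}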
No, indexing is fast; removal and insertion, on the other hand, depend on where in the list you remove or insert an element, since it is a dynamic array.
http://en.wikipedia.org/wiki/Dynamic_array
List<T> uses T[] internally so indexing is supported directly by the underlying data structure.
I am using some of the LINQ select stuff to create some collections, which return IEnumerable<T>.
In my case I need a List<T>, so I am passing the result to List<T>'s constructor to create one.
I am wondering about the overhead of doing this. The items in my collections are usually in the millions, so I need to consider this.
I assume that if the IEnumerable<T> contains value types, the performance is at its worst.
Am I right? What about reference types? Either way, there is also the cost of calling List<T>.Add a million times, right?
Any way to solve this? (Like, can I "overload" LINQ methods such as Select using extension methods?)
No, there's no particular penalty for the element type being value types, assuming you're using IEnumerable<T> instead of IEnumerable. You won't get any boxing going on.
If you actually know the size of the result beforehand (which the result of Select probably won't) you might want to consider creating the list with that size of buffer, then using AddRange to add the values. Otherwise the list will have to resize its buffer every time it fills it.
For instance, instead of doing:
Foo[] foo = new Foo[100];
IEnumerable<string> query = foo.Select(x => x.Name);
List<string> queryList = new List<string>(query);
you might do:
Foo[] foo = new Foo[100];
IEnumerable<string> query = foo.Select(x => x.Name);
List<string> queryList = new List<string>(foo.Length);
queryList.AddRange(query);
You know that calling Select will produce a sequence of the same length as the original query source, but nothing in the execution environment has that information as far as I'm aware.
It would be best to avoid the need for a list. If you can keep your caller using IEnumerable<T>, you will save yourself some headaches.
LINQ's ToList() will take your enumerable and just construct a new List<T> directly from it, using the List<T>(IEnumerable<T>) constructor. This will be the same as making the list yourself, performance-wise (although LINQ does a null check as well).
If you're adding the elements yourself, use the AddRange method instead of Add. ToList() is very similar to AddRange (since it uses the constructor which takes IEnumerable<T>), which will typically be your best bet, performance-wise, in this case.
Generally speaking, a method returning IEnumerable doesn't have to evaluate any of the items before an item is actually needed. So, theoretically, when you return an IEnumerable none of your items need to exist at that time.
So creating a list means that you will really need to evaluate items, get them and place them somewhere in memory (at least their references). There is nothing that can be done about this - if you really need to have a list.
A number of other responders have already provided ideas for how to improve the performance of copying an IEnumerable<T> into a List<T> - I don't think that much can be added on that front.
However, based on what you have described you need to do with the results, and the fact that you get rid of the list when you're done (which I presume means that the intermediate results are not interesting) - you may want to consider whether you really need to materialize a List<T>.
Rather than creating a List<T> and operating on the contents of that list - consider writing a lazy extension method for IEnumerable<T> that performs the same processing logic. I've done this myself in a number of cases, and writing such logic in C# is not so bad when using the yield return syntax supported by the compiler.
This approach works well if all you're trying to do is visit each item in the results and collect some information from it. Often, what you need to do is just visit each element in the collection on demand, do some processing with it, and then move on. This approach is generally more scalable and performant than creating a copy of the collection just to iterate over it.
Now, this advice may not work for you for other reasons, but it's worth considering as an alternative to finding the most efficient way to materialize a very large list.
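For what it's worth, here is a minimal sketch of such a lazy extension method; the names are hypothetical:

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Processes each item on demand instead of materializing a List<T> first.
    // Nothing executes until the caller starts enumerating the result.
    public static IEnumerable<TResult> ProcessLazily<TSource, TResult>(
        this IEnumerable<TSource> source, Func<TSource, TResult> process)
    {
        foreach (TSource item in source)
            yield return process(item); // one item at a time, no full copy
    }
}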
Don't pass an IEnumerable to the List constructor. IEnumerable has a ToList() method, which can't possibly do worse than that, and has nicer syntax (IMHO).
That said, that only changes the answer to your question to "it depends" - in particular, it depends on what the IEnumerable actually is behind the scenes. If it happens to be a List already, then ToList() will go much faster than if it were another type. It's still not super-fast.
The best way to solve this, of course, is to try to figure out how to do your processing on an IEnumerable rather than a List. That may not be possible.
Edit: Some people in the comments are debating whether or not ToList() will actually be any faster when called on a List than if not, and whether ToList() will be any faster than the list constructor. At this point, speculating is getting pointless, so here's some code:
using System;
using System.Linq;
using System.Collections.Generic;

public static class ToListTest
{
    public static int Main(string[] args)
    {
        List<int> intlist = new List<int>();
        for (int i = 0; i < 1000000; i++)
            intlist.Add(i);

        IEnumerable<int> intenum = intlist;
        for (int i = 0; i < 1000; i++)
        {
            List<int> foo = intenum.ToList();
        }
        return 0;
    }
}
Running this code with an IEnumerable that's really a List goes about 6-10 times faster than if I replace it with a LinkedList or Stack (on my pokey 2.4 GHz P4, using Mono 1.2.6). Conceivably this could be due to some unfortunate interaction between ToList() and the particular implementations of LinkedList or Stack's enumerations, but at least the point remains: speed will depend on the underlying type of the IEnumerable. That said, even with a List as the source, it still takes 6 seconds for me to make 1000 ToList() calls, so it's far from free.
The next question is whether ToList() is any more intelligent than the List constructor. The answer to that turns out to be no: the List constructor is just as fast as ToList(). In hindsight, Jon Skeet's reasoning makes sense - I was just forgetting that ToList() was an extension method. I still (much) prefer ToList() syntactically, but there's no performance reason to use it.
So the short version is that the best answer is still "don't convert to a List if you can avoid it". Barring that, actual performance will depend drastically on what the IEnumerable actually is, but at best it'll be sluggish, as opposed to glacial. I've amended my original answer to reflect this.
From reading the various comments and the question, I get the following requirement:
For a collection of data, you need to run through that collection, filter out some objects, and then perform some transformation on the remaining objects. If that's the case, you can do something like this:
var result = from item in collection
             where item.Id > 10 // or some more sensible condition
             select Operation(item);
and if you need to perform more filtering and transformation you can nest your LINQ queries, like
var result = from filteredItem in (from item in collection
                                   where item.Id > 10 // or some more sensible condition
                                   select Operation(item))
             where filteredItem.SomePropertyAvailableAfterFirstTransformation == "new"
             select SecondTransformation(filteredItem);
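The same two-stage pipeline can also be written in method (fluent) syntax, which some find easier to read once queries start nesting:

var result = collection
    .Where(item => item.Id > 10) // or some more sensible condition
    .Select(item => Operation(item))
    .Where(f => f.SomePropertyAvailableAfterFirstTransformation == "new")
    .Select(f => SecondTransformation(f));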
I've always been told that adding an element to an array happens like this:
An empty copy of the array + 1 element is created, then the data from the original array is copied into it, then the new data for the new element is loaded.
If this is true, then using an array within a scenario that requires a lot of element activity is contraindicated due to memory and CPU utilization, correct?
If that is the case, shouldn't you try to avoid using an array as much as possible when you will be adding a lot of elements? Should you use iStringMap instead? If so, what happens if you need more than two dimensions AND need to add a lot of element additions. Do you just take the performance hit or is there something else that should be used?
Look at the generic List<T> as a replacement for arrays. They support most of the same things arrays do, including allocating an initial storage size if you want.
This really depends on what you mean by "add."
If you mean:
T[] array;
int i;
T value;
...
if (i >= 0 && i < array.Length)
    array[i] = value;
Then, no, this does not create a new array, and it is in fact the fastest way to alter any kind of IList in .NET.
If, however, you're using something like ArrayList, List, Collection, etc. then calling the "Add" method may create a new array -- but they are smart about it, they don't just resize by 1 element, they grow geometrically, so if you're adding lots of values only every once in a while will it have to allocate a new array. Even then, you can use the "Capacity" property to force it to grow before hand, if you know how many elements you're adding (list.Capacity += numberOfAddedElements)
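For example, a quick sketch of pre-sizing (the element count here is arbitrary):

int expected = 1000000; // known or estimated number of elements
var list = new List<int>(expected); // backing array reserved up front

for (int i = 0; i < expected; i++)
    list.Add(i); // no internal reallocation happens during this loop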
In general, I prefer to avoid array usage. Just use List<T>. It uses a dynamically-sized array internally, and is fast enough for most usage. If you're using multi-dimensional arrays, use List<List<List<T>>> if you have to. It's not that much worse in terms of memory, and is much simpler to add items to.
If you're in the 0.1% of usage that requires extreme speed, make sure it's your list accesses that are really the problem before you try to optimize it.
If you're going to be adding/removing elements a lot, just use a List. If it's multidimensional, you can always use a List<List<int>> or something.
On the other hand, lists are less efficient than arrays if what you're mostly doing is traversing the list, because arrays are all in one place in your CPU cache, where objects in a list are scattered all over the place.
If you want to use an array for efficient reading but you're going to be "adding" elements frequently, you have two main options:
1) Generate it as a List (or List of Lists) and then use ToArray() to turn it into an efficient array structure.
2) Allocate the array to be larger than you need, then put the objects into the pre-allocated cells. If you end up needing even more elements than you pre-allocated, you can just reallocate the array when it fills, doubling the size each time. Doubling means n appends cause only O(log n) reallocations, instead of the O(n) reallocations you would get by growing the array once per add. Note that this is pretty much how StringBuilder works, giving you a faster way to continually append to a string.
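Here is a minimal sketch of option 2, the grow-by-doubling strategy (essentially what List<T> does internally; an illustrative toy, not production code):

using System;

public class GrowableArray<T>
{
    private T[] _items = new T[4]; // arbitrary starting capacity
    private int _count;

    public int Count { get { return _count; } }

    public T this[int index]
    {
        get { return _items[index]; } // plain array access: O(1) reads
    }

    public void Add(T value)
    {
        if (_count == _items.Length)
        {
            // Double the capacity, so n adds cause only O(log n) reallocations.
            T[] bigger = new T[_items.Length * 2];
            Array.Copy(_items, bigger, _count);
            _items = bigger;
        }
        _items[_count++] = value;
    }
}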
When to abandon the use of arrays
First and foremost, when the semantics of arrays don't match your intent - Need a dynamically growing collection? A set which doesn't allow duplicates? A collection that has to remain immutable? Avoid arrays in all those cases. That's 99% of the cases. Just stating the obvious basic point.
Secondly, when you are not coding for absolute performance criticality - That's about 95% of the cases. Arrays perform marginally better, especially in iteration. It almost never matters.
When you're not forced by an argument with the params keyword - I just wish params accepted any IEnumerable<T>, or better yet that there were a language construct itself to denote a sequence (and not a framework type).
When you are not writing legacy code, or dealing with interop
In short, it's very rare that you would actually need an array. I will add as to why one might avoid it:
The biggest reason to avoid arrays, imo, is conceptual. Arrays are closer to implementation and farther from abstraction. Arrays convey more of how it is done than what is done, which is against the spirit of high-level languages. That's not surprising, considering arrays are closer to the metal; they are straight out of a special type (though internally an array is a class). Not to be pedagogical, but arrays really do translate to a semantic meaning that is very, very rarely required. The most useful and frequent semantics are those of a collection with any entries, a set with distinct items, a key-value map, etc., in any combination of addable, read-only, immutable, and order-respecting variants. Think about this: you might want an addable collection, or a read-only collection with predefined items and no further modification, but how often does your logic look like "I want a dynamically addable collection, but only a fixed number of them, and they should be modifiable too"? Very rarely, I would say.
Arrays were designed during the pre-generics era; they mimic genericity with a lot of runtime hacks, and they will show their oddities here and there. Some of the catches I found:
Broken covariance.
string[] strings = ...
object[] objects = strings;
objects[0] = 1; // compiles, but throws an ArrayTypeMismatchException at run time
Arrays can give you a reference to a struct! That's unlike anywhere else. A sample:
struct Value { public int mutable; }
var array = new[] { new Value() };
array[0].mutable = 1; // <-- compiles! the indexer yields a reference to the element
// new List<Value>()[0].mutable = 1; // doesn't compile, since editing a copy makes no sense
Console.WriteLine(array[0].mutable); // prints 1 - expected or unexpected? confusing, surely
Runtime-implemented methods like ICollection<T>.Contains can behave differently for structs and classes. It's not a big deal, but if you forget to correctly override the non-generic Equals for a reference type, expecting the generic collection to look for the generic Equals, you will get incorrect results.
public class Class : IEquatable<Class>
{
    public bool Equals(Class other)
    {
        Console.WriteLine("generic");
        return true;
    }

    public override bool Equals(object obj)
    {
        Console.WriteLine("non generic");
        return true;
    }
}

public struct Struct : IEquatable<Struct>
{
    public bool Equals(Struct other)
    {
        Console.WriteLine("generic");
        return true;
    }

    public override bool Equals(object obj)
    {
        Console.WriteLine("non generic");
        return true;
    }
}
var test = new Class();
((ICollection<Class>)new[] { test }).Contains(test);                  // prints "non generic"
((ICollection<Struct>)new[] { new Struct() }).Contains(new Struct()); // prints "generic"
The Length property and the [] indexer on T[] seem to be regular properties that you can access through reflection (some magic must be involved there), but when it comes to expression trees you have to spit out the exact same code the compiler does. There are ArrayLength and ArrayIndex factory methods to do that separately. One such question here. Another example:
Expression<Func<string>> e1 = () => new[] { "a" }[0];
// e1.Body.NodeType == ExpressionType.ArrayIndex
Expression<Func<string>> e2 = () => new List<string>() { "a" }[0];
// e2.Body.NodeType == ExpressionType.Call
Yet another one: string[].IsReadOnly returns false, but if you cast the same array to IList<string>, IsReadOnly returns true.
Type checking gone wrong: (object)new ConsoleColor[0] is int[] returns true, whereas new ConsoleColor[0] is int[] returns false. Same is true for uint[] and int[] comparisons. No such problems if you use any other collection types.
How to abandon the use of arrays.
The most commonly used substitute is List<T>, which has a cleaner API. But it is a dynamically growing structure, which means you can add to a List<T> at the end, or insert anywhere, up to any capacity. There is no substitute for the exact behaviour of an array, but people mostly use arrays as a read-only collection where you can't add anything at the end. A substitute for that is ReadOnlyCollection<T>.
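A sketch of that substitution:

using System.Collections.Generic;
using System.Collections.ObjectModel;

List<string> source = new List<string> { "a", "b", "c" };

// A read-only wrapper over the same backing list - no copying involved.
ReadOnlyCollection<string> fixedView = source.AsReadOnly();

string first = fixedView[0]; // indexed reads still work
// fixedView exposes no Add method, and ((IList<string>)fixedView).Add("d")
// throws NotSupportedException.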
When the array is resized, a new array must be allocated, and the contents copied. If you are only modifying the contents of the array, it is just a memory assignment.
So, you should not use arrays when you don't know the size of the array, or the size is likely to change. However, if you have a fixed length array, they are an easy way of retrieving elements by index.
ArrayList and List grow the array by more than one when needed (I think it's by doubling the size, but I haven't checked the source). They are generally the best choice when you are building a dynamically sized array.
When your benchmarks indicate that array resize is seriously slowing down your application (remember - premature optimization is the root of all evil), you can evaluate writing a custom array class with tweaked resizing behavior.
Generally, if you must have the BEST indexed-lookup performance, it's best to build a List first and then turn it into an array, thus paying a small penalty at first but avoiding any later. If the issue is that you will be continually adding new data and removing old data, then you may want to use an ArrayList or List for convenience, but keep in mind that they are just special-case arrays. When they "grow", they allocate a completely new array and copy everything into it, which is extremely slow.
ArrayList is just an Array which grows when needed.
Add is amortized O(1); just be careful to make sure the resize won't happen at a bad time.
Insert is O(n); all items to the right must be moved over.
Remove is O(n); all items to the right must be moved over.
Also important to keep in mind that List is not a linked list. It's just a typed ArrayList. The List documentation does note that it performs better in most cases but does not say why.
The best thing to do is to pick a data structure appropriate to your problem. This depends on a LOT of things, so you may want to browse the System.Collections.Generic namespace.
In this particular case I would say that, if you can come up with a good key, a Dictionary would be your best bet. Its insert and remove operations approach O(1). However, even with a Dictionary you have to be careful not to let it resize its internal array (an O(n) operation). It's best to give it a lot of room by specifying a larger-than-you-expect-to-use initial capacity in the constructor.
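For instance (the expected entry count is a placeholder):

int expectedEntries = 100000; // hypothetical estimate for your workload

// A roomy initial capacity avoids rehashing/resizing the internal storage
// (an O(n) operation) as the dictionary fills up.
var byKey = new Dictionary<string, object>(expectedEntries * 2);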
A standard array should be defined with a length, which reserves all of the memory that it needs in a contiguous block. Adding an item to the array would put it inside of the block of already reserved memory.
Arrays are great for few writes and many reads, particularly those of an iterative nature - for anything else, use one of the many other data structures.
You are correct: an array is great for lookups. However, modifications to the size of the array are costly.
You should use a container that supports incremental size adjustments in the scenario where you're modifying the size of the array. You could use an ArrayList which allows you to set the initial size, and you could continually check the size versus the capacity and then increment the capacity by a large chunk to limit the number of resizes.
Or you could just use a linked list. Then, however, lookups are slow...
If I think I'm going to be adding items to the collection a lot over its lifetime, then I'll use a List. If I know for sure what the size of the collection will be when it's declared, then I'll use an array.
Another time I generally use an array over a List is when I need to return a collection as a property of an object - I don't want callers adding items to that collection via List's Add methods; instead I want them to add items to the collection via my object's interface. In that case, I'll take the internal List, call ToArray, and return an array.
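A sketch of that pattern, with hypothetical names:

using System.Collections.Generic;

public class Order
{
    private readonly List<string> _tags = new List<string>();

    // Callers get a snapshot: the returned array has no Add, and replacing
    // its elements does not touch the internal list.
    public string[] Tags
    {
        get { return _tags.ToArray(); }
    }

    // Mutation has to go through the object's own interface.
    public void AddTag(string tag)
    {
        _tags.Add(tag);
    }
}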
If you are going to be doing a lot of adding, and you will not be doing random access (such as myArray[i]), you could consider using a linked list (LinkedList<T>), because it will never have to "grow" like the List<T> implementation does. Keep in mind, though, that you can only really access items in a LinkedList<T> by walking it through the IEnumerable<T> interface.
The best thing you can do is to allocate as much memory as you need upfront, if possible. This will prevent .NET from having to make additional calls to get memory on the heap. Failing that, it makes sense to allocate in chunks of five or whatever number makes sense for your application.
This is a rule you can apply to anything really.
In C# there seem to be quite a few different lists. Off the top of my head I was able to come up with a couple; however, I'm sure there are many more.
List<String> Types = new List<String>();
ArrayList Types2 = new ArrayList();
LinkedList<String> Types4 = new LinkedList<String>();
My question is when is it beneficial to use one over the other?
More specifically I am returning lists of unknown size from functions and I was wondering if there is a particular list that was better at this.
List<String> Types = new List<String>();
LinkedList<String> Types4 = new LinkedList<String>();
are generic lists, i.e. you define the data type that goes in them, which eliminates boxing and unboxing (for value types).
For the difference between a list and a linked list, see this --> When should I use a List vs a LinkedList
ArrayList is a non-generic collection, which can be used to store any data type.
99% of the time List is what you'll want. Avoid the non-generic collections at all costs.
LinkedList is useful for adding or removing without shuffling items around, although you have to forego random access as a result. One advantage it does have is you can remove items whilst iterating through the nodes.
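For example, a sketch of removing nodes mid-walk - something a foreach over a List<T> would not allow:

var list = new LinkedList<int>(new[] { 1, 2, 3, 4 });

LinkedListNode<int> node = list.First;
while (node != null)
{
    LinkedListNode<int> next = node.Next; // capture before removing
    if (node.Value % 2 == 0)
        list.Remove(node); // O(1) removal of a known node
    node = next;
}
// list now holds 1, 3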
ArrayList is a holdover from before generics. There's really no reason to use it ... it's slow and uses more memory than List<>. In general, there's probably no reason to use LinkedList either, unless you are inserting midway through VERY large lists.
The only thing you'll find in .NET faster than a List<> is a fixed array ... but the performance difference is surprisingly small.
See the article on Commonly Used Collection Types from MSDN for a list of the various types of collections available to you, and their intended uses.
ArrayList is a .NET 1.0 list type.
List is a generic list introduced with generics in .NET 2.0.
Generic lists provide better compile-time support. Generic lists are type-safe: you cannot add objects of the wrong type, so you know which type the stored objects have. No type checks or typecasts are necessary.
I don't know about the performance differences.
This question says something about the difference between List and LinkedList.
As mentioned, don't use ArrayList if at all possible.
Here's a bit on Wikipedia about the differences between arrays and linked lists.
In summary:
Arrays
Fast random access
Fast inserting/deleting at end
Good memory locality
Linked Lists
Fast inserting/deleting at beginning
Fast inserting/deleting at end
Fast inserting/deleting at middle (with enumerator)
Generally, use List. Don't use ArrayList; it's obsolete. Use LinkedList in the rare cases where you need to be able to add without resizing and don't mind the overhead and loss of random access.
ArrayList is probably smaller, memory-wise, since it is based on an array. It also has fast random access to elements. However, adding to or removing from the list will take longer. This might be sped up slightly if the object over-allocates under the assumption that you are going to keep adding. (That will, of course, reduce the memory advantage.)
The other lists will be slightly larger (4 to 8 bytes more memory per element), and will have poor random-access times. However, it is very fast to add or remove objects at the ends of the list. Also, memory usage is usually spot-on for what you need.