Find Item in ObservableCollection without using a loop - c#

Currently I have the following syntax (list is a list containing objects with many different properties, where Title is one of them):
for (int i = 0; i < list.Count; i++)
{
    if (title == list[i].Title)
    {
        // do something
    }
}
How can I access list[i].Title without having to loop over my entire collection? Since my list tends to grow large, this can impact the performance of my program.
I have a lot of similar syntax across my program (accessing public properties through a for loop and by index), but I'm sure there must be a better and more elegant way of doing this.
The Find method doesn't seem to be an option, since my list contains objects.

I don't know exactly what you mean, but technically speaking, this is not possible without a loop.
Maybe you mean using LINQ, for example:
list.Where(x => x.Title == title)
It's worth mentioning that the iteration is not skipped, just wrapped inside the LINQ query.
Hope this helps.
EDIT
In other words, if you are really concerned about performance, keep coding the way you already do. Otherwise, choose LINQ for more concise and clear syntax.

Here comes Linq:
var listItem = list.Single(i => i.Title == title);
It throws an exception if there's no item matching the predicate. Alternatively, there's SingleOrDefault.
If you want a collection of items matching the title, there's:
var listItems = list.Where(i => i.Title == title);

I had to use this for a condition. If you don't need the index, add
using System.Linq;
and use
if (list.Any(x => x.Title == title))
{
    // do something here
}
This will tell you whether any element satisfies your given condition.

I'd suggest storing these in a Hashtable. You can then access an item in the collection using its key; it's a much more efficient lookup.
var myObjects = new Hashtable();
myObjects.Add(yourObject.Title, yourObject);
...
var myRetrievedObject = myObjects["TargetTitle"];

Consider creating an index. A dictionary can do the trick. If you need the list semantics, subclass and keep the index as a private member...
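A minimal sketch of that idea, using a hypothetical Item type with a Title property. It subclasses ObservableCollection and overrides Collection&lt;T&gt;'s insertion/removal hooks so the private dictionary index stays in sync (a real implementation would also need to override SetItem and ClearItems, and decide what to do about duplicate titles):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

// Hypothetical item type mirroring the question's objects.
public class Item
{
    public string Title { get; set; }
}

// Keeps list semantics but maintains a private Title -> Item index.
// Assumes titles are unique; a duplicate Title overwrites the index entry.
public class IndexedCollection : ObservableCollection<Item>
{
    private readonly Dictionary<string, Item> _byTitle = new Dictionary<string, Item>();

    protected override void InsertItem(int index, Item item)
    {
        base.InsertItem(index, item);
        _byTitle[item.Title] = item;
    }

    protected override void RemoveItem(int index)
    {
        _byTitle.Remove(this[index].Title);
        base.RemoveItem(index);
    }

    // O(1) lookup instead of an O(n) scan of the list.
    public bool TryGetByTitle(string title, out Item item)
        => _byTitle.TryGetValue(title, out item);
}
```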

ObservableCollection is a list, so if you don't know the element's position, you have to look at each element until you find the expected one.
Possible optimization
If your elements are sorted, use a binary search to improve performance; otherwise, use a Dictionary as an index.
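A sketch of the sorted-list option, assuming a hypothetical Item type and a list that is already kept sorted by Title (BinarySearch's result is meaningless on an unsorted list):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical item type with the Title property from the question.
public class Item
{
    public string Title { get; set; }
}

class TitleComparer : IComparer<Item>
{
    public int Compare(Item x, Item y)
        => string.Compare(x.Title, y.Title, StringComparison.Ordinal);
}

static class TitleSearch
{
    // O(log n) lookup; the list MUST already be sorted by Title.
    public static Item FindByTitle(List<Item> sorted, string title)
    {
        int i = sorted.BinarySearch(new Item { Title = title }, new TitleComparer());
        return i >= 0 ? sorted[i] : null;   // negative index means "not found"
    }
}
```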

You're looking for a hash based collection (like a Dictionary or Hashset) which the ObservableCollection is not. The best solution might be to derive from a hash based collection and implement INotifyCollectionChanged which will give you the same behavior as an ObservableCollection.
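A minimal sketch of that idea: a HashSet-backed set that raises CollectionChanged. It covers only the core members, not everything a drop-in ObservableCollection replacement would need (indexing, PropertyChanged, etc.):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;

// A set with O(1) Contains that still notifies listeners on changes.
public class ObservableSet<T> : ICollection<T>, INotifyCollectionChanged
{
    private readonly HashSet<T> _set = new HashSet<T>();

    public event NotifyCollectionChangedEventHandler CollectionChanged;

    public int Count => _set.Count;
    public bool IsReadOnly => false;
    public bool Contains(T item) => _set.Contains(item);   // O(1), no scan

    public void Add(T item)
    {
        if (_set.Add(item))   // only notify if the set actually changed
            CollectionChanged?.Invoke(this,
                new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, item));
    }

    public bool Remove(T item)
    {
        if (!_set.Remove(item)) return false;
        CollectionChanged?.Invoke(this,
            new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, item));
        return true;
    }

    public void Clear()
    {
        _set.Clear();
        CollectionChanged?.Invoke(this,
            new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }

    public void CopyTo(T[] array, int index) => _set.CopyTo(array, index);
    public IEnumerator<T> GetEnumerator() => _set.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```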

Well, if you have N objects and you need the Title of all of them, you have to use a loop. If you only need the titles and really want to improve this, maybe you can keep a separate array containing only the titles; that would improve performance.
You need to define the amount of memory available and the number of objects you can handle before saying this hurts performance, and in any case the solution would be to change the design of the program, not the algorithm.

Maybe this approach would solve the problem:
int result = obsCollection.IndexOf(item);
Note that IndexOf(T) takes the item itself, not a property value such as the title.
IndexOf(T)
Searches for the specified object and returns the zero-based index of the first occurrence within the entire Collection<T>.
(Inherited from Collection<T>)
https://learn.microsoft.com/en-us/dotnet/api/system.collections.objectmodel.observablecollection-1?view=netframework-4.7.2#methods

An ObservableCollection can be converted to a List:
BuchungsSatz item = BuchungsListe.ToList().Find(x => x.BuchungsAuftragId == DGBuchungenAuftrag.CurrentItem.Id);

Related

Efficient way to iterate over an array for a specific member

In my last project I have found myself iterating over many Arrays or Lists of strings in order to find a specific string within.
I have got to ask, is there any way less than O(n) in order to find a specific member inside an array?
An O(n) solution in C# (consider there are no duplicates):
foreach (string st in Arr)
{
    if (st == "Hello")
    {
        Console.WriteLine("Hey!");
        break;
    }
}
EDIT: I didn't ask my question quite right. I also want to change the member I find, not only look it up.
So my snippet changes to:
for (int i = 0; i < Arr.Length; i++)
{
    if (Arr[i] == "Hello")
    {
        Arr[i] = "Changed";   // a foreach iteration variable can't be assigned, so index instead
        break;
    }
}
Can you somehow get to O(log n)? If so, can you further explain how it is done and why it is more efficient than my solution?
Thanks for any light on the matter!
is there any way less than O(n) in order to find a specific member
inside an array?
If you have a collection of strings with no regard for duplicates, and sort order doesn't matter, you should use a HashSet<T>, where with a good hash distribution lookups are O(1):
var hashSet = new HashSet<string> { "A", "B", "C" };
if (hashSet.Contains("A"))
{
    Console.WriteLine("hey");
}
In case you need more than lookups, e.g. accessing a member at a specific index, then HashSet<T> should not be your pick. You need to specify exactly what you're going to do with the collection if you want a more elaborate answer.
There is a project called the "C5 Generic Collection Library" at the University of Copenhagen. They have implemented an extended set of collection classes that may help you, such as a HashSet that allows duplicates (called HashBag) and a hash-indexed array list, which may prove useful...

C# List .ConvertAll Efficiency and overhead

I recently learned about List's .ConvertAll extension. I used it a couple times in code today at work to convert a large list of my objects to a list of some other object. It seems to work really well. However I'm unsure how efficient or fast this is compared to just iterating the list and converting the object. Does .ConvertAll use anything special to speed up the conversion process or is it just a short hand way of converting Lists without having to set up a loop?
No better way to find out than to go directly to the source, literally :)
http://referencesource.microsoft.com/#mscorlib/system/collections/generic/list.cs#dbcc8a668882c0db
As you can see, there's no special magic going on. It just iterates over the list and creates a new item by the converter function that you specify.
To be honest, I was not aware of this method. The more idiomatic .NET way to do this kind of projection is through the use of the Select extension method on IEnumerable<T> like so: source.Select(input => new Something(input.Name)). The advantage of this is threefold:
It's more idiomatic, as I said; ConvertAll is likely a remnant of the pre-C# 3.0 days. It's not a very arcane method by any means and ConvertAll is a pretty clear description, but it might still be better to stick to what other people know, which is Select.
It's available on all IEnumerable<T>, while ConvertAll only works on instances of List<T>. It doesn't matter if it's an array, a list or a dictionary, Select works with all of them.
Select is lazy. It doesn't do anything until you iterate over it. This means that it returns an IEnumerable<TOutput> which you can then convert to a list by calling ToList() or not if you don't actually need a list. Or if you just want to convert and retrieve the first two items out of a list of a million items, you can simply do source.Select(input => new Something(input.Name)).Take(2).
But if your question is purely about the performance of converting a whole list to another list, then ConvertAll is likely to be somewhat faster as it's less generic than a Select followed by a ToList (it knows that a list has a size and can directly access elements by index from the underlying array for instance).
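A short illustration of the eager/lazy difference described above, using a simple int-to-string conversion as a stand-in for the question's object mapping:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3, 4, 5 };

// ConvertAll: eager, List<T>-only, pre-sizes the result list.
List<string> a = numbers.ConvertAll(n => n.ToString());

// Select: lazy, works on any IEnumerable<T>; nothing runs until enumerated.
IEnumerable<string> lazy = numbers.Select(n => n.ToString());
List<string> b = lazy.ToList();   // materialize only when (and if) a list is needed

// With Take, only the first two elements are ever converted.
var firstTwo = numbers.Select(n => n.ToString()).Take(2).ToList();
```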
Decompiled using ILSPy:
public List<TOutput> ConvertAll<TOutput>(Converter<T, TOutput> converter)
{
    if (converter == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.converter);
    }
    List<TOutput> list = new List<TOutput>(this._size);
    for (int i = 0; i < this._size; i++)
    {
        list._items[i] = converter(this._items[i]);
    }
    list._size = this._size;
    return list;
}
Create a new list.
Populate the new list by iterating over the current instance, executing the specified delegate.
Return the new list.
Does .ConvertAll use anything special to speed up the conversion
process or is it just a short hand way of converting Lists without
having to set up a loop?
It doesn't do anything special with regards to conversion (what "special" thing could it do?) It is directly modifying the private _items and _size members, so it might be trivially faster under some circumstances.
As usual, if the solution makes you more productive, code easier to read, etc. use it until profiling reveals a compelling performance reason to not use it.
It's the second way you described it - basically a short-hand way without setting up a loop.
Here's the guts of ConvertAll():
List<TOutput> list = new List<TOutput>(this._size);
for (int index = 0; index < this._size; ++index)
    list._items[index] = converter(this._items[index]);
list._size = this._size;
return list;
Where TOutput is whatever type you're converting to, and converter is a delegate indicating the method that will do the conversion.
So it loops through the List you passed in, running each element through the method you specify, and then returns a new List of the specified type.
For precise timing in your scenarios you need to measure yourself.
Do not expect any miracles - it has to be an O(n) operation, since each element needs to be converted and added to the destination list.
Consider using Enumerable.Select instead, as its lazy evaluation may avoid a second copy of a large list, especially if you need to do any filtering of items along the way.
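For example, a lazy Where + Select pipeline never materializes a second full-size list; items are filtered, projected, and consumed one at a time:

```csharp
using System.Collections.Generic;
using System.Linq;

List<int> source = Enumerable.Range(1, 1_000_000).ToList();

// Still lazy at this point: no copy of the million-item list exists.
var evensDoubled = source.Where(n => n % 2 == 0)
                         .Select(n => n * 2);

// Only the first handful of items is ever filtered and converted.
int firstFive = evensDoubled.Take(5).Sum();   // 4 + 8 + 12 + 16 + 20 = 60
```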

c# list methods: ElementAt(index) vs Find(content)

I am currently building a data structure which relies a lot on efficiency.
Can anyone provide me with resources on how the Find(item => item.X == myObject.Property) method actually works?
Does it iterate linearly throughout all elements until it finds the element?
And what if I know the index of myObject and I use ElementAt(index)?
Which will be the most efficient of these two please?
From the MSDN documentation on List<T>.Find
This method performs a linear search; therefore, this method is an O(n) operation, where n is Count.
I imagine that ElementAt is optimized for IList and will do a direct index. But since you're apparently using this object from the List concrete type anyway, why not just do a direct index? Like this:
var result = list[index];
If you already know the index, there is no point to searching. Just go straight to it.

How can I check a List collection for an object that has a particular property?

I have a List<IAgStkObject>. Each IAgStkObject has a property called InstanceName. How can I search through my List to find whether any of the contained IAgStkObject(s) have a particular InstanceName? In the past I would have used a foreach loop... but this seems too slow.
If the only thing you have is a List (not ordered by InstanceName), there is no faster way (if you do similar tests often, you can preprocess the data and create e.g. a Dictionary indexed by the InstanceName).
The only way different from “the past” would be those useful extension methods allowing you to write just
return myList.Any(item => item.InstanceName == "Searched name");
If the list is sorted by the InstanceName, you can use binary search algorithm, otherwise: no.
You would have to use a more advanced data structure (like a sorted list or a dictionary). I think a dictionary would be the solution here. It is very fast and easy to use.
But think: how many of the objects do you have? Are you sure looping through them is performance issue? If you have < 1000 of the objects, you absolutely don't have to worry (unless you want to do something in real time).
You can use Linq:
list.Any(o => o.InstanceName == "something")
But you cannot avoid looping through the list (in the Linq case it's done implicitly). If you want a performance gain, change your data structure. Maybe a dictionary (InstanceName -> IAgStkObject) is appropriate?
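A sketch of that dictionary approach, using a hypothetical StkObject class as a stand-in for IAgStkObject (only the InstanceName property matters here):

```csharp
using System.Collections.Generic;
using System.Linq;

var objects = new List<StkObject>
{
    new StkObject { InstanceName = "Satellite1" },
    new StkObject { InstanceName = "Facility1" },
};

// Build the index once (O(n)); each lookup afterwards is O(1).
// Note: ToDictionary throws if two objects share an InstanceName.
Dictionary<string, StkObject> byName = objects.ToDictionary(o => o.InstanceName);

bool exists = byName.ContainsKey("Satellite1");   // no scan of the list

// Hypothetical stand-in for IAgStkObject.
public class StkObject
{
    public string InstanceName { get; set; }
}
```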

Fastest way to find out whether two ICollection<T> collections contain the same objects

What is the fastest way to find out whether two ICollection<T> collections contain precisely the same entries? Brute force is clear, I was wondering if there is a more elegant method.
We are using C# 2.0, so no extension methods if possible, please!
Edit: the answer would be interesting both for ordered and unordered collections, and would hopefully be different for each.
use C5
http://www.itu.dk/research/c5/
ContainsAll
"Check if all items in a supplied collection are in this bag (counting multiplicities).
The items to look for.
True if all items are found."
[Tested]
public virtual bool ContainsAll<U>(SCG.IEnumerable<U> items) where U : T
{
    HashBag<T> res = new HashBag<T>(itemequalityComparer);
    foreach (T item in items)
        if (res.ContainsCount(item) < ContainsCount(item))
            res.Add(item);
        else
            return false;
    return true;
}
First compare the .Count of the collections; if they have the same count, then do a brute-force compare on all elements. The worst-case scenario is O(n). This is for the case where the order of elements needs to be the same.
For the second case, where the order is not the same, you need a dictionary to store the count of elements found in the collections. Here's a possible algorithm:
Compare collection Count: return false if they are different
Iterate the first collection
If an item doesn't exist in the dictionary, add an entry with Key = Item, Value = 1 (the count)
If the item exists, increment the count for the item in the dictionary
Iterate the second collection
If an item is not in the dictionary, return false
If the item is in the dictionary, decrement the count for the item
If the count reaches 0, remove the item
return Dictionary.Count == 0;
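The steps above can be sketched as a C# 2.0-compatible method (no extension methods, and duplicates are respected, i.e. multiset equality):

```csharp
using System.Collections.Generic;

static class CollectionCompare
{
    // O(n) time, O(n) extra space. Assumes items are usable as dictionary
    // keys (non-null, sensible Equals/GetHashCode).
    public static bool SameEntries<T>(ICollection<T> first, ICollection<T> second)
    {
        if (first.Count != second.Count) return false;

        Dictionary<T, int> counts = new Dictionary<T, int>();
        foreach (T item in first)
        {
            int c;
            counts.TryGetValue(item, out c);   // c is 0 when absent
            counts[item] = c + 1;
        }
        foreach (T item in second)
        {
            int c;
            if (!counts.TryGetValue(item, out c)) return false;
            if (c == 1) counts.Remove(item);
            else counts[item] = c - 1;
        }
        return counts.Count == 0;
    }
}
```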
For ordered collections, you can use the SequenceEqual() extension method defined by System.Linq.Enumerable:
if (firstCollection.SequenceEqual(secondCollection))
You mean the same entries or the same entries in the same order?
Anyway, assuming you want to compare whether they contain the same entries in the same order, "brute force" is really your only option in C# 2.0. I know what you mean by inelegant, but if the atomic comparison itself is O(1), the whole process should be O(N), which is not that bad.
If the entries need to be in the same order (besides being the same), then I suggest - as an optimization - that you iterate both collections at the same time and compare the current entry in each collection. Otherwise, the brute force is the way to go.
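A sketch of that lock-step enumeration for the ordered case, again avoiding extension methods for C# 2.0 compatibility:

```csharp
using System.Collections.Generic;

static class SequenceCompare
{
    // Ordered comparison: walk both enumerators in lock-step and
    // bail out at the first mismatch. O(n) worst case.
    public static bool SameSequence<T>(ICollection<T> first, ICollection<T> second)
    {
        if (first.Count != second.Count) return false;

        IEqualityComparer<T> cmp = EqualityComparer<T>.Default;
        using (IEnumerator<T> e1 = first.GetEnumerator())
        using (IEnumerator<T> e2 = second.GetEnumerator())
        {
            while (e1.MoveNext() && e2.MoveNext())
                if (!cmp.Equals(e1.Current, e2.Current)) return false;
        }
        return true;
    }
}
```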
Oh, and another suggestion - you could override Equals for the collection class and implement the equality stuff in there (depends on you project, though).
Again, using the C5 library, having two sets, you could use:
C5.ICollection<T> set1 = new C5.HashBag<T>();
C5.ICollection<T> set2 = new C5.HashBag<T>();
if (set1.UnsequencedEquals(set2)) {
    // Do something
}
The C5 library includes a heuristic that actually tests the unsequenced hash codes of the two sets first (see C5.ICollection<T>.GetUnsequencedHashCode()) so that if the hash codes of the two sets are unequal, it doesn't need to iterate over every item to test for equality.
Also something of note to you is that C5.ICollection<T> inherits from System.Collections.Generic.ICollection<T>, so you can use C5 implementations while still using the .NET interfaces (though you have access to less functionality through .NET's stingy interfaces).
Brute force takes O(n) - comparing all elements (assuming they are sorted) - which I would think is the best you can do, unless some property of the data makes it easier.
For the unsorted case, I guess it's O(n*n).
In that case, I would think a solution based around a merge sort would probably help.
For example, could you re-model it so that there was only one collection? Or 3 collections, one for those in collection A only, one for B only and for in both - so if the A only and B only are empty - then they are the same... I am probably going off on totally the wrong tangent here...
