What is the fastest way of changing Dictionary<K,V>? - c#

This is an algorithmic question.
I have got a Dictionary<object,Queue<object>>. Each queue contains one or more elements. I want to remove all queues with only one element from the dictionary. What is the fastest way to do it?
Pseudo-code: foreach (item in dict) if (item.Value.Count == 1) dict.Remove(item.Key);
It is easy to do it in a loop (not foreach, of course), but I'd like to know which approach is the fastest one here.
Why I want it: I use that dictionary to find duplicate elements in a large set of objects. The Key in dictionary is kind of a hash of the object, the Value is a queue of all objects found with the same hash. Since I want only duplicates, I need to remove all items with just a single object in associated queue.
Update:
It may be important to know that in a regular case there are just a few duplicates in a large set of objects. Let's assume 1% or less. So possibly it could be faster to leave the Dictionary as is and create a new one from scratch with just the selected elements from the first one... and then delete the first Dictionary completely. I think it depends on the computational complexity of the Dictionary methods used in each approach.
I really want to see this problem on a theoretical level because as a teacher I want to discuss it with students. I didn't provide any concrete solution myself because I think it is really easy to do it. The question is which approach is the best, the fastest.

var itemsWithOneEntry = dict.Where(x => x.Value.Count == 1)
                            .Select(x => x.Key)
                            .ToList();

foreach (var item in itemsWithOneEntry) {
    dict.Remove(item);
}
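Alternatively, following the idea in the question's update, you could leave the original dictionary untouched and build a new one that contains only the real duplicates. A minimal LINQ sketch (using the dict from the question):

var duplicatesOnly = dict.Where(pair => pair.Value.Count > 1)
                         .ToDictionary(pair => pair.Key, pair => pair.Value);

This does one pass over the dictionary and one insert per surviving key, which is attractive when duplicates are rare (the 1% case described in the update).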

Instead of trying to optimize traversal of the collection, how about optimizing the content of the collection so that it only ever includes the duplicates? This would require changing your collection algorithm to something like this:
var duplicates = new Dictionary<object, Queue<object>>();
var possibleDuplicates = new Dictionary<object, object>();

foreach (var item in original)
{
    if (possibleDuplicates.ContainsKey(item))
    {
        // second occurrence: promote to the duplicates dictionary
        duplicates.Add(item, new Queue<object>(new[] { possibleDuplicates[item], item }));
        possibleDuplicates.Remove(item);
    }
    else if (duplicates.ContainsKey(item))
    {
        // third or later occurrence
        duplicates[item].Enqueue(item);
    }
    else
    {
        // first occurrence
        possibleDuplicates.Add(item, item);
    }
}

Note that you should probably measure the impact of this on the performance in a realistic scenario before you bother to make your code any more complex than it really needs to be. Most imagined performance problems are not in fact the real cause of slow code.
But supposing you do find that you could get a speed advantage by avoiding a linear search for queues of length 1, you could solve this problem with a technique called indexing.
As well as your dictionary containing all the queues, you maintain an index container (probably another dictionary) that only contains the queues of length 1, so when you need them they are already available separately.
To do this, you need to enhance all the operations that modify the length of the queue, so that they have the side-effect of updating the index container.
One way to do it is to define a class ObservableQueue. This would be a thin wrapper around Queue except it also has a ContentsChanged event that fires when the number of items in the queue changes. Use ObservableQueue everywhere instead of the plain Queue.
Then, when you create a new queue, attach to its ContentsChanged event a handler that checks whether the queue has exactly one item. Based on that, you either insert the queue into the index container or remove it.
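A rough sketch of what such an ObservableQueue might look like (this is an illustration of the idea, not a complete or prescribed implementation):

public class ObservableQueue<T>
{
    private readonly Queue<T> _inner = new Queue<T>();

    // Fired whenever the number of items changes; the argument is the new count.
    public event Action<int> ContentsChanged;

    public int Count { get { return _inner.Count; } }

    public void Enqueue(T item)
    {
        _inner.Enqueue(item);
        RaiseContentsChanged();
    }

    public T Dequeue()
    {
        T item = _inner.Dequeue();
        RaiseContentsChanged();
        return item;
    }

    private void RaiseContentsChanged()
    {
        var handler = ContentsChanged;
        if (handler != null)
            handler(_inner.Count);
    }
}

The handler you attach can then add the queue to the index dictionary when the new count is 1 and remove it when the count becomes anything else.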

Related

Which collection would be appropriate for this task?

I am working with C# and now trying to improve an algorithm (different story there), and to do that I need to have this data structure:
As you can see it is a linked list, where each node can have zero or one "follower" (the right ones). I am still considering whether more than one is necessary.
I could implement these linked lists by myself "raw" but I am thinking it would be much better if I use a collection from the ones available (such as List etc).
So far I am thinking of building a class "PairClass" which will have a "first element" and a "follower" (the left node and the right node). This could change if I decide to include more than one linked node (follower). Then I would use a List<PairClass>.
One final consideration is that it would be nice if the data collection permits me to get the follower by giving the first element in an efficient manner.
Due to this last consideration, I am not sure if List<PairClass> would be the best approach.
Can someone advise me on what to use in these cases? I am always open to learning and discussing better ways of doing things. Basically I am asking for an efficient solution to the problem.
EDIT: (in response to the comments)
How do you identify each node, is there an ID? or will the index in a list suffice?
So far, I am content with using just simple integers. But I guess you are right; you just gave me an idea, and perhaps the solution I need is simpler than I thought!
What are your use cases? How often will you be adding or removing elements? Are you going to iterate over this collection?
I will be adding elements often. The "follower" would likely be replaced often too. The elements are not going to be removed. I am going to iterate over this collection in the end, the reason being that followers are going to be eliminated as elements of consideration and replaced by their first element
(Side note) The reason I am doing this is because I need to modify an algorithm that is taking too much time. The algorithm performs too many scans on an image (which takes time), so I plan to build this structure to solve the problem; therefore speed is a consideration.
You really need to add more details; however, based on your description:
If you don't need to iterate over the list in order
If you have a key for each node
If you want fast lookups
You could use a Dictionary<Key,Node>
Node
public class Node
{
// examples
public string Id {get;set;}
public Node Parent {get;set;}
public Node Child {get;set;}
public Node Sibling {get;set;}
}
Option 1
var nodes = new Dictionary<string,Node>();
// add like this
nodes.Add(node.Id,node);
// look up like this
node = nodes[key];
// access its relatives
node.Parent
node.Child
node.Sibling
If you want to iterate over the list often
If the index is all you need to look up the node
Or if you want to query the list via Linq
Option 2
var list = new List<Node>();
// lookup via index
var node = list[index];
// lookup via Linq
var node = list.FirstOrDefault(x => x.Id == someId);
If it is a single-follower scenario, then I would suggest a dictionary of lists as a possible candidate: the dictionary makes the "first element" lookup fast, and with a single follower per entry you could even use a linked list for the values.
If it is a multiple-follower scenario, I would suggest a dictionary of dictionaries, which makes the whole collection fast to access both by first element and by follower.
Saruman gave a fairly good example of implementation.
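For the multiple-follower case, a minimal sketch of the nested-dictionary idea (the names here are illustrative and build on the Node class above):

// outer key: the "first element" node's Id; inner dictionary: its followers keyed by their Id
var followers = new Dictionary<string, Dictionary<string, Node>>();

// add followerNode (some Node you already have) under the node with Id "a",
// creating the inner dictionary on first use
Dictionary<string, Node> inner;
if (!followers.TryGetValue("a", out inner))
{
    inner = new Dictionary<string, Node>();
    followers["a"] = inner;
}
inner[followerNode.Id] = followerNode;

// look up all followers of "a"
var followersOfA = followers["a"].Values;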

Poor use of dictionary?

I've read on here that iterating through a dictionary is generally considered an abuse of the data structure, and that something else should be used instead.
However, I'm having trouble coming up with a better way to accomplish what I'm trying to do.
When a tag is scanned I use its ID as the key and the value is a list of zones it was seen in. About every second I check to see if a tag in my dictionary has been seen in two or more zones and if it has, queue it up for some calculations.
for (int i = 0; i < TagReads.Count; i++)
{
    var tag = TagReads.ElementAt(i).Value;
    if (tag.ZoneReads.Count > 1)
    {
        Report.Tags.Enqueue(tag);
        Boolean success = false;
        do
        {
            success = TagReads.TryRemove(tag.Epc, out TagInfo outTag);
        } while (!success);
    }
}
I feel like a dictionary is the correct choice here because there can be many tags to look up but something about this code nags me as being poor.
As far as efficiency goes, the speed is fine for now in our small-scale test environment, but I don't have a good way to find out how it will work on a massive scale until it is put to use, hence my concern.
I believe that there's an alternative approach which doesn't involve iterating a big dictionary.
First of all, you need to create a HashSet<T> of tags in which you'll store those tags that have been detected in two or more zones. We'll call it tagsDetectedInMoreThanTwoZones.
And you may refactor your code flow as follows:
A. Whenever you detect a tag in one zone...
Add the tag and the zone to the main dictionary.
Create an exclusive lock against tagsDetectedInMoreThanTwoZones to avoid undesired behaviors in B.
Check if the key has more than one zone. If this is true, add it to tagsDetectedInMoreThanTwoZones.
Release the lock against tagsDetectedInMoreThanTwoZones.
B. Whenever you need to process a tag which has been detected in more than one zone...
Create an exclusive lock against tagsDetectedInMoreThanTwoZones to avoid more than one thread trying to process them.
Iterate tagsDetectedInMoreThanTwoZones.
Use each tag in tagsDetectedInMoreThanTwoZones to get the zones in your current dictionary.
Clear tagsDetectedInMoreThanTwoZones.
Release the exclusive lock against tagsDetectedInMoreThanTwoZones.
Now you'll iterate only those tags that you already know have been detected in more than one zone!
In the long run, you can even make per-region partitions so you never get a tagsDetectedInMoreThanTwoZones set with too many items to iterate, and each set could be consumed by a dedicated thread!
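A minimal sketch of that flow, assuming tags are identified by their Epc string and that TagReads maps each tag to the list of zones it was seen in (the member names and types here are illustrative, not the poster's actual ones):

private readonly Dictionary<string, List<int>> TagReads = new Dictionary<string, List<int>>();
private readonly HashSet<string> tagsDetectedInMoreThanTwoZones = new HashSet<string>();
private readonly object sync = new object();

// A. called whenever a tag is detected in a zone
public void OnTagRead(string epc, int zone)
{
    List<int> zones;
    if (!TagReads.TryGetValue(epc, out zones))
    {
        zones = new List<int>();
        TagReads[epc] = zones;
    }
    zones.Add(zone);

    lock (sync)
    {
        if (zones.Count > 1)
            tagsDetectedInMoreThanTwoZones.Add(epc);
    }
}

// B. called periodically to process tags seen in more than one zone
public void ProcessMultiZoneTags()
{
    lock (sync)
    {
        foreach (var epc in tagsDetectedInMoreThanTwoZones)
        {
            var zones = TagReads[epc];
            // enqueue the tag for calculations here
        }
        tagsDetectedInMoreThanTwoZones.Clear();
    }
}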
If you are going to do a lot of lookups in your code and only sometimes iterate through the whole thing, then I think the dictionary use is OK. I would like to point out, though, that your use of ElementAt is more alarming. ElementAt performs very poorly when used on objects that do not implement IList<T>, and the dictionary does not. For an IEnumerable<T> that does not implement IList, the way the nth element is found is through iteration, so your for-loop will iterate the dictionary once for each element. You are better off with a standard foreach.
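As a rough illustration of that foreach (assuming TagReads is a ConcurrentDictionary<string, TagInfo>, which the TryRemove call suggests, so removing entries while enumerating is allowed):

foreach (var pair in TagReads)
{
    var tag = pair.Value;
    if (tag.ZoneReads.Count > 1)
    {
        Report.Tags.Enqueue(tag);
        TagInfo removed;
        TagReads.TryRemove(pair.Key, out removed);
    }
}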
I feel like this is a good use for a dictionary, giving you good access speed when you want to check if an ID is already in the collection.

Does foreach loop work more slowly when used with a not stored list or array?

I am wondering whether a foreach loop works more slowly when the list or array it iterates over is not stored in a variable first.
I mean like that:
foreach (int number in list.OrderBy(x => x.Value))
{
    // DoSomething();
}
Does the loop in this code recalculate the sorting on every iteration or not?
The loop using stored value:
List<Tour> list = tours.OrderBy(x => x.Value) as List<Tour>;
foreach (int number in list)
{
// DoSomething();
}
And if it does, which code shows the better performance, storing the value or not?
This is often counter-intuitive, but generally speaking, the option that is best for performance is to wait as long as possible to materialize results into a concrete structure like a list or array. Please keep in mind that this is a generalization, and so there are plenty of cases where it doesn't hold. Nevertheless, the better first instinct is to avoid creating the list for as long as possible.
To demonstrate with your sample, we have these two options:
var list = tours.OrderBy(x => x.Value).ToList();
foreach (int number in list)
{
    // DoSomething();
}

vs this option:

foreach (int number in list.OrderBy(x => x.Value))
{
    // DoSomething();
}
To understand what is going on here, you need to look at the .OrderBy() extension method. Reading the linked documentation, you'll see it returns a IOrderedEnumerable<TSource> object. With an IOrderedEnumerable, all of the sorting needed for the foreach loop is already finished when you first start iterating over the object (and that, I believe, is the crux of your question: No, it does not re-sort on each iteration). Also note that both samples use the same OrderBy() call. Therefore, both samples have the same problem to solve for ordering the results, and they accomplish it the same way, meaning they take exactly the same amount of time to reach that point in the code.
The difference in the code samples, then, is entirely in using the foreach loop directly vs first calling .ToList(), because in both cases we start from an IOrderedEnumerable. Let's look closely at those differences.
When you call .ToList(), what do you think happens? This method is not magic. There is still code here which must execute in order to produce the list. This code still effectively uses its own foreach loop that you can't see. Additionally, where once you only needed to worry about enough RAM to handle one object at a time, you are now forcing your program to allocate a new block of RAM large enough to hold references for the entire collection. Moving beyond references, you may also potentially need to create new memory allocations for the full objects, if you were reading from a stream or database reader that really only needed one object in RAM at a time. This is an especially big deal on systems where memory is the primary constraint, which is often the case with web servers, where you may be serving and maintaining session RAM for many, many sessions, but each session only occasionally uses any CPU time to request a new page.
Now I am making one assumption here: that you are working with something that is not already a list. What I mean by this is that the previous paragraphs talked about needing to convert an IOrderedEnumerable into a List, but not about converting a List into some form of IEnumerable. I need to admit that there is some small overhead in creating and operating the state machine that .NET uses to implement those objects. However, I think this is a good assumption. It turns out to be true far more often than we realize. Even in the samples for this question, we're paying this cost regardless, by the simple virtue of calling the OrderBy() function.
In summary, there can be some additional overhead in using a raw IEnumerable vs converting to a List, but there probably isn't. Additionally, you are almost certainly saving yourself some RAM by avoiding the conversions to List whenever possible... potentially a lot of RAM.
Yes and no.
Yes the foreach statement will seem to work slower.
No your program has the same total amount of work to do so you will not be able to measure a difference from the outside.
What you need to focus on is not using a lazy operation (in this case OrderBy) multiple times without a .ToList() or .ToArray(). In this case you are only using it once (in the foreach), but it is an easy thing to miss.
Edit: Just to be clear. The as statement in the question will not work as intended but my answer assumes no .ToList() after OrderBy .
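To make the point about lazy operations concrete, here is a small illustrative sketch: each time you start enumerating the OrderBy result, the sort runs again, so storing the result only pays off when you iterate it more than once.

var ordered = tours.OrderBy(x => x.Value); // deferred: nothing is sorted yet

foreach (var tour in ordered) { }          // the sort runs here
foreach (var tour in ordered) { }          // ...and runs again here

var stored = ordered.ToList();             // the sort runs once and the results are stored
foreach (var tour in stored) { }           // no further sorting
foreach (var tour in stored) { }           // no further sorting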
This line won't run:
List<Tour> list = tours.OrderBy(x => x.Value) as List<Tour>; // Returns null.
Instead, you want to store the results this way:
List<Tour> list = tours.OrderBy(x => x.Value).ToList();
And yes, the second option (storing the results) will enumerate faster on any later passes, as it skips repeating the sorting operation.

LINQ ToDictionary initial capacity

I regularly use the LINQ extension method ToDictionary, but am wondering about the performance. There is no parameter to define the capacity for the dictionary and with a list of 100k items or more, this could become an issue:
IList<int> list = new List<int> { 1, 2, ... , 1000000 };
IDictionary<int, string> dictionary = list.ToDictionary(x => x, x => x.ToString("D7"));
Does the implementation actually take the list.Count and pass it to the constructor for the dictionary?
Or is the resizing of the dictionary fast enough, so I don't really have to worry about it?
Does the implementation actually take the list.Count and pass it to the constructor for the dictionary?
No. According to ILSpy, the implementation is basically this:
Dictionary<TKey, TElement> dictionary = new Dictionary<TKey, TElement>(comparer);
foreach (TSource current in source)
{
    dictionary.Add(keySelector(current), elementSelector(current));
}
return dictionary;
If you profile your code and determine that the ToDictionary operation is your bottleneck, it's trivial to make your own function based on the above code.
Does the implementation actually take the list.Count and pass it to the constructor for the dictionary?
This is an implementation detail and it shouldn't matter to you.
Or is the resizing of the dictionary fast enough, so I don't really have to worry about it?
Well, I don't know. Only you know whether or not this is actually a bottleneck in your application, and whether or not the performance is acceptable. If you want to know if it's fast enough, write the code and time it. As Eric Lippert is wont to say, if you want to know how fast two horses are, do you pit them in a race against each other, or do you ask random strangers on the Internet which one is faster?
That said, I'm having a really hard time imagining this being a bottleneck in any realistic application. If adding items to a dictionary is a bottleneck in your application, you're doing something wrong.
I don't think it'll be a bottleneck, TBH. And in case you have real complaints and issues, you should look into it at that time to see if you can improve it; maybe you can do paging instead of converting everything at once.
I don't know about resizing the dictionary, but checking the implementation with dotPeek.exe suggests that the implementation does not take the list length.
What the code basically does is:
create a new dictionary
iterate over sequence and add items
If you find this a bottleneck, it would be trivial to create your own extension method ToDictionaryWithCapacity that works on something that can have its length actually computed without iterating the whole thing.
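A sketch of what such an extension method could look like (ToDictionaryWithCapacity is a made-up name, not part of the framework; it accepts ICollection<TSource> so that Count is available without enumerating the source twice):

public static class DictionaryExtensions
{
    public static Dictionary<TKey, TElement> ToDictionaryWithCapacity<TSource, TKey, TElement>(
        this ICollection<TSource> source,
        Func<TSource, TKey> keySelector,
        Func<TSource, TElement> elementSelector)
    {
        // pre-size the dictionary using the known element count
        var dictionary = new Dictionary<TKey, TElement>(source.Count);
        foreach (TSource item in source)
        {
            dictionary.Add(keySelector(item), elementSelector(item));
        }
        return dictionary;
    }
}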
Just scanned the Dictionary implementation. Basically, when it starts to fill up, the internal storage is resized by roughly doubling to a nearby prime. So that should not happen too frequently.
Does the implementation actually take the list.Count and pass it to the constructor for the dictionary?
It doesn't. That's because calling Count() would enumerate the source, and then adding the items to the dictionary would enumerate the source a second time. It's not a good idea to enumerate the source twice; for example, this would fail on DataReaders.
Or is the resizing of the dictionary fast enough, so I don't really have to worry about it?
The Dictionary.Resize method is used to expand the dictionary. It allocates new internal arrays and copies the existing entries into them (using Array.Copy). The dictionary size is increased in prime-number steps.
This is not the fastest way, but fast enough if you do not know the size.

C# Dictionary Loop Enhancement

I have a dictionary with around 1 million items. I am constantly looping through the dictionary:
public void DoAllJobs()
{
    foreach (KeyValuePair<uint, BusinessObject> p in _dictionnary)
    {
        if (p.Value.MustDoJob)
            p.Value.DoJob();
    }
}
The execution is a bit long, around 600 ms, and I would like to decrease it. Here are the constraints:
1. MustDoJob values mostly stay the same between two calls to DoAllJobs().
2. 60-70% of the MustDoJob values == false.
3. From time to time, MustDoJob changes for 200,000 pairs.
4. Some p.Value.DoJob() calls cannot be computed at the same time (COM object call).
5. Here, I do not need the key part of the _dictionnary object, but I really do need it somewhere else.
I wanted to do the following:
Parallelize, but I am not sure it is going to be effective due to 4.
Sort the dictionary based on 1. and 2. (and stop when I find the first MustDoJob == false), but I am wondering what 3. would result in.
I did not implement any of the previous ideas since it could be a lot of work, and I would like to investigate other options first. So... any ideas?
What I would suggest is that your business object could raise an event to indicate that it needs to do a job when MustDoJob becomes true. You can subscribe to that event, store references to those objects in a simple list, and then process the contents of that list when the DoAllJobs() method is called.
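A minimal sketch of that idea (the event name and wiring are illustrative, not a prescribed design):

public class BusinessObject
{
    // raised when MustDoJob transitions to true
    public event Action<BusinessObject> JobRequired;

    private bool _mustDoJob;
    public bool MustDoJob
    {
        get { return _mustDoJob; }
        set
        {
            _mustDoJob = value;
            if (value && JobRequired != null)
                JobRequired(this);
        }
    }

    public void DoJob() { /* ... */ }
}

// in the class that owns the dictionary
private readonly List<BusinessObject> _pendingJobs = new List<BusinessObject>();

// when an object is added to the dictionary, subscribe once:
// obj.JobRequired += o => _pendingJobs.Add(o);

public void DoAllJobs()
{
    // only the objects that actually need work are visited
    foreach (var obj in _pendingJobs)
        obj.DoJob();
    _pendingJobs.Clear();
}

In a real implementation you would also want to guard against adding the same object twice (for example by using a HashSet instead of a List).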
My first suggestion would be to use just the values from the dictionary:
foreach (BusinessObject value in _dictionnary.Values)
{
    if (value.MustDoJob)
    {
        value.DoJob();
    }
}
With LINQ this could be even easier:
foreach (BusinessObject value in _dictionnary.Values.Where(v => v.MustDoJob))
{
    value.DoJob();
}
That makes it clearer. However, it's not clear what else is actually causing you a problem. How quickly do you need to be able to iterate over the dictionary? I expect it's already pretty nippy... is anything actually wrong with this brute force approach? What's the impact of it taking 600ms to iterate over the collection? Is that 600ms when nothing needs to do any work?
One thing to note: you can't change the contents of the dictionary while you're iterating over it - whether in this thread or another. That means not adding, removing or replacing key/value pairs. It's okay for the contents of a BusinessObject to change, but the dictionary relationship between the key and the object can't change. If you want to minimise the time during which you can't modify the dictionary, you can take a copy of the list of references to objects which need work doing, and then iterate over that:
foreach (BusinessObject value in _dictionnary.Values
                                             .Where(v => v.MustDoJob)
                                             .ToList())
{
    value.DoJob();
}
Try using a profiler first. Constraint 4 makes me curious - 600 ms may not be that much if the COM object takes most of the time, and then it is either parallelize or live with it.
I would make sure first - with a profiler run - that you don't target the totally wrong issue here.
Having established that the loop really is the problem (see TomTom's answer), I would maintain a list of the items on which MustDoJob is true -- e.g., when MustDoJob is set, add it to the list, and when you process and clear the flag, remove it from the list. (This might be done directly by the code manipulating the flag, or by raising an event when the flag changes; it depends on what you need.) Then you loop through the list (which is only going to be 30-40% of the length, given that 60-70% of the flags are false), not the dictionary. The list might contain the object itself or just its key in the dictionary, although it will be more efficient if it holds the object itself, as you avoid the dictionary lookup. It does depend on how frequently you're queuing 200k of them, and how time-critical the queuing vs. the execution is.
But again: Step 1 is make sure you're solving the right problem.
The use of a dictionary to me implies that the intention is to find items by a key, rather than visit every item. On the other hand, 600ms for looping through a million items is respectable.
Perhaps alter your logic so that you can simply pick the relevant items satisfying the condition directly out of the dictionary.
Use a List of KeyValuePairs instead. This means you can iterate over it super-quickly by doing
List<KeyValuePair<string, object>> list = ...;
int totalItems = list.Count;
for (int x = 0; x < totalItems; x++)
{
    // whatever you plan to do with them, you have access to both KEY and VALUE.
}
I know this post is old, but I was looking for a way to iterate over a dictionary without the increased overhead of the Enumerator being created (GC and all), or generally a faster way to iterate over it.
