BlockingCollection contains only methods to add individual items. What if I want to add a collection? Should I just use a foreach loop?
Why doesn't BlockingCollection contain a method for adding a whole collection? I think such a method could be pretty useful.
The ICollection interfaces and many of the BCL list-type classes don't have an AddRange method for some reason, and it is annoying.
Yes, you'll need to foreach over the collection; you could write your own extension method if you're doing this a lot.
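For example, a minimal sketch of such an extension method (the name AddRange is my own choice; note that the items are added one at a time, so the group insertion is not atomic):

using System.Collections.Concurrent;
using System.Collections.Generic;

public static class BlockingCollectionExtensions
{
    // Convenience only: each item is added individually, so other
    // producers and consumers may interleave between the additions.
    public static void AddRange<T>(this BlockingCollection<T> collection,
        IEnumerable<T> items)
    {
        foreach (T item in items)
        {
            collection.Add(item);
        }
    }
}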
The BlockingCollection<T> is a wrapper around an underlying collection that implements the IProducerConsumerCollection<T> interface, which by default is a ConcurrentQueue<T>. This interface has only the TryAdd method for adding elements; it doesn't have a TryAddRange. The reason is, I guess, that not all native IProducerConsumerCollection<T> implementations are equipped with AddRange functionality. The ConcurrentStack<T> does have the PushRange method, but the ConcurrentQueue<T> has no equivalent.
My understanding is that if this API existed, it would have atomic semantics. It wouldn't be just a convenience method for reducing the code needed for doing multiple non-atomic insertions. Quoting from the ConcurrentStack<T>.PushRange documentation:
When adding multiple items to the stack, using PushRange is a more efficient mechanism than using Push one item at a time. Additionally, PushRange guarantees that all of the elements will be added atomically, meaning that no other threads will be able to inject elements between the elements being pushed. Items at lower indices in the items array will be pushed before items at higher indices.
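For example, a minimal sketch (the item at the highest index is pushed last and therefore ends up at the top of the stack):

using System.Collections.Concurrent;

var stack = new ConcurrentStack<string>();
stack.PushRange(new[] { "C", "B", "A" }); // atomic: no other thread can interleave
stack.TryPop(out string top);             // top == "A"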
Atomicity is especially important when adding to a blocking stack, because an added item can be taken immediately by another thread that is currently blocked. If I add the items A, B and C in the order C-B-A, it's because I want A to be placed at the top of the stack and picked up by another thread first. But since the additions are not atomic, another thread could win the race and take the C before the current thread manages to add the B and the A. According to my experiments this happens rarely, but there is no way to prevent it. If it is absolutely necessary to enforce the insertion order, there is no option other than implementing a custom BlockingStack<T> collection from scratch.
Related
In general, would using the ConcurrentBag type be an acceptable thread-safe substitute for a List? I have read some answers here that suggested using a ConcurrentBag when running into thread-safety problems with generic Lists in C#.
After reading a bit about ConcurrentBag, however, it seems that performing a lot of searches and looping through the collection does not match its intended usage. It seems to be mostly intended for producer/consumer problems, where jobs are (somewhat randomly) added to and removed from the collection.
This is an example of the type of (IEnumerable) operations I want to use with the ConcurrentBag:
...
private readonly ConcurrentBag<Person> people = new ConcurrentBag<Person>();

public void AddPerson(Person person)
{
    people.Add(person);
}

public Person GetPersonWithName(string name)
{
    return people.Where(x => name.Equals(x.Name)).FirstOrDefault();
}
...
Would this cause performance concerns, and is it even a correct way to use a ConcurrentBag collection?
.NET's built-in concurrent data structures are mostly designed for patterns like producer-consumer, where there is a constant flow of work through the container.
In your case, the list seems to be long-term storage (relative to the lifetime of the class), rather than just a resting place for some data before a consumer comes along to take it away and do something with it. In this case I'd suggest using a normal List<T> (or whichever non-concurrent collection is most appropriate for the operations you're intending), and simply using locks to control access to it.
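For instance, a minimal sketch along these lines, reusing the Person example from the question (the lock object name is my own):

private readonly List<Person> people = new List<Person>();
private readonly object peopleLock = new object();

public void AddPerson(Person person)
{
    lock (peopleLock)
    {
        people.Add(person);
    }
}

public Person GetPersonWithName(string name)
{
    lock (peopleLock)
    {
        // The whole search runs under the lock, so the list cannot
        // be mutated while it is being enumerated.
        return people.FirstOrDefault(x => name.Equals(x.Name));
    }
}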
A Bag is just the most general form of collection, allowing multiple identical entries, and without even the ordering of a List. It does happen to be useful in producer/consumer contexts where fairness is not an issue, but it is not specifically designed for that.
Because a Bag does not have any structure with respect to its contents, it's not very suitable for performing searches. In particular, the use case you mention will require time that scales with the size of the bag. A HashSet might be better if you don't need to be able to store multiple copies of an item and if manual synchronization is acceptable for your use case.
As far as I understand it, the ConcurrentBag makes use of multiple lists: it creates one list per thread that uses the ConcurrentBag. Thus, when the same thread reads or accesses the ConcurrentBag again, the performance should be roughly the same as with a normal List; but when the ConcurrentBag is accessed from a different thread there is a performance overhead, as it has to search for the value in the "internal" lists created for each thread.
The MSDN page says the following regarding the ConcurrentBag:
Bags are useful for storing objects when ordering doesn't matter, and unlike sets, bags support duplicates. ConcurrentBag is a thread-safe bag implementation, optimized for scenarios where the same thread will be both producing and consuming data stored in the bag.
http://msdn.microsoft.com/en-us/library/dd381779%28v=VS.100%29.aspx
In general, would using the ConcurrentBag type be an acceptable thread-safe substitute for a List?
No, not in general, because (concurring with Warren Dew) a List is ordered, while a Bag is not (surely mine isn't ;)
But in cases where (potentially concurrent) reads greatly outnumber writes, you could just wrap your List with copy-on-write semantics.
That is a general solution, as you keep working with original List instances, except that (as explained in more detail in the link above) you have to make sure that everyone modifying the List uses the appropriate copy-on-write utility method - which you could enforce by handing out the list via List.AsReadOnly().
In highly concurrent programs, copy-on-write has many desirable performance properties in mostly-read scenarios, compared to locking.
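A minimal sketch of the copy-on-write idea, again using the Person example (the field and member names are my own):

// Readers use whatever reference is current; writers build a modified
// copy and swap the reference in under a lock.
private volatile IReadOnlyList<Person> people = new List<Person>().AsReadOnly();
private readonly object writeLock = new object();

public void AddPerson(Person person)
{
    lock (writeLock)
    {
        var copy = new List<Person>(people) { person };
        people = copy.AsReadOnly(); // reference assignment is atomic
    }
}

public Person GetPersonWithName(string name)
{
    // Capture the reference once; the captured snapshot never changes.
    IReadOnlyList<Person> snapshot = people;
    return snapshot.FirstOrDefault(x => name.Equals(x.Name));
}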
I'm working with large collections of objects and sequential reads of them.
I found most questions along these lines refer to multi-threading, but I am more concerned with errors within the thread itself due to misuse of a distributable library.
A system within the library manages a potentially large collection of objects; at one point it performs a sequential read of this collection, performing an operation on each element.
Depending on the element implementation, which can be extended outside the library, an object may attempt to remove itself from the collection.
I would like that to be an option, but if it happens while the collection is being sequentially read, it can lead to errors. I would like to be able to lock the contents of the collection while it's being read, and put any removal request on a schedule to be executed after the sequential read has finished.
The removal request has to go through the system, since objects do not have public access to the collection. I could just go with an isReading flag, but I wonder if there is a more elegant construct.
Does C# or .NET provide a tool to do this? Perhaps a way to lock the list contents so I can intercept removal requests during sequential reads? Or would I have to implement that behavior from scratch for this scenario?
You may want to look into using the SynchronizedCollection<T> class, available since .NET Framework 3.0.
Alternatively, have a look at the answer to this question: What is the difference between SynchronizedCollection<T> and the other concurrent collections?
You can use the following trick:

List<T> collection;

// Iterate backwards, so that removing the current item does not
// shift the positions of the items that haven't been visited yet.
for (int index = collection.Count - 1; index >= 0; --index)
{
    var item = collection[index];
    if (ShouldBeDeleted(item)) // hypothetical predicate: your removal condition
    {
        collection.RemoveAt(index); // faster than collection.Remove(item)
    }
}

This code will not crash because the collection was modified during iteration, and it will process each item of the collection.
Quite simple: other than ConcurrentDictionary (which I'll use if I have to, but it's not really the correct concept), is there any concurrent collection (IProducerConsumerCollection<T> implementation) that supports removal of specific items, based on simple equality of an item or a predicate defining a condition for removal?
Explanation: I have a multi-threaded, multi-stage workflow algorithm, which pulls objects from the DB and sticks them in a "starting" queue. From there they are grabbed by the next stage, further worked on, and stuffed into other queues. This process continues through a few more stages. Meanwhile, the first stage is invoked again by its supervisor and pulls objects out of the DB, and those can include objects still in process (because they haven't finished being processed and so haven't been re-persisted with the flag set saying they're done).
The solution I am designing is a master "in work" collection; objects go in that queue when they are retrieved for processing by the first stage, and are removed after they have been re-saved to the DB as "processed" by whatever stage of the workflow completed the necessary processing. While the object is in that list, it will be ignored if it is re-retrieved by the first stage.
I had planned to use a ConcurrentBag, but its only removal method (TryTake) removes an arbitrary item from the bag, not a specified one (and ConcurrentBag is slow in .NET 4). ConcurrentQueue and ConcurrentStack also do not allow removal of any item other than the next one they'll give you, leaving ConcurrentDictionary, which would work but is more than I need (all I really need is to store the Ids of the records being processed; they don't change during the workflow).
The reason why there is no such data structure is that list-like collections have O(n) lookup time for operations such as IndexOf and Remove(element): they enumerate through all the elements, checking them for equality.
Only hash tables have O(1) lookup time. In a concurrent scenario, an O(n) lookup would lead to a very long lock on the collection, and other threads would not be able to add elements during that time.
In a dictionary, only the bucket hit by the hash is locked; other threads can continue adding while one thread checks for equality among the elements in that bucket.
My advice is to go ahead and use ConcurrentDictionary.
By the way, you are right that ConcurrentDictionary is a bit oversized for your solution. What you really need is a quick way to check whether an object is in work or not. A HashSet would be perfect for that: it does basically nothing more than Add(element), Contains(element) and Remove(element). There is a ConcurrentHashSet implementation in Java. For C# I found this: How to implement ConcurrentHashSet in .Net. I don't know how good it is.
As a first step I would still write a wrapper with a HashSet-like interface around ConcurrentDictionary, bring it up and running, and then try different implementations and compare the performance.
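For example, a minimal sketch of such a wrapper (the type and member names are my own; the byte values are throwaway placeholders, since only the keys matter):

using System.Collections.Concurrent;

public class ConcurrentHashSet<T>
{
    // The dictionary's keys act as the set; the values are ignored.
    private readonly ConcurrentDictionary<T, byte> _dictionary =
        new ConcurrentDictionary<T, byte>();

    public bool Add(T item) => _dictionary.TryAdd(item, 0);
    public bool Contains(T item) => _dictionary.ContainsKey(item);
    public bool Remove(T item) => _dictionary.TryRemove(item, out _);
    public int Count => _dictionary.Count;
}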
As already explained in other posts, it's not possible to remove specific items from a Queue or ConcurrentQueue by default, but actually the easiest way around this is to extend or wrap the items:
public class QueueItem
{
    public Boolean IsRemoved { get; private set; }

    public void Remove() { IsRemoved = true; }
}
And when dequeuing:
QueueItem item = _Queue.Dequeue(); // Or TryDequeue if you use a ConcurrentQueue
if (!item.IsRemoved)
{
    // Do work here
}
It's really hard to make a collection thread-safe in the generic sense. There are so many factors that go into thread-safety, outside the responsibility or purview of a library/framework class, that affect its ability to be truly "thread-safe"... One of the drawbacks, as you've pointed out, is performance. It's impossible to write a performant collection that is also thread-safe, because it has to assume the worst...
The generally recommended practice is to use whatever collection you want and access it in a thread-safe way. This is basically why there aren't more thread-safe collections in the framework. More on this can be found at http://blogs.msdn.com/b/bclteam/archive/2005/03/15/396399.aspx#9534371
I have a main thread that populates a List<T>. Further, I create a chain of objects that will execute on different threads and require access to the List. The original list will never be written to after it's generated. My thought was to pass the list as IEnumerable<T> to the objects executing on other threads, mainly so that those implementing those objects can't write to the list by mistake. In other words, if the original list is guaranteed not to be written to, is it safe for multiple threads to use .Where or foreach on the IEnumerable?
I am not sure whether the iterator in itself is thread-safe if the original collection is never changed.
IEnumerable<T> can't be modified, so what can be non-thread-safe about it (if you don't modify the actual List<T>)?
For thread-safety problems you need both writing and reading operations.
The "iterator in itself" is instantiated anew for each foreach.
Edit: I simplified my answer a bit, but @Eric Lippert added a valuable comment. IEnumerable<T> doesn't define modifying methods, but that doesn't mean its access operators are thread-safe (GetEnumerator, MoveNext, etc.). The simplest example is a GetEnumerator implemented like this:
It returns the same IEnumerator instance every time
It resets that instance's position
A more sophisticated example is caching.
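To illustrate, a contrived (entirely hypothetical) implementation along those lines, which breaks as soon as two threads enumerate it concurrently:

using System.Collections;
using System.Collections.Generic;

class SharedEnumeratorSequence : IEnumerable<int>
{
    private readonly List<int> _items = new List<int> { 1, 2, 3 };
    private IEnumerator<int> _shared;

    public IEnumerator<int> GetEnumerator()
    {
        // Not thread-safe: every caller gets the SAME enumerator,
        // and each call resets it, corrupting any concurrent loop.
        if (_shared == null) _shared = _items.GetEnumerator();
        _shared.Reset();
        return _shared;
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}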
This is an interesting point, but fortunately I don't know of any standard class that has a non-thread-safe implementation of IEnumerable.
Each thread that calls Where or foreach gets its own enumerator - they don't share one enumerator object for the same list. So, since the List isn't being modified and each thread is working with its own enumerator, there should be no thread-safety issues.
You can see this at work even in one thread - just create a List of 10 objects and get two enumerators from it. Use one enumerator to enumerate through 5 items, then use the other to enumerate through 5 items. You will see that both enumerators enumerated only the first 5 items, and that the second one did not start where the first left off.
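A quick sketch of that experiment:

using System;
using System.Collections.Generic;

var list = new List<int> { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

using (IEnumerator<int> e1 = list.GetEnumerator())
using (IEnumerator<int> e2 = list.GetEnumerator())
{
    for (int i = 0; i < 5; i++) { e1.MoveNext(); Console.Write(e1.Current); } // 01234
    for (int i = 0; i < 5; i++) { e2.MoveNext(); Console.Write(e2.Current); } // 01234 again
}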
As long as you are certain that the List will never be modified then it will be safe to read from multiple threads. This includes the use of the IEnumerator instances it provides.
This is going to be true for most collections. In fact, all collections in the BCL should be stable during enumeration; in other words, the enumerator will not modify the data structure. I can think of some obscure cases, like a splay tree, where enumerating it might modify the structure. Again, none of the BCL collections do that.
If you are certain that the list will not be modified after creation, you should guarantee that by wrapping it in a ReadOnlyCollection<T>. Of course, if you keep the original list that the read-only collection wraps, you can still modify it; but if you toss the original list away, you're effectively making it permanently read-only.
From the Thread Safety section of the collection's documentation:
A ReadOnlyCollection can support multiple readers concurrently, as long as the collection is not modified.
So if you don't touch the original list again and stop referencing it, you can ensure that multiple threads can read it without worry (so long as you don't do anything wacky with trying to modify it again).
In other words, if the original list is guaranteed not to be written to, is it safe for multiple threads to use .Where or foreach on the IEnumerable?
Yes, it's only a problem if the list gets mutated.
But note that an IEnumerable<T> can be cast back to a List and then modified.
But there is another alternative: wrap your list in a ReadOnlyCollection<T> and pass that around. If you now throw away the original list, you have basically created a new immutable list.
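A minimal sketch of that approach (the variable names are my own):

using System.Collections.Generic;
using System.Collections.ObjectModel;

List<int> original = new List<int> { 1, 2, 3 };
ReadOnlyCollection<int> shared = original.AsReadOnly();
original = null; // drop the only mutable reference

// 'shared' can now be handed to multiple reader threads; unlike an
// IEnumerable<T> view of a List, it cannot be cast back to List<int>.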
If you are using .NET Framework 4.5 or greater, this could be a great solution:
http://msdn.microsoft.com/en-us/library/dd997305(v=vs.110).aspx
(Microsoft has already implemented a thread-safe enumerable)
How does SynchronizedCollection<T> and the concurrent collections in the System.Collections.Concurrent namespace differ from each other, apart from Concurrent Collections being a namespace and SynchronizedCollection<T> being a class?
SynchronizedCollection<T> and all of the classes in Concurrent Collections provide thread-safe collections. How do I decide when to use one over the other, and why?
The SynchronizedCollection<T> class was introduced first, in .NET 3.0, to provide a thread-safe collection class. It does this via locking, so that you essentially have a List<T> where every access is wrapped in a lock statement.
The System.Collections.Concurrent namespace is much newer: it wasn't introduced until .NET 4.0, and it includes a substantially improved and more diverse set of choices. These classes largely avoid coarse locks in favor of finer-grained synchronization, which means they should scale better when multiple threads access their data simultaneously. However, a class implementing the IList<T> interface is notably absent among these options.
So, if you're targeting version 4.0 of the .NET Framework, you should use one of the collections provided by the System.Collections.Concurrent namespace whenever possible. Just as with choosing between the various types of collections provided in the System.Collections.Generic namespace, you'll need to choose the one whose features and characteristics best fit your specific needs.
If you're targeting an older version of the .NET Framework or need a collection class that implements the IList<T> interface, you'll have to opt for the SynchronizedCollection<T> class.
This article on MSDN is also worth a read: When to Use a Thread-Safe Collection
The SynchronizedCollection<T> is a synchronized List<T>. It's a concept that can be devised in a second and implemented fully in about an hour: just wrap each method of a List<T> inside a lock (this), and you are done. Now you have a thread-safe collection that can cover all the needs of a multithreaded application. Except that it doesn't.
The shortcomings of the SynchronizedCollection<T> become apparent as soon as you try to do anything non-trivial with it. Specifically as soon as you try to combine two or more methods of the collection for a conceptually singular operation. Then you realize that the operation is not atomic, and cannot be made atomic without resorting to explicit synchronization (locking on the SyncRoot property of the collection), which undermines the whole purpose of the collection. Some examples:
Ensure that the collection contains unique elements: if (!collection.Contains(x)) collection.Add(x);. This code ensures nothing. The inherent race condition between Contains and Add allows duplicates to occur.
Ensure that the collection contains at most N elements: if (collection.Count < N) collection.Add(x);. The race condition between Count and Add allows more than N elements in the collection.
Replace "Foo" with "Bar": int index = collection.IndexOf("Foo"); if (index >= 0) collection[index] = "Bar";. When a thread reads the index, its value is immediately stale. Another thread might change the collection in a way that the index points to some other element, or it's out of range.
At this point you realize that multithreading is more demanding than what you originally thought. Adding a layer of synchronization around the API of an existing collection doesn't cut it. You need a collection that is designed from the ground up for multithreaded usage, and has an API that reflects this design. This was the motivation for the introduction of the concurrent collections in .NET Framework 4.0.
The concurrent collections, for example the ConcurrentQueue<T> and the ConcurrentDictionary<K,V>, are highly sophisticated components. They are orders of magnitude more sophisticated than the clumsy SynchronizedCollection<T>. They are equipped with special atomic APIs that are well suited for multithreaded environments (TryDequeue, GetOrAdd, AddOrUpdate etc), and also with implementations that aim at minimizing the contention under heavy usage. Internally they employ lock-free, low-lock and granular-lock techniques. Learning how to use these collections requires some study. They are not direct drop-in replacements of their non-concurrent counterparts.
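As a minimal sketch, the Contains/Add race from the examples above disappears when the check-and-insert is a single atomic call, here using a ConcurrentDictionary<K,V> as a set with throwaway values:

using System.Collections.Concurrent;

var unique = new ConcurrentDictionary<string, bool>();

// TryAdd checks for the key and inserts it in one atomic step,
// so no other thread can sneak a duplicate in between.
bool added = unique.TryAdd("Foo", true); // true: first insertion
bool again = unique.TryAdd("Foo", true); // false: already present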
Caution: the enumeration of a SynchronizedCollection<T> is not synchronized. Getting an enumerator with GetEnumerator is synchronized, but using the enumerator is not. So if one thread does a foreach (var item in collection) while another thread mutates the collection in any way (Add, Remove, etc.), the behavior of the program is undefined. The safe way to enumerate a SynchronizedCollection<T> is to get a snapshot of the collection and then enumerate the snapshot. Getting a snapshot is not trivial, because it involves two method calls (the Count getter and CopyTo), so explicit synchronization is required. Beware of the LINQ ToArray operator; it's not thread-safe by itself. Below is a safe ToArraySafe extension method for the SynchronizedCollection<T> class:
/// <summary>Copies the elements of the collection to a new array.</summary>
public static T[] ToArraySafe<T>(this SynchronizedCollection<T> source)
{
    ArgumentNullException.ThrowIfNull(source);
    lock (source.SyncRoot)
    {
        T[] array = new T[source.Count];
        source.CopyTo(array, 0);
        return array;
    }
}