Updating a concurrent collection - C#

I am working on a multi-thread application, where I load data from external feeds and store them in internal collections.
These collections are updated once per X minutes, by loading all data from the external feeds again.
There is no other adding/removing from these collections, just reading.
Normally I would use locking during the update, the same as everywhere else I access the collections.
Question:
Do the concurrent collections make my life easier in this case?
Basically I see two approaches
Load the data from the external feed and then remove the items which are not present anymore, add the missing ones, and update the changed ones. I guess this is a good solution with the help of a concurrent collection (no locking required, right?), but it requires too much code on my side.
Simply overwrite the old collection object with a new one (e.g. _data = new ConcurrentBag<T>(newData)). Here I am quite sure that using the concurrent collections has no advantage at all, am I right? A locking mechanism is required.
Is there an out-of-the-box solution I can use with the concurrent collections? I would not like to reinvent the wheel.

Yes: for concurrent collections the locking mechanism is built into the collection itself, so if you new up a collection in place of the old one, that just defeats the purpose. They are mostly used in producer-consumer situations, usually in combination with a BlockingCollection<T>. If your producer does more than just add data, things get a bit more complicated.
The benefit of not using concurrent collections is that your locking mechanism no longer depends on the collection: you can have a separate synchronization object that you lock on, and inside the critical section you are free to assign another instance, as you wanted.
To answer your question - I don't know of any out-of-the-box mechanism to do what you want, but I wouldn't call using a simple lock statement "reinventing the wheel". That's a bit like saying that using for loops is reinventing the wheel. Just have a separate synchronization object alongside your non-concurrent collection.
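The "separate synchronization object plus reference swap" advice can be sketched as follows. This is a minimal sketch, not the asker's actual code; FeedCache, Refresh, and Snapshot are invented names:

```csharp
using System;
using System.Collections.Generic;

// Sketch: a feed cache that swaps in a freshly built list under a private lock.
public class FeedCache
{
    private readonly object _sync = new object();
    private List<string> _data = new List<string>();

    // Called every X minutes by whatever refreshes the feed.
    public void Refresh(IEnumerable<string> freshItems)
    {
        var newData = new List<string>(freshItems); // build outside the lock
        lock (_sync)
        {
            _data = newData; // the swap is the only work done under the lock
        }
    }

    // Readers hold the lock only long enough to grab the current reference.
    public List<string> Snapshot()
    {
        lock (_sync)
        {
            return _data;
        }
    }
}
```

Because the lock only protects the reference assignment and read, contention is minimal; readers who obtained the old list keep iterating it safely, since it is never mutated after publication.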

Related

Dictionary vs Concurrent Dictionary

I am trying to understand when to use Dictionary vs ConcurrentDictionary because of an issue I had with one of the changes I made to a Dictionary.
I had this Dictionary
private static Dictionary<string, Side> _strategySides = null;
In the constructor, I am adding some keys and values to the Dictionary I created like this
_strategySides.Add("Combination", Side.Combo);
_strategySides.Add("Collar", Side.Collar);
This code was fine and had been running in all environments for a while now. When I added
_strategySides.Add("Diagonal", Side.Diagonal);
This code started to break with the exception “Index was outside the bounds of the array.” on the dictionary. Then I got into the concept of ConcurrentDictionary and its uses, and that I needed to choose ConcurrentDictionary over Dictionary in my case, since it's a multi-threaded application.
So my question to all you gurus is: why didn't it throw an exception all these days, and why did it start when I added something to the dictionary? Any knowledge on this will be appreciated.
As you mentioned, you have a multi-threaded application. Dictionary is not thread-safe, and somewhere in your code you are reading the dictionary at the same time as an item is being added to it, hence the "Index was outside the bounds of the array" exception.
This is mentioned in documentation:
A Dictionary can support multiple readers concurrently, as long as the collection is not modified. Even so, enumerating through a collection is intrinsically not a thread-safe procedure. In the rare case where an enumeration contends with write accesses, the collection must be locked during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization. For a thread-safe alternative, see ConcurrentDictionary.
Check out the answer to this question: c# Dictionary lookup throws "Index was outside the bounds of the array"
It seems as though receiving this error on a dictionary is specific to a thread-safety violation. The linked answer provides two ways to deal with the issue, one of which is ConcurrentDictionary.
If I had to guess why it didn't happen before: you were adding the entries in the constructor of a static object, which means only one writer and no readers yet.
Your new entry is probably being added outside the constructor; another thread could be reading while this write is attempted, and that is not allowed.
Dictionary is not thread-safe, and if you modify it while being accessed from multiple threads, all kinds of weird stuff can happen, including appearing to "work"... until it doesn't. Either protect it with a lock, or use the data structure that was specifically designed for multi-threaded use (i.e. ConcurrentDictionary).
So why did it "work" - that's very difficult to know definitively, but my bet would be on either simply not seeing the problem (i.e. the internal dictionary state was corrupted but you didn't notice it due to your usage patterns), or simply being "lucky" on execution timings (e.g. you could have inadvertently "synchronized" the threads through the debugger).
The point is: whatever it was, you cannot rely on it! You have to do the "right thing" even if the "wrong thing" appears to "work". That is the nature of multi-threaded programming.
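A minimal sketch of the "right thing" here: the strategy map rebuilt on ConcurrentDictionary, whose TryAdd and TryGetValue calls are safe against concurrent readers. The Side enum is stubbed out to keep the example self-contained; the real one comes from the asker's code:

```csharp
using System;
using System.Collections.Concurrent;

// Stub standing in for the asker's real enum.
public enum Side { Combo, Collar, Diagonal }

public static class StrategySides
{
    private static readonly ConcurrentDictionary<string, Side> _strategySides =
        new ConcurrentDictionary<string, Side>();

    public static void Init()
    {
        // TryAdd never corrupts state under concurrent access;
        // it simply reports whether the key was added.
        _strategySides.TryAdd("Combination", Side.Combo);
        _strategySides.TryAdd("Collar", Side.Collar);
        _strategySides.TryAdd("Diagonal", Side.Diagonal);
    }

    // Safe to call from any thread, even while another thread is adding.
    public static bool TryGet(string name, out Side side) =>
        _strategySides.TryGetValue(name, out side);
}
```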

How should I share a large read-only List<T> with each Task.Factory.StartNew() method

Consider that I have a custom class called Terms and that class contains a number of strings properties. Then I create a fairly large (say 50,000) List<Terms> object. This List<Terms> only needs to be read from but it needs to be read from by multiple instances of Task.Factory.StartNew (the number of instances could vary from 1 to 100s).
How would I best pass that list into the long running task? Memory isn't too much of a concern as this is a custom application for a specific use on a specific server with plenty of memory. Should I reference it or should I just pass it off as a normal argument into the method doing the work?
Since you're passing a reference it doesn't really matter how you pass it, it won't copy the list itself. As Ket Smith said, I would pass it as a parameter to the method you are executing.
The issue is List<T> is not entirely thread-safe. Reads by multiple threads are safe but a write can cause some issues:
It is safe to perform multiple read operations on a List, but issues can occur if the collection is modified while it’s being read. To ensure thread safety, lock the collection during a read or write operation.
From List<T>
You say your list is read-only so that may be a non-issue, but a single unpredictable change could lead to unexpected behavior and so it's bug-prone.
I recommend using ImmutableList<T> which is inherently thread-safe since it's immutable.
So long as you don't try to copy it into each separate task, it shouldn't make much difference: more a matter of coding style than anything else. Each task will still be working with the same list in memory: just a different reference to the same underlying list.
That said, sheerly as a matter of coding style and maintainability, I'd probably try to pass it in as a parameter to whatever method you're executing in your Task.Factory.StartNew() (or better yet, Task.Run() - see here). That way, you've clearly called out your task's dependencies, and if you decide that you need to get the list from some other place, it's more clear what you've got to change. (But you could probably find 20 places in my own code where I haven't followed that rule: sometimes I go with what's easier for me now than with what's likely to be easier for the me six months from now.)
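The combined advice (freeze the list, pass it as a parameter) can be sketched like this. The Terms stub, CountMatching helper, and SharedListDemo name are invented for the example; ImmutableList<T> lives in System.Collections.Immutable:

```csharp
using System;
using System.Collections.Immutable;
using System.Linq;
using System.Threading.Tasks;

// Stub standing in for the asker's custom class.
public class Terms { public string Name { get; set; } }

public static class SharedListDemo
{
    // The task's dependency on the list is explicit in the signature.
    public static int CountMatching(ImmutableList<Terms> terms, string prefix) =>
        terms.Count(t => t.Name.StartsWith(prefix));

    public static async Task<int[]> RunAsync()
    {
        // Build once; after this, no thread can mutate the list.
        ImmutableList<Terms> shared = Enumerable.Range(0, 50000)
            .Select(i => new Terms { Name = "term" + i })
            .ToImmutableList();

        // Each task gets the same reference; the 50,000 items are never copied.
        var tasks = Enumerable.Range(1, 4)
            .Select(n => Task.Run(() => CountMatching(shared, "term" + n)))
            .ToArray();

        return await Task.WhenAll(tasks);
    }
}
```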

Multiple writers, one reader, which collection

My situation is this:
Multiple threads must write concurrently to the same collection (Add and AddRange). The order of items is not an issue.
When all threads have completed (Join) and I'm back on my main thread, I need to read all the collected data quickly in a foreach style, where no actual locking is needed since all threads are done.
In the "old days" I would probably use a reader-writer lock on a List<T>, but with the new concurrent collections I wonder if there is a better alternative. I just can't figure out which, as most concurrent collections seem to assume that the reader is also on a concurrent thread.
I don't believe you want to use any of the collections in System.Collections.Concurrent. These generally have extra overhead to allow for concurrent reading.
Unless you have a lot of contention, you are probably better taking a lock on a simple List<T> and adding to it. You will have a small amount of overhead as the List resizes, but it will be fairly infrequent.
However, what I would probably do in this case is simply adding to a List<T> per thread rather than a shared one, and either merge them at the end of processing, or simply iterate over all elements in each of the collections.
You could possibly use a ConcurrentBag and then call .ToArray() or GetEnumerator() on it when ready to read (bypassing a per-read penalty), but you may find that the speed of insertions is a bit slower than your manual write lock on a simple List. It really depends on the amount of contention. The ConcurrentBag is pretty good about partitioning, but as you noted, it is geared to concurrent reads and writes.
As always, benchmark your particular situation! Multithreading performance is highly dependent on many things in actual usage, and things like type of data, number of insertions, and such will change the results dramatically - a handful of reality is worth a gallon of theory.
Order of items is not an issue. When all threads have completed (join) and I'm back on my main thread, then I need to read all the collected data
You have not stated a requirement for a thread-safe collection at all. There's no point in sharing a single collection since you never read at the same time you write. Nor does it matter that all writing happens to the same collection since order doesn't matter. Nor should it matter since order would be random anyway.
So just give each thread its own collection to fill, no locking required. And iterate them one by one afterwards, no locking required.
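A sketch of that per-thread-collection approach, with names invented for the example: each worker fills its own List<T> with no locking at all, and the main thread merges the lists once every worker has finished:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class PerThreadLists
{
    public static List<int> CollectAll(int workers, int itemsPerWorker)
    {
        var perWorker = new List<int>[workers]; // one slot per worker, no sharing

        Parallel.For(0, workers, w =>
        {
            var local = new List<int>();           // private to this worker
            for (int i = 0; i < itemsPerWorker; i++)
                local.Add(w * itemsPerWorker + i); // no lock needed
            perWorker[w] = local;                  // each worker writes its own slot
        });

        // Parallel.For has joined all workers; plain single-threaded merge.
        return perWorker.SelectMany(list => list).ToList();
    }
}
```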
Try the System.Collections.Concurrent.ConcurrentBag.
From the collection's description:
Represents a thread-safe, unordered collection of objects.
I believe this meets your criteria of handling multiple threads and order of items not being important, and later when you are back in the main thread, you can quickly foreach iterate over the collection and act on each item.
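A minimal sketch of that pattern (BagDemo and FillAndCount are invented names): several tasks Add into a ConcurrentBag<T>, and after Task.WaitAll the main thread enumerates the bag single-threaded:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class BagDemo
{
    public static int FillAndCount(int writers, int itemsEach)
    {
        var bag = new ConcurrentBag<int>();

        var tasks = new Task[writers];
        for (int w = 0; w < writers; w++)
        {
            int id = w; // capture a stable copy of the loop variable
            tasks[w] = Task.Run(() =>
            {
                for (int i = 0; i < itemsEach; i++)
                    bag.Add(id * itemsEach + i); // thread-safe, unordered
            });
        }
        Task.WaitAll(tasks); // all writers are done from here on

        int count = 0;
        foreach (var item in bag) // back on the main thread: plain foreach
            count++;
        return count;
    }
}
```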

C# lock with LINQ query

Is it necessary to lock LINQ statements as follows? If the lock is omitted, will any exceptions be encountered when multiple threads execute the query concurrently?
lock (syncKey)
{
    return (from keyValue in dictionary
            where keyValue.Key > versionNumber
            select keyValue.Value).ToList();
}
PS: Writer threads do exist to mutate the dictionary.
Most types are thread-safe to read, but not thread-safe during mutation.
If none of the threads is changing the dictionary, then you don't need to do anything - just read away.
If, however, one of the threads is changing it, then you have problems and need to synchronize. The simplest approach is a lock; however, this prevents concurrent readers even when there is no writer. If there is a good chance you will have more readers than writers, consider using a ReaderWriterLockSlim to synchronize: this will allow any number of readers (with no writer), or one writer.
In .NET 4.0 you might also consider a ConcurrentDictionary<,>
So long as the query has no side effects (such as any of the expressions calling code that makes changes), there is no need to lock a LINQ statement.
Basically, if you don't modify the data (and nothing else is modifying the data you are using) then you don't need locks.
If you are using .NET 4.0, there is a ConcurrentDictionary that is thread-safe. Here is an example of using a concurrent dictionary (admittedly not in a LINQ statement).
UPDATE
If you are modifying data then you need to use locks. If two or more threads attempt to access a locked section of code, there will be a small performance loss as one or more of the threads waits for the lock to be released. NOTE: If you over-lock then you may end up with worse performance than you would if you had just built the code using a sequential algorithm from the start.
If you are only ever reading data then you don't need locks as there is no mutable shared state to protect.
If you do not use locks then you may end up with intermittent bugs where the data is not quite right, or exceptions are thrown when collisions occur between readers and writers. In my experience, most of the time you never get an exception; you just get corrupt data (except you don't necessarily know it is corrupt). Here is another example showing how data can be corrupted if you don't use locks or redesign your algorithm to cope.
You often get the best out of a system if you consider the constraints of developing in a parallel system from the outset. Sometimes you can re-write your code so it uses no shared data. Sometime you can split the data up into chunks and have each thread/task work on its own chunk then have some process at the end stitch it all back together again.
If your dictionary is static and the method where you run the query is not (or in other concurrent-access scenarios), and the dictionary can be modified from another thread, then yes, a lock is required; otherwise it is not.
Yes, you need to lock your shared resources when using LINQ in multi-threaded scenarios (EDIT: of course, only if your source collection is being modified, as Marc said; if you are only reading it, you don't need to worry). If you are using .NET 4 or the Parallel Extensions for 3.5, you could look at replacing your Dictionary with a ConcurrentDictionary (or use some other custom implementation anyway).
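The ReaderWriterLockSlim approach suggested above could look like this around the question's query. This is a sketch; VersionedStore, Put, and NewerThan are illustrative names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Many readers may run NewerThan concurrently; Put takes the lock exclusively.
public class VersionedStore
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<int, string> _dictionary = new Dictionary<int, string>();

    public void Put(int version, string value)
    {
        _lock.EnterWriteLock();
        try { _dictionary[version] = value; }
        finally { _lock.ExitWriteLock(); }
    }

    public List<string> NewerThan(int versionNumber)
    {
        _lock.EnterReadLock();
        try
        {
            // ToList() materializes the result while the lock is still held;
            // returning a lazy query would defer enumeration past the lock.
            return (from keyValue in _dictionary
                    where keyValue.Key > versionNumber
                    select keyValue.Value).ToList();
        }
        finally { _lock.ExitReadLock(); }
    }
}
```

The ToList() inside the lock matters: without it, the deferred query would enumerate the dictionary after the lock is released, reintroducing the race.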

Multithreaded access to memory

Good morning,
Say I have six different threads, and I want to share the same data with each of them at the same time. Can I make a class variable with the data I want to share and have each thread access that memory concurrently without degrading performance, or is it preferable to pass a true copy of the data to each thread?
Thank you very much.
It depends entirely on the data;
if the data is immutable (or mutable but you don't actually mutate it), then chuck all the threads at it - great
if you need to mutate it, but no two threads will ever depend on the data mutated by another - great
if you need to mutate it, and there are conflicts but you can sensibly synchronize access to the data such that there is no risk of two threads deadlocking etc - great, but not always trivial
if it is not safe to make any assumptions, then a true clone of the data is the safest approach, but has the most overhead in terms of data duplication; if the data is cheap to copy, this may be fine - and indeed may outperform synchronization
if the threads do co-depend on each other, then you have no option other than to figure out some kind of sensible locking strategy; again, to stress: deadlocks are a problem here. Some ideas:
always provide a timeout when obtaining a lock
if you need to lock two items, it may help to try locking both eagerly (rather than locking one at the start and the other after you've done lots of changes); then you can simply release and re-take the locks, without having to either undo changes or put the changes back into a particular state
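The "always provide a timeout when obtaining a lock" idea can be sketched with Monitor.TryEnter, which is the timeout-aware form of the lock statement. TimedLock and TryUpdate are invented names for the example:

```csharp
using System;
using System.Threading;

public static class TimedLock
{
    private static readonly object _sync = new object();

    // Returns false instead of blocking forever if the lock cannot be taken
    // within the timeout, so a deadlocked peer cannot wedge this thread.
    public static bool TryUpdate(Action update, TimeSpan timeout)
    {
        bool taken = false;
        try
        {
            Monitor.TryEnter(_sync, timeout, ref taken);
            if (!taken)
                return false; // caller decides: retry, log, or give up
            update();
            return true;
        }
        finally
        {
            if (taken)
                Monitor.Exit(_sync);
        }
    }
}
```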
