I am trying to understand when to use Dictionary vs ConcurrentDictionary because of an issue I had with one of the changes I made to a Dictionary.
I had this Dictionary
private static Dictionary<string, Side> _strategySides = null;
In the constructor, I am adding some keys and values to the Dictionary I created like this
_strategySides.Add("Combination", Side.Combo);
_strategySides.Add("Collar", Side.Collar);
This code was fine and had been running in all environments for a while now. When I added
_strategySides.Add("Diagonal", Side.Diagonal);
This code started to break with “Index was outside the bounds of the array” exceptions on the dictionary. Then I got into the concept of ConcurrentDictionary and its uses, and learned that I needed to choose ConcurrentDictionary over Dictionary in my case, since it's a multi-threaded application.
So my question to all you gurus is: why didn't it throw an exception all this time, and why did it start when I added one more entry to the dictionary? Any knowledge on this will be appreciated.
As you mentioned, you have a multi-threaded application. Dictionary is not thread-safe, and somewhere in your code you are reading the dictionary at the same time another thread is adding an item to it -> IndexOutOfRangeException.
This is mentioned in documentation:
A Dictionary can support multiple readers concurrently, as long as the
collection is not modified. Even so, enumerating through a collection
is intrinsically not a thread-safe procedure. In the rare case where
an enumeration contends with write accesses, the collection must be
locked during the entire enumeration. To allow the collection to be
accessed by multiple threads for reading and writing, you must
implement your own synchronization. For a thread-safe alternative, see
ConcurrentDictionary.
Check out the answer to this question: c# Dictionary lookup throws "Index was outside the bounds of the array"
It seems as though receiving this error on a dictionary is specific to a thread-safety violation. The linked answer provides two ways to deal with the issue; one is ConcurrentDictionary.
If I had to guess why it didn't happen before: you are adding the entries in the constructor of a static object, which means only one writer and no readers yet.
Your new entry is probably being added outside the constructor. Another thread could be reading while this write is being attempted, and that is not allowed.
Dictionary is not thread-safe, and if you modify it while being accessed from multiple threads, all kinds of weird stuff can happen, including appearing to "work"... until it doesn't. Either protect it with a lock, or use the data structure that was specifically designed for multi-threaded use (i.e. ConcurrentDictionary).
So why did it "work" - that's very difficult to know definitively, but my bet would be on either simply not seeing the problem (i.e. the internal dictionary state was corrupted but you didn't notice it due to your usage patterns), or simply being "lucky" on execution timings (e.g. you could have inadvertently "synchronized" the threads through the debugger).
The point is: whatever it was, you cannot rely on it! You have to do the "right thing" even if the "wrong thing" appears to "work". That is the nature of multi-threaded programming.
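As a minimal sketch of the ConcurrentDictionary fix (the Side enum and member names are stand-ins for the ones in the question), the static dictionary can be replaced so that concurrent reads and a late-added write cannot corrupt internal state:

```csharp
using System.Collections.Concurrent;

// Hypothetical stand-in for the Side enum from the question.
enum Side { Combo, Collar, Diagonal }

static class StrategySides
{
    // Thread-safe replacement for the static Dictionary<string, Side>.
    private static readonly ConcurrentDictionary<string, Side> _strategySides =
        new ConcurrentDictionary<string, Side>();

    static StrategySides()
    {
        // TryAdd is safe even if other threads are already reading.
        _strategySides.TryAdd("Combination", Side.Combo);
        _strategySides.TryAdd("Collar", Side.Collar);
        _strategySides.TryAdd("Diagonal", Side.Diagonal);
    }

    // Lock-free read; returns false instead of throwing for unknown keys.
    public static bool TryGetSide(string name, out Side side) =>
        _strategySides.TryGetValue(name, out side);
}
```

Alternatively, keeping the plain Dictionary and wrapping every read and write in `lock (someSyncObject) { ... }` works too; ConcurrentDictionary just moves that burden into the collection itself.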
Related
I am working on a multi-thread application, where I load data from external feeds and store them in internal collections.
These collections are updated once per X minutes, by loading all data from the external feeds again.
There is no other adding/removing from these collection, just reading.
Normally I would use locking during the updating, same as everywhere I am accessing the collections.
Question:
Do the concurrent collections make my life easier in this case?
Basically I see two approaches
Load the data from the external feed, then remove the items that are no longer present, add the missing ones, and update the changed ones - I guess this is a good solution with the help of a concurrent collection (no locking required, right?), but it requires too much code on my side.
Simply replace the old collection object with a new one (e.g. _data = new ConcurrentBag(newData)). Here I am quite sure that using the concurrent collections has no advantage at all, am I right? A locking mechanism is required.
Is there an out-of-the-box solution I can use with the concurrent collections? I would not like to reinvent the wheel again.
Yes - for concurrent collections, the locking mechanism is stored inside the collection, so if you new up a collection in place of the old one, that just defeats the purpose. They are mostly used in producer-consumer situations, usually in combination with a BlockingCollection<T>. If your producer does more than just add data, it makes things a bit more complicated.
The benefit to not using concurrent collections is that your locking mechanism no longer depends on the collection - you can have a separate synchronization object that you lock on, and inside the critical section you're free to assign another instance like you wanted.
To answer your question - I don't know of any out-of-the-box mechanism to do what you want, but I wouldn't call using a simple lock statement "reinventing the wheel". That's a bit like saying that using for loops is reinventing the wheel. Just have a separate synchronization object alongside your non-concurrent collection.
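A minimal sketch of that advice, with illustrative names: the cache keeps a dedicated synchronization object, builds the replacement collection outside the critical section, and only swaps the reference inside it:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of option 2 from the question: rebuild the collection on each
// refresh and swap the reference under a separate lock object.
class FeedCache
{
    private readonly object _sync = new object();
    private List<string> _data = new List<string>();

    // Called every X minutes by the updater thread.
    public void Refresh(IEnumerable<string> freshItems)
    {
        var rebuilt = freshItems.ToList();   // expensive work done outside the lock
        lock (_sync)
        {
            _data = rebuilt;                 // the swap is the only locked step
        }
    }

    // Readers hold the lock just long enough to grab the current reference.
    public List<string> Snapshot()
    {
        lock (_sync) { return _data; }
    }
}
```

Readers must treat the returned list as read-only; since each refresh installs a brand-new list, a reader that grabbed the old reference can safely keep enumerating it.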
Consider that I have a custom class called Terms and that class contains a number of strings properties. Then I create a fairly large (say 50,000) List<Terms> object. This List<Terms> only needs to be read from but it needs to be read from by multiple instances of Task.Factory.StartNew (the number of instances could vary from 1 to 100s).
How would I best pass that list into the long running task? Memory isn't too much of a concern as this is a custom application for a specific use on a specific server with plenty of memory. Should I reference it or should I just pass it off as a normal argument into the method doing the work?
Since you're passing a reference it doesn't really matter how you pass it, it won't copy the list itself. As Ket Smith said, I would pass it as a parameter to the method you are executing.
The issue is that List<T> is not entirely thread-safe. Reads by multiple threads are safe, but a write can cause issues:
It is safe to perform multiple read operations on a List, but issues can occur if the collection is modified while it’s being read. To ensure thread safety, lock the collection during a read or write operation.
From List<T>
You say your list is read-only so that may be a non-issue, but a single unpredictable change could lead to unexpected behavior and so it's bug-prone.
I recommend using ImmutableList<T> which is inherently thread-safe since it's immutable.
So long as you don't try to copy it into each separate task, it shouldn't make much difference: more a matter of coding style than anything else. Each task will still be working with the same list in memory: just a different reference to the same underlying list.
That said, purely as a matter of coding style and maintainability, I'd probably try to pass it in as a parameter to whatever method you're executing in your Task.Factory.StartNew() (or better yet, Task.Run() - see here). That way, you've clearly called out your task's dependencies, and if you decide that you need to get the list from some other place, it's clearer what you've got to change. (But you could probably find 20 places in my own code where I haven't followed that rule: sometimes I go with what's easier for me now than with what's likely to be easier for the me six months from now.)
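A small sketch of passing the shared list as a parameter, assuming a Terms class similar to the one described. Only the reference is captured into each task, not the 50,000 items:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical stand-in for the Terms class from the question.
class Terms { public string Name; }

static class Worker
{
    // Each task reads the shared list through its own reference; nothing is copied.
    public static int CountMatches(IReadOnlyList<Terms> terms, string needle)
    {
        int count = 0;
        foreach (var t in terms)
            if (t.Name == needle) count++;
        return count;
    }

    public static async Task<int[]> RunAll(IReadOnlyList<Terms> shared, string[] needles)
    {
        var tasks = new List<Task<int>>();
        foreach (var n in needles)
            tasks.Add(Task.Run(() => CountMatches(shared, n)));  // reference captured, list shared
        return await Task.WhenAll(tasks);
    }
}
```

Typing the parameter as IReadOnlyList<Terms> also documents (though it cannot fully enforce) that the tasks are readers only.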
Is it necessary to lock LINQ statements as follows? If the lock is omitted, will any exceptions be encountered when multiple threads execute it concurrently?
lock (syncKey)
{
return (from keyValue in dictionary
where keyValue.Key > versionNumber
select keyValue.Value).ToList();
}
PS: Writer threads do exist to mutate the dictionary.
Most types are thread-safe to read, but not thread-safe during mutation.
If none of the threads is changing the dictionary, then you don't need to do anything - just read away.
If, however, one of the threads is changing it, then you have problems and need to synchronize. The simplest approach is a lock, but this prevents concurrent readers even when there is no writer. If there is a good chance you will have more readers than writers, consider using a ReaderWriterLockSlim to synchronize - this will allow any number of readers (with no writer), or exactly one writer.
In .NET 4.0 you might also consider a ConcurrentDictionary<,>.
So long as the query has no side-effects (such as any of the expressions calling code that makes changes), then there is no need to lock a LINQ statement.
Basically, if you don't modify the data (and nothing else is modifying the data you are using) then you don't need locks.
If you are using .NET 4.0, there is a ConcurrentDictionary that is thread-safe. Here is an example of using a concurrent dictionary (admittedly not in a LINQ statement).
UPDATE
If you are modifying data then you need to use locks. If two or more threads attempt to access a locked section of code, there will be a small performance loss as one or more of the threads waits for the lock to be released. NOTE: If you over-lock then you may end up with worse performance than you would have had if you had just built the code using a sequential algorithm from the start.
If you are only ever reading data then you don't need locks as there is no mutable shared state to protect.
If you do not use locks then you may end up with intermittent bugs where the data is not quite right, or exceptions are thrown when collisions occur between readers and writers. In my experience, most of the time you never get an exception; you just get corrupt data (except you don't necessarily know it is corrupt). Here is another example showing how data can be corrupted if you don't use locks or redesign your algorithm to cope.
You often get the best out of a system if you consider the constraints of developing in a parallel system from the outset. Sometimes you can re-write your code so it uses no shared data. Sometime you can split the data up into chunks and have each thread/task work on its own chunk then have some process at the end stitch it all back together again.
If your dictionary is static and the method where you run the query is not (or in other concurrent-access scenarios), and the dictionary can be modified from another thread, then yes, the lock is required; otherwise it is not.
Yes, you need to lock your shared resources when using LINQ in multi-threaded scenarios (EDIT: of course, if your source collection is being modified, as Marc said; if you are only reading it, you don't need to worry about it). If you are using .NET 4 or the Parallel Extensions for 3.5, you could look at replacing your Dictionary with a ConcurrentDictionary (or use some other custom implementation anyway).
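For comparison, a sketch of the same query against a ConcurrentDictionary, which needs no lock statement. Enumeration is safe alongside writers, though it yields a moment-in-time view rather than a frozen, transactionally consistent snapshot:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Illustrative names; mirrors the versioned-dictionary query from the question.
class ConcurrentVersionedStore
{
    private readonly ConcurrentDictionary<int, string> _dictionary =
        new ConcurrentDictionary<int, string>();

    public void Add(int version, string value) => _dictionary[version] = value;

    // No lock: ConcurrentDictionary's enumerator may be used concurrently
    // with writers without throwing or corrupting state.
    public List<string> NewerThan(int versionNumber) =>
        (from keyValue in _dictionary
         where keyValue.Key > versionNumber
         select keyValue.Value).ToList();
}
```

Whether the moment-in-time semantics are acceptable depends on the application; if readers need a consistent snapshot of all versions, the lock-based variant is still the right tool.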
In the current implementation of CPython, there is an object known as the "GIL" or "Global Interpreter Lock". It is essentially a mutex that prevents two Python threads from executing Python code at the same time. This prevents two threads from being able to corrupt the state of the Python interpreter, but also prevents multiple threads from really executing together. Essentially, if I do this:
# Thread A
some_list.append(3)
# Thread B
some_list.append(4)
I can't corrupt the list, because at any given time only one of those threads is executing, since they must hold the GIL to do so. Now, the items in the list might be added in some indeterminate order, but the point is that the list isn't corrupted, and two things will always get added.
So, now to C#. C# essentially faces the same problem as Python, so, how does C# prevent this? I'd also be interested in hearing Java's story, if anyone knows it.
Clarification: I'm interested in what happens without explicit locking statements, especially to the VM. I am aware that locking primitives exist for both Java & C# - they exist in Python as well: The GIL is not used for multi-threaded code, other than to keep the interpreter sane. I am interested in the direct equivalent of the above, so, in C#, if I can remember enough... :-)
List<String> s;
// Reference to s is shared by two threads, which both execute this:
s.Add("hello");
// State of s?
// State of the VM? (And if sane, how so?)
Here's another example:
class A
{
public String s;
}
// Thread A & B
some_A.s = some_other_value;
// some_A's state must change: how does it change?
// Is the VM still in good shape afterwards?
I'm not looking to write bad C# code, I understand the lock statements. Even in Python, the GIL doesn't give you magic-multi-threaded code: you must still lock shared resources. But the GIL prevents Python's "VM" from being corrupted - it is this behavior that I'm interested in.
Most other languages that support threading don't have an equivalent of the Python GIL; they require you to use mutexes, either implicitly or explicitly.
Using lock, you would do this:
lock(some_list)
{
some_list.Add(3);
}
and in thread 2:
lock(some_list)
{
some_list.Add(4);
}
The lock statement ensures that the object inside the lock statement, some_list in this case, can only be accessed by a single thread at a time. See http://msdn.microsoft.com/en-us/library/c5kehkcz(VS.80).aspx for more information.
C# does not have an equivalent of Python's GIL.
Though they face the same issue, their design goals make them different.
With the GIL, CPython ensures that operations such as appending to a list from two threads are simple. That also means it allows only one thread to run at any time. This makes lists and dictionaries thread-safe. Though this makes the job simpler and intuitive, it makes it harder to exploit the multithreading advantage on multicores.
With no GIL, C# does the opposite. It places the burden of integrity on the developer of the program, but allows you to take advantage of running multiple threads simultaneously.
As per one of the discussions - the GIL in CPython is purely a design choice of having one big lock versus a lock per object and synchronisation to make sure that objects are kept in a coherent state. This consists of a trade-off: giving up the full power of multithreading. It has been observed that most problems do not suffer from this disadvantage, and there are libraries which help you solve this issue when required. That means that for a certain class of problems, the burden of utilizing the multicore is passed to the developer, so that the rest can enjoy the simpler, more intuitive approach.
Note: other implementations like IronPython do not have a GIL.
It may be instructive to look at the documentation for the Java equivalent of the class you're discussing:
Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more elements, or explicitly resizes the backing array; merely setting the value of an element is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the list. If no such object exists, the list should be "wrapped" using the Collections.synchronizedList method. This is best done at creation time, to prevent accidental unsynchronized access to the list:
List list = Collections.synchronizedList(new ArrayList(...));
The iterators returned by this class's iterator and listIterator methods are fail-fast: if the list is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove or add methods, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
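C# has the same fail-fast idea: List<T>'s enumerator detects structural modification and throws an InvalidOperationException ("Collection was modified; enumeration operation may not execute.") on the next MoveNext. A single-threaded sketch:

```csharp
using System;
using System.Collections.Generic;

class FailFastDemo
{
    // Returns true if the enumerator detected the mid-enumeration mutation.
    public static bool MutationIsDetected()
    {
        var list = new List<int> { 1, 2, 3 };
        try
        {
            foreach (var item in list)
            {
                list.Add(item);   // structural change while enumerating
            }
        }
        catch (InvalidOperationException)
        {
            return true;          // the List<T> enumerator is fail-fast
        }
        return false;
    }
}
```

As with Java's iterators, this detection is best-effort under unsynchronized concurrent modification and should only be relied on to surface bugs, never for correctness.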
Most complex data structures (for example, lists) can be corrupted when used without locking from multiple threads.
Since changes of references are atomic, a reference always stays a valid reference.
But there is a problem when interacting with security-critical code. So any data structures used by critical code must be one of the following:
Inaccessible from untrusted code, and locked/used correctly by trusted code
Immutable (the String class)
Copied before use (value-type parameters)
Written in trusted code and using internal locking to guarantee a safe state
For example, critical code cannot trust a list accessible from untrusted code. If it gets passed a List, it has to create a private copy, do its precondition checks on the copy, and then operate on the copy.
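A sketch of that defensive-copy pattern (the method and names are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Trusted code copies the caller's collection before validating, so the
// caller cannot mutate it between the check and the use (a time-of-check /
// time-of-use race).
static class TrustedApi
{
    public static int SumOfPositives(IEnumerable<int> untrustedInput)
    {
        // 1. Private copy - later mutations by the caller are invisible here.
        var copy = untrustedInput.ToList();

        // 2. Precondition checks run against the copy.
        if (copy.Any(x => x < 0))
            throw new ArgumentException("negative values not allowed");

        // 3. Operate on the copy only.
        return copy.Sum();
    }
}
```

If the code validated and then summed the caller's list directly, a hostile caller could pass the check with valid data and swap in negative values from another thread before the sum ran.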
I'm going to take a wild guess at what the question really means...
In Python, data structures in the interpreter could get corrupted because Python uses a form of reference counting.
Both C# and Java use garbage collection and in fact they do use a global lock when doing a full heap collection.
Data can be marked and moved between "generations" without a lock. But to actually clean it all up, everything must come to a stop. Hopefully a very short stop, but a full stop.
Here is an interesting link on CLR garbage collection as of 2007:
http://vineetgupta.spaces.live.com/blog/cns!8DE4BDC896BEE1AD!1104.entry
From the MSDN documentation:
"Synchronized supports multiple writing threads, provided that no threads are reading the Hashtable. The synchronized wrapper does not provide thread-safe access in the case of one or more readers and one or more writers."
Source:
http://msdn.microsoft.com/en-us/library/system.collections.hashtable.synchronized.aspx
It sounds like I still have to use locks anyways, so my question is why would we use Hashtable.Synchronized at all?
For the same reason there are different levels of DB transaction. You may care that writes are guaranteed, but not mind reading stale/possibly bad data.
EDIT: I note that their specific example is an Enumerator. They can't handle this case in their wrapper, because if you break out of the enumeration early, the wrapper class would have no way to know that it can release its lock.
Think instead of the case of a counter. Multiple threads can increment a value in the table, and you want to display the value of the count. It doesn't matter if you display 1,200,453 and the count is actually 1,200,454 - you just need it close. However, you don't want the data to be corrupt. This is a case where thread-safety is important for writes, but not reads.
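A sketch of that counter scenario (names are illustrative): the synchronized wrapper keeps each individual Hashtable operation safe, but the read-modify-write increment spans two operations and still needs a lock on SyncRoot, while a display-only read can tolerate staleness:

```csharp
using System.Collections;

class HitCounter
{
    private readonly Hashtable _counts = Hashtable.Synchronized(new Hashtable());

    public void Hit(string page)
    {
        // The wrapper serializes each single operation, but read-then-write
        // is two operations, so a lock is needed to make the increment atomic.
        lock (_counts.SyncRoot)
        {
            object current = _counts[page];
            _counts[page] = current == null ? 1 : (int)current + 1;
        }
    }

    // Display-only read: may observe a value that is a moment out of date,
    // which is acceptable for the counter scenario described above.
    public int Peek(string page)
    {
        object current = _counts[page];
        return current == null ? 0 : (int)current;
    }
}
```

This is exactly the trade-off in the quoted documentation: the wrapper protects the table's internal structure against writer corruption, but compound operations and reader consistency remain the caller's problem.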
For the case where you can guarantee that no reader will access the data structure while writing to it (or where you don't care about reading wrong data). For example, where the structure is not continually modified, but is a one-time calculation that you'll later have to access, although huge enough to warrant many threads writing to it.
You would need it when you are foreach-ing over a Hashtable on one thread (reads) and there exist other threads that may add/remove items to/from it (writes).