Is MemoryCache.Set() thread-safe? - c#

The MSDN documentation for MemoryCache.Set unfortunately doesn’t state explicitly whether it is thread-safe or not.
Is it safe to use .Get() and .Set() from several threads without an explicit lock?

Yes, the MemoryCache class is thread safe:
System.Runtime.Caching.MemoryCache is thread safe. Multiple concurrent threads can read and write a MemoryCache instance. Internally, thread safety is handled automatically to ensure the cache is updated in a consistent manner.
What this might be referring to is that data stored within the cache may itself not be thread safe. For example, if a List is placed in the cache and two separate threads both get a reference to the cached List, the two threads will end up stepping on each other if they both attempt to update the list simultaneously.
That being said, the Get and Set methods are thread safe, but if the data structure you store in this cache is not thread safe, you might get into trouble. Imagine, for example, that you stored a dictionary in this cache. Then, while thread1 uses Get to fetch the dictionary and starts reading from it, thread2 uses Get to fetch the same dictionary and tries to write to it. While the Get operation itself is thread safe, what happens next could be pretty nasty.
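For illustration, here is a minimal sketch (keys, types, and values are made up for the example) showing that concurrent Set/Get calls on MemoryCache need no external lock, while a mutable value stored in the cache should itself be a thread-safe type such as ConcurrentDictionary:

using System;
using System.Collections.Concurrent;
using System.Runtime.Caching;
using System.Threading.Tasks;

class CacheDemo
{
    static void Main()
    {
        var cache = MemoryCache.Default;

        // Set and Get can be called from many threads at once without locking.
        Parallel.For(0, 100, i =>
        {
            cache.Set("item-" + i, i, DateTimeOffset.UtcNow.AddMinutes(5));
            object value = cache.Get("item-" + (i / 2)); // may be null if not set yet
        });

        // The cached value itself is another matter: if several threads will
        // mutate it, store a thread-safe type rather than a plain Dictionary or List.
        cache.Set("shared", new ConcurrentDictionary<string, int>(), DateTimeOffset.UtcNow.AddMinutes(5));
        Parallel.For(0, 100, _ =>
        {
            var shared = (ConcurrentDictionary<string, int>)cache.Get("shared");
            shared.AddOrUpdate("hits", 1, (key, current) => current + 1);
        });
    }
}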

The documentation for MemoryCache states:
This type is thread safe.

Related

How do bulk actions affect a ConcurrentDictionary?

I'm confused about how Concurrent Dictionaries lock their resources.
For example, if I run a method that iterates over every item in the Dictionary and edits its value in a thread and I try to read the value of a key from another thread:
Will the second thread be accessing a snapshot of the Dictionary?
If not, will it access the updated entry if that one has already been updated?
ConcurrentDictionary is a thread-safe collection that can be accessed by multiple threads concurrently. Read operations on the dictionary are performed in a lock-free manner, whereas writes are protected with a lock. For implementation details, please check ConcurrentDictionary.
The second thread will be accessing the updated dictionary. All a ConcurrentDictionary does is take a lock on the collection for add or remove operations. Read operations are unaffected.
Source
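A small sketch of that behavior (keys and values here are arbitrary): a reader sees either the old or the updated value for a key, but never a snapshot of the whole dictionary and never a torn entry.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentDictionaryDemo
{
    static void Main()
    {
        var map = new ConcurrentDictionary<int, int>();
        for (int i = 0; i < 1000; i++) map[i] = 0;

        // Writer: walks every key and updates its value in place.
        var writer = Task.Run(() =>
        {
            foreach (int key in map.Keys)
                map[key] = key * 2;
        });

        // Reader: runs concurrently; for any given key it observes either
        // the old value (0) or the new one, depending on timing.
        var reader = Task.Run(() =>
        {
            map.TryGetValue(500, out int value);
            Console.WriteLine(value); // prints 0 or 1000
        });

        Task.WaitAll(writer, reader);
    }
}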

C# Threading - Using a class in a thread-safe way vs. implementing it as thread-safe

Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
Does this mean that I am using the class in a thread-safe way, and that the fact that the documentation states it's not thread-safe is no longer relevant to my situation?
If the answer is No: Can I do everything related to a specific object in the same thread, i.e., creating it and calling its members always in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app.)
No, it is not thread safe. As a general rule, you should never write multi-threaded code without some kind of synchronization. In your first example, even if you somehow manage to ensure that modifying and reading are never done at the same time, there is still the problem of cached values and instruction reordering.
For example, the CPU caches a value in a register: you update it on one thread and read it from another. If the second thread has the value cached, it doesn't go to RAM to fetch it and doesn't see the updated value.
Take a look at this great post for more info on the problems with writing lock-free multi-threaded code: link. It has a great explanation of how the CPU, the compiler, and the CLI byte code compiler can reorder instructions.
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe).
"Thread-safe" has a number of different meanings. Most objects fall into one of three categories:
Thread-affine. These objects can only be accessed from a single thread, never from another thread. Most UI components fall into this category.
Thread-safe. These objects can be accessed from any thread at any time. Most synchronization objects (including concurrent collections) fall into this category.
One-at-a-time. These objects can be accessed from one thread at a time. This is the "default" category, with most .NET types falling into this category.
Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
As another answerer noted, you have to take into consideration instruction reordering and cached reads. In other words, it's not sufficient to just do these at different times; you'll need to implement proper barriers to ensure it is guaranteed to work correctly.
The easiest way to do this is to protect all access of the object with a lock statement. If all reads, writes, and method calls are all within the same lock, then this would work (assuming the object does have a one-at-a-time kind of threading model and not thread-affine).
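As a rough sketch of that approach (the wrapper class and its members are invented for the example), every read, write, and method call on the non-thread-safe object goes through the same private lock:

using System.Text;

// Serializes all access to a non-thread-safe object (here a StringBuilder)
// behind a single private lock object.
class SafeBuilder
{
    private readonly object _gate = new object();
    private readonly StringBuilder _inner = new StringBuilder();

    public void Append(string text)
    {
        lock (_gate) { _inner.Append(text); }
    }

    public int Length
    {
        get { lock (_gate) { return _inner.Length; } }
    }

    public override string ToString()
    {
        lock (_gate) { return _inner.ToString(); }
    }
}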
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
All classes are non-thread-safe by default, except for a few collections, such as the concurrent collections, that are designed specifically for thread safety. So for any other class that you access from multiple threads, or in a non-atomic manner, whether reading or writing, it's imperative to introduce thread safety when changing the state of an object. This only applies to objects whose state can be modified in a multi-threaded environment; methods as such are just functional implementations, they are not themselves state that can be modified, and any synchronization they contain exists only to maintain the object's state.
Is this means that I use the class in a thread-safe way, and the fact that the documentation state that it's not thread-safe is no longer relevant to my situation? If the answer is No: Can I do everything related to a class in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app).
For a UI application, consider introducing async-await for IO-bound operations, like file or database reads, and use the TPL for compute-bound operations. The benefits of async-await are:
It doesn't block the UI thread at all and keeps the UI completely responsive; in fact, after the await, UI controls can be updated directly with no cross-thread concern, since only one thread is involved.
TPL concurrency, by contrast, makes compute operations blocking; it summons threads from the thread pool, and those threads can't be used to update the UI due to cross-thread concerns.
And last: there are classes in which one method starts an operation, and another one ends it. For example, using the SpeechRecognitionEngine class you can start a speech recognition session with RecognizeAsync (this method was before the TPL library so it does not return a Task), and then cancel the recognition session with RecognizeAsyncCancel. What if I call RecognizeAsync from one thread and RecognizeAsyncCancel from another one? (It works, but is it "safe"? Will it fail on some conditions which I'm not aware of?)
Since you mention the Async method, this might be an older implementation based on APM, which needs an AsyncCallback to coordinate, something along the lines of BeginXXX / EndXXX. If that's the case, then nothing much is required to coordinate, as they use the AsyncCallback to execute a callback delegate. In fact, as mentioned earlier, there's no extra thread involved here, whether it's the old version or the new async-await. Regarding cancellation, a CancellationTokenSource can be used with async-await; a separate cancellation task is not required. Between multiple threads, coordination can be done via AutoResetEvent / ManualResetEvent.
If the calls mentioned above are synchronous, then you can use a Task wrapper and call them via an async method as follows:
await Task.Run(() => RecognizeAsync())
Though it's a sort of anti-pattern, it can be useful for making the whole call chain async.
Edits (to answer OP questions)
Thanks for your detailed answer, but I didn't understand some of it. At the first point you are saying that "it's imperative to introduce thread safety", but how?
Thread safety is introduced using synchronization constructs like lock, mutex, semaphore, monitor, and Interlocked; all of them serve the purpose of protecting an object from corruption / race conditions.
Are the steps I have taken, as described in my post, enough?
I don't see any thread-safety steps in your post; please highlight which steps you are talking about.
At the second point I'm asking how to use an object in the same thread all the time (whenever I use it). Async-Await has nothing to do with this, AFAIK.
Async-await is the only concurrency mechanism that, since it doesn't involve any extra thread besides the calling thread, can ensure everything always runs on the same thread, because it uses IO completion ports (hardware-based concurrency). If you use the Task Parallel Library instead, there's no way for you to ensure that the same / a given thread is always used, as it is a very high-level abstraction.
Check one of my recent detailed answers on threading here; it may help by providing some more detailed aspects.
It is not thread-safe, as the technical risk exists, but your policy is designed to cope with the problem and work around the risk. So, if things stand as you described, you do not have a thread-safe environment; however, you are safe. For now.
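If you do want the "everything on one thread" option from the question, one way to approximate it (a sketch only; the class name and API are invented here) is a dedicated worker thread that consumes a queue of actions, so every operation on the object runs on that same thread:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class SingleThreadExecutor : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public SingleThreadExecutor()
    {
        // One dedicated thread drains the queue; everything posted via Run
        // executes on this thread and only this thread.
        _thread = new Thread(() =>
        {
            foreach (Action action in _work.GetConsumingEnumerable())
                action();
        }) { IsBackground = true };
        _thread.Start();
    }

    public Task Run(Action action)
    {
        var tcs = new TaskCompletionSource<bool>();
        _work.Add(() =>
        {
            try { action(); tcs.SetResult(true); }
            catch (Exception ex) { tcs.SetException(ex); }
        });
        return tcs.Task;
    }

    public void Dispose() => _work.CompleteAdding();
}

Usage would be along the lines of creating the non-thread-safe object inside one executor and then calling await executor.Run(() => obj.MethodY()); from any thread.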

Multiple threads to write to Array

So it's in C#, and basically I'll have a position in an array for each thread to store some data. Would I need to lock this? For example:
int[] threads = new int[12];
Each thread would access a specific position in the array; for example, thread 1 will update the value in threads[0], thread 2 threads[1], etc.
Idea is to have the console print the values that are stored in the array.
Alright got so many comments. I thought I would clarify exactly what I am doing in the hopes I will learn even more. So basically the gist of it is:
The main thread kicks off 12 separate threads, and each thread calls a function in the main thread to get a bunch of records from the database. Access to that method is locked, but it returns about 100 records for the thread to process by itself.
As the thread is processing the records it makes a couple of web requests and inserts into the database. Once the thread has finished processing its batch of records it calls a function again from the main thread and that function kicks off a new thread in lieu of the last one being finished.
As the threads are doing their processing I would like to output their progress in the console. Initially I locked each console output because if the same function gets called simultaneously the cursor position for each output would go all over the place. So I was thinking I would have an array that stored the counts for each value and then have a function just print it all out. Although I am starting to wonder if that would really be any different than what I currently am doing.
If each thread is accessing a value at its own index, then you should be fine as you don't have to worry about access from multiple threads simultaneously.
I believe that if each thread only works on a separate part of the array, all will be well. If you're going to share data (i. e. communicate it between threads) then you'll need some sort of memory barrier to avoid memory model issues.
I believe that if you spawn a bunch of threads, each of which populates its own section of the array, then wait for all of those threads to finish using Thread.Join, that will do enough in terms of barriers for you to be safe.
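A minimal sketch of that pattern (the values written are arbitrary): each thread writes only its own slot, and Thread.Join before the read gives the main thread the barrier it needs.

using System;
using System.Threading;

class ArrayDemo
{
    static void Main()
    {
        int[] results = new int[12];
        var workers = new Thread[12];

        for (int i = 0; i < workers.Length; i++)
        {
            int index = i; // capture a fresh copy per thread
            workers[i] = new Thread(() => results[index] = index * 10);
            workers[i].Start();
        }

        // Join both waits for the workers and acts as a memory barrier,
        // so the values written by each thread are visible here.
        foreach (var worker in workers) worker.Join();

        Console.WriteLine(string.Join(", ", results));
    }
}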
MSDN documentation on Arrays says:
Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
This implementation does not provide a synchronized (thread safe) wrapper for an Array; however, .NET Framework classes based on Array provide their own synchronized version of the collection using the SyncRoot property.
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
So no, they're not thread safe in general. A collection is said to be "not thread-safe" when concurrent accesses could fail internally, but as each thread will access a different position, there is no concurrent access here.

Multi-threading concept and lock in c#

I have read about lock, though I understood nothing at all.
My question is: why do we use an otherwise unused object and lock that, and how does this make something thread-safe or help with multi-threading? Isn't there another way to make code thread-safe?
public class test {
    private object Lock { get; set; }
    ...
    lock (this.Lock) { ... }
    ...
}
Sorry if my question is very stupid, but I don't understand, although I've used it many times.
Accessing a piece of data from one thread while another thread is modifying it is called a "data race condition" (or just a "data race") and can lead to corruption of data. (*)
Locks are simply a mechanism for avoiding data races. If two (or more) concurrent threads lock the same lock object, then they are no longer concurrent and can no longer cause data races, for the duration of the lock. Essentially, we are serializing the access to shared data.
The trick is to keep your locks as "wide" as you must to avoid data races, yet as "narrow" as you can to gain performance through concurrent execution. This is a fine balance that can easily go out of whack in either direction, which is why multi-threaded programming is hard.
Some guidelines:
As long as all threads are just reading the data and none will ever modify it, a lock is unnecessary.
Conversely, if at least one thread might at some point modify the data, then all concurrent code paths accessing that same data must be properly serialized through locks, even those that only read the data.
Using a lock in one code path but not the other will leave the data wide open to race conditions.
Also, using one lock object in one code path, but a different lock object in another (concurrent) code path does not serialize these code paths and leaves you wide open to data races.
On the other hand, if two concurrent code paths access different data, they can use different lock objects. But, whenever there is more than one lock object, watch out for deadlocks. A deadlock is often also a "code race condition" (and a heisenbug, see below).
The lock object does not need to be (and usually isn't) the same thing as the data you are trying to protect. Unfortunately, there is no language facility that lets you "declare" which data is protected by which lock object, so you'll have to very carefully document your "locking convention" both for other people that might maintain your code, and for yourself (since even after a short time you will forget some of the nooks and crannies of your locking convention).
It's usually a good idea to protect the lock object from the outside world as much as you can. After all, you are using it for the very sensitive task of locking and you don't want it locked by external actors in unforeseen ways. That's why using this or a public field as a lock object is usually a bad idea.
The lock keyword is simply a more convenient syntax for Monitor.Enter and Monitor.Exit.
The lock object can be any object in .NET, but value objects will be boxed in the call to Monitor.Enter, which means threads will not share the same lock object, leaving the data unprotected. Therefore, only use reference types as lock objects.
For inter-process communication you can use a global mutex, which can be created by passing a non-empty name to Mutex Constructor. Global mutexes provide essentially the same functionality as regular "local" locking, except they can be shared between separate processes.
There are synchronization mechanisms other than locks, such as semaphores, condition variables, message queues or atomic operations. Be careful when mixing different synchronization mechanisms.
Locks also behave as memory barriers, which is increasingly important on modern multi-core, multi-cache CPUs. This is part of the reason why you need locks on reading the data and not just writing.
(*) It is called "race" because concurrent threads are "racing" towards performing an operation on the shared data and whoever wins that race determines the outcome of the operation. So the outcome depends on timing of the execution, which is essentially random on modern preemptive multitasking OSes. Worse yet, timing is easily modified by a simple act of observing the program execution through tools such as debugger, which makes them "heisenbugs" (i.e. the phenomenon being observed is changed by the mere act of observation).
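To make those guidelines concrete, here is a minimal sketch (the class is invented for the example) in which a private reference-type lock object protects both writes and reads of the shared state:

class SharedCounter
{
    // A private reference-type lock object: not 'this', not public, not a value type.
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        lock (_sync) { _count++; }   // writes are serialized
    }

    public int Read()
    {
        // Reads take the same lock: it serializes against writers and also
        // acts as a memory barrier, so stale cached values are not observed.
        lock (_sync) { return _count; }
    }
}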
A lock object is like a door into a single room where only one guest can enter at a time.
The room can be your data, the guest can be your function.
define data (room)
add door (lock object)
invite guests (functions)
using the lock instruction, close/open the door to allow only one guest into the room at a time.
Why do we need this? If you simultaneously write data to a file (just an example; there can be thousands of others), you will need to synchronize your functions' access to the file being written (close/open the door for guests), so that each function appends to the end of the file (assuming that is the requirement of this example).
This is naturally not the only way to synchronize threads; there are more out there:
Monitors
Wait handles
...
Check out the link for complete information and a description of each of them:
Thread Synchronization
Yes, there is indeed another way:
using System.Runtime.CompilerServices;
class Test
{
    private object Lock { get; set; }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Foo()
    {
        // Now this instance is locked
    }
}
While it looks more "natural", it's not used often, because the object is locking on itself this way, so any other code that locks on the same object risks causing a deadlock.
Because of this, you usually create a (lazy-initialized) private field referring to an object, and use that object as a lock instead. This will guarantee that no one else can lock against the same object as you.
A little more detail on what's happening beneath the hood:
When you "lock on an object", you're not locking on the object itself. Rather, you're using the object as a guaranteed-to-be-unique-address-in-memory throughout your program. When you "lock", the runtime takes the object's address, uses it to look up the actual lock inside another table (which is hidden from you), and uses that object as the ""lock" (also known as a "critical section").
So really, for you, an object is just a proxy/symbol -- it isn't doing anything by itself; it's just acting as a unique indicator that will never clash with another valid object in the same program.
When you have different threads accessing the same variable/resource at the same time, they may overwrite each other's changes and you can get unexpected results. A lock makes sure only one thread can access the variable at a time; the remaining threads queue up for access to this variable/resource until the lock is released.
Suppose we have the balance variable of an account.
Two different threads read its value, which was 100.
Suppose the first thread adds 50 to it (100 + 50) and saves it, so the balance is now 150.
The second thread has already read 100 and, meanwhile, subtracts 50 (100 - 50). But the point to note here is that the first thread has made the balance 150, so the second thread should have computed 150 - 50. This could cause serious problems.
So a lock makes sure that when one thread wants to change some resource's state, it locks the resource and releases it only after committing the change.
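A sketch of that balance example (class and amounts invented for illustration): with the locks in place, the two read-modify-write sequences can no longer interleave, and the final balance is always 100 + 50 - 50 = 100.

using System;
using System.Threading.Tasks;

class Account
{
    private readonly object _sync = new object();
    private decimal _balance = 100m;

    public void Deposit(decimal amount)
    {
        lock (_sync) { _balance += amount; }   // read-modify-write done atomically
    }

    public void Withdraw(decimal amount)
    {
        lock (_sync) { _balance -= amount; }
    }

    public decimal Balance
    {
        get { lock (_sync) { return _balance; } }
    }
}

class AccountDemo
{
    static void Main()
    {
        var account = new Account();
        Task.WaitAll(
            Task.Run(() => account.Deposit(50m)),
            Task.Run(() => account.Withdraw(50m)));
        Console.WriteLine(account.Balance); // always 100
    }
}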
The lock statement introduces the concept of mutual exclusion. Only one thread can acquire a lock on a given object at any one time. This prevents threads from accessing shared data structures concurrently, thus corrupting them.
If other threads already hold a lock, the lock statement will block until it is able to acquire an exclusive lock on its argument before allowing its block to execute.
Note that the only thing lock does is control entry to the block of code. Access to members of the class is completely unrelated to the lock. It is up to the class itself to ensure that accesses that must be synchronized are coordinated by the use of lock or other synchronization primitives. Also note that access to some or all members may not have to be synchronized. For instance, if you want to maintain a counter, you could use the Interlocked class without locking.
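For the counter case mentioned above, a minimal sketch using Interlocked instead of a lock might look like this (the class is invented for the example):

using System.Threading;

class RequestStats
{
    private int _requests;

    // Interlocked.Increment performs the read-modify-write atomically, no lock needed.
    public void Record() => Interlocked.Increment(ref _requests);

    // Volatile.Read guarantees the caller sees the latest published value.
    public int Requests => Volatile.Read(ref _requests);
}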
An alternative to locking is lock-free data structures, which behave correctly in the presence of multiple threads. Operations on lock-free data structures must be designed very carefully, usually with the assistance of lock-free primitives such as compare-and-swap (CAS).
The general theme of such techniques is to try to perform operations on data structures atomically and detect when operations fail due to concurrent actions by other threads, followed by retries. This works well on a lightly loaded system where failures are unlikely, but can produce runaway behaviour as the failure rate climbs and retries become a dominant load. This problem can be ameliorated by backing off the retry rate, effectively throttling the load.
A more sophisticated alternative is software transactional memory. Unlike CAS, STM generalizes the concept of fail-and-retry to arbitrarily complex memory operations. In simple terms, you start a transaction, perform all your operations, and finally commit. The system detects if the operations cannot succeed due to conflicting operations performed by other threads that beat the current thread to the punch. In such cases, STM can either fail outright, requiring the application to take corrective action, or, in more sophisticated implementations, it can automatically go back to the start of the transaction and try again.
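A rough sketch of the fail-and-retry idea using compare-and-swap (the clamping operation here is just an arbitrary example):

using System;
using System.Threading;

static class LockFreeClamp
{
    private static int _value;

    // Classic CAS loop: read the current value, compute the new one, try to
    // publish it, and retry if another thread changed the value in between.
    public static int AddClamped(int delta, int max)
    {
        while (true)
        {
            int current = Volatile.Read(ref _value);
            int next = Math.Min(current + delta, max);
            if (Interlocked.CompareExchange(ref _value, next, current) == current)
                return next;
        }
    }
}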
Your confusion is pretty typical for those just getting familiar with the lock keyword in C#. You are right, the object used in the lock statement is really nothing more than a token that defines a critical section. That object, in no way, has any protection from multithreaded access itself.
The way this works is that the CLR reserves a 4-byte (on 32-bit systems) section in the object header called the sync block. The sync block is nothing more than an index into an array that stores the actual critical section information. When you use the lock keyword, the CLR modifies this sync block value accordingly.
There are advantages and disadvantages to this scheme. The advantage is that it made for a fairly elegant solution to defining critical sections. One obvious disadvantage is that each object instance contains the sync block and most instances never use it so it would seem to be a waste of space in most cases. Another disadvantage is that boxed value types can be used which is almost always wrong and certainly leads to confusion.
I remember way back when .NET was first released that there was a lot of chatter over whether the lock keyword was good or bad for the language. The general consensus (at least as I remember it) was that it was bad, because the using keyword could have been easily used instead. In fact, a solution that used the using keyword actually would have made more sense because it could have been done without the need for the sync block. The C# design team even went on record to say that had they been given a second chance, the lock keyword never would have made it into the language. [1]
[1] The only reference I could find for this is on Jon Skeet's website here.

is locking necessary for Dictionary lookup?

lock (dictionaryX)
{
    dictionaryX.TryGetValue(key, out value);
}
Is locking necessary while doing lookups in a Dictionary?
The program is multithreaded, and while adding a key/value to the dictionary, the dictionary is locked.
As mentioned here:
Using TryGetValue() without locking is not safe. The dictionary is temporarily in a state that makes it unsuitable for reading while another thread is writing the dictionary. A dictionary will reorganize itself from time to time as the number of entries it contains grows. When you read at the exact time this re-organization takes place, you'll run the risk of finding the wrong value for the key when the buckets got updated but not yet the value entries.
UPDATE:
Take a look at the "Thread Safety" part of this page too.
As with many subtle questions in programming, the answer is: Not necessarily.
If you only add values as an initialization, then the subsequent reading does not need to be synchronized. But, on the other hand, if you're going to be reading and writing at all times, then absolutely you need to protect that resource.
However, a full-blown lock may not be the best way, depending on the amount of traffic your Dictionary gets. Try a ReaderWriterLockSlim if you are using .NET 3.5 or greater.
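A sketch of the ReaderWriterLockSlim approach (the wrapper type is invented for the example): many readers can proceed in parallel, while writers get exclusive access.

using System.Collections.Generic;
using System.Threading;

class GuardedLookup
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, int> _map = new Dictionary<string, int>();

    public bool TryGet(string key, out int value)
    {
        _lock.EnterReadLock();            // multiple readers may hold this at once
        try { return _map.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    public void Set(string key, int value)
    {
        _lock.EnterWriteLock();           // writers are exclusive
        try { _map[key] = value; }
        finally { _lock.ExitWriteLock(); }
    }
}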
If you have multiple threads accessing the Dictionary, then you do need to lock on updates and on lookup. The reason you need to lock on lookup is that there could be an update taking place at the same time you're doing the lookup, and the Dictionary could be in an inconsistent state during the update. For example, imagine that your have one thread doing this:
if (myDictionary.TryGetValue(key, out value))
{
}
and a separate thread is doing this:
myDictionary.Remove(key);
What could happen is that the thread doing the TryGetValue determines that the item is in the dictionary, but before it can retrieve the item, the other thread removes it. The result would be that the thread doing the lookup would either throw an exception or TryGetValue would return true but the value would be null or possibly an object that does not match the key.
That's only one thing that can happen. Something similarly disastrous can happen if you're doing a lookup on one thread and another thread does an add of the value that you're trying to look up.
Locking is only needed when you are synchronizing access to a resource between threads. As long as there are not multiple threads involved, locking is not needed here.
In the context of updating and reading the value from multiple threads, yes a lock is absolutely necessary. In fact if you're using 4.0 you should consider switching to one of the collections specifically designed for concurrent access.
Use the new ConcurrentDictionary<TKey, TValue> object and you can forget about having to do any locks.
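For example, a minimal sketch of the lock-free-for-the-caller usage (keys and values are arbitrary):

using System;
using System.Collections.Concurrent;

class ConcurrentLookupDemo
{
    static void Main()
    {
        var cache = new ConcurrentDictionary<string, int>();

        // GetOrAdd and AddOrUpdate perform the check-then-act sequence atomically,
        // so no external lock is needed even with many concurrent callers.
        int value = cache.GetOrAdd("answer", _ => 42);
        cache.AddOrUpdate("answer", 1, (key, current) => current + 1);

        if (cache.TryGetValue("answer", out int latest))
            Console.WriteLine(latest); // 43
    }
}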
Yes, you need to lock the dictionary for access in a multithreaded environment. Writing to a dictionary is not atomic, so it could add the key but not yet the value; in that case, when you access it, you could get an exception.
If you are on .NET 4, you can replace it with ConcurrentDictionary to do this safely. There are other similar collections, preferred when you need multithreaded access, in the System.Collections.Concurrent namespace.
Don't use roll-your-own locking if this is an option for you.
Yes, you should lock if this dictionary is a shared resource between more than one thread. This ensures that you get the correct value and that another thread doesn't happen to change the value mid-way through your lookup call.
Yes, you have to lock if you have multithreaded updates on that dictionary. Check this great post for details: “Thread safe” Dictionary(TKey,TValue)
But since ConcurrentDictionary<TKey, TValue> was introduced, you can use it either via .NET 4 or by using Rx in 3.5 (which contains System.Threading.dll with implementations of the new thread-safe collections).
