I have a class that loads some data from a server and transforms it. The class contains a method that reloads this data from the server.
I'm not sure whether the reload is thread safe; I read that I might need to add the volatile keyword or use locks.
public class Tenants : ITenants
{
    private readonly string url = "someurl";
    private readonly IHttpClientFactory httpClientFactory;
    private ConfigParser parser;

    public Tenants(IHttpClientFactory httpClientFactory)
    {
        this.httpClientFactory = httpClientFactory;
    }

    public async Task Refresh()
    {
        TConfig[] data = await ConfigLoader.GetData(httpClientFactory.CreateClient(), url);
        parser = new ConfigParser(data);
    }

    public async Task<TConfig> GetSettings(string name)
    {
        if (parser == null)
            await Refresh();
        return parser.GetSettings(name);
    }
}
public class ConfigParser
{
    private readonly ImmutableDictionary<string, TConfig> configs;

    public ConfigParser(TConfig[] configs)
    {
        this.configs = configs.ToImmutableDictionary(s => s.name, v => v);
    }

    public TConfig GetSettings(string name)
    {
        if (!configs.ContainsKey(name))
        {
            return null;
        }
        return configs[name];
    }
}
The Tenants class will be injected as a singleton into other classes via a DI/IoC container.
I think that this design makes it thread safe.
The reference assignment is atomic, and the parser is immutable with no exposed members to be changed by any consuming code. (TConfig is also immutable.)
I also don't think I need a lock: if 2 threads try to set the reference at the same time, the last one wins, which I am happy with.
And I don't know enough to understand whether I need volatile. But from what I understood about it, I won't need it, as there is only one reference to parser that I care about, and it's never exposed outside this class.
But I think some of my statements/assumptions above could be wrong.
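One small hardening worth considering, independent of the volatile question: read the shared parser field into a local once, so the null check and the subsequent use see the same snapshot of the reference. This is only a sketch of the idea against the class above, not tested code:

```csharp
public async Task<TConfig> GetSettings(string name)
{
    // Read the shared field once into a local so the null check and the
    // use refer to the same snapshot of the reference.
    var snapshot = parser;
    if (snapshot == null)
    {
        await Refresh();
        snapshot = parser; // re-read after Refresh has published a new instance
    }
    return snapshot.GetSettings(name);
}
```

With the current code the second read cannot be null, because Refresh always assigns a non-null parser before returning, but the local-copy pattern keeps that invariant explicit instead of implicit.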
EDIT:
From your comments I can deduce that you do not understand the difference between immutability and thread safety.
Immutability means an instance of an object cannot be mutated (its internal or external state cannot change).
Thread safe means multiple threads can access the class/method without causing errors like race conditions or deadlocks, or unexpected behavior such as something that should be executed only once being executed twice.
Immutable objects are thread safe, but something doesn't have to be immutable to be thread safe.
Your Tenants class is neither immutable nor thread safe because:
Its internal state can change after instantiation.
It contains unexpected behavior: the request to retrieve the config can be executed twice, where it should only happen once.
If you read my answer below you can determine that if you are OK with the request happening twice (which you shouldn't be): you don't have to do anything, but you could add the volatile keyword to the parser field to prevent SOME scenarios, but definitely not all.
You don't see any locks in immutable objects because there's no writing happening to the state of the object.
When there are write operations in an object, it is not immutable anymore (like your Tenants class). To make an object like that thread safe, you need to lock the write operations that can cause errors, like the unexpected behavior of something that should be executed only once being executed twice.
ConfigParser seems to be thread safe; Tenants, however, definitely isn't.
Your Tenants class is also not immutable, since it exposes methods which change the state of the class (both the GetSettings and Refresh methods).
If 2 threads call GetSettings at the same time while parser is null, 2 requests will be made to fetch the ConfigParser. You can be OK with this, but it is bad practice, and it also means the method is not thread safe.
If you are fine with the request being executed twice you could use volatile here:
The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. The compiler, the runtime system, and even hardware may rearrange reads and writes to memory locations for performance reasons. Fields that are declared volatile are not subject to these optimizations. Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed.
Volatile will prevent threads from reading outdated values. This means you can avoid some of the extra requests (from threads that still think parser is null), but it will not completely prevent a method or instruction from being executed multiple times at the same time.
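As a sketch of what that would look like in the Tenants class (assuming you accept the possible duplicate request):

```csharp
// volatile guarantees that a write to this reference by one thread is
// observed, in order, by subsequent reads on other threads.
private volatile ConfigParser parser;

public async Task<TConfig> GetSettings(string name)
{
    // Two threads can still both observe null here and both call Refresh;
    // volatile narrows that window but does not close it.
    if (parser == null)
        await Refresh();
    return parser.GetSettings(name);
}
```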
In this situation you need to lock:
The lock statement acquires the mutual-exclusion lock for a given object, executes a statement block, and then releases the lock. While a lock is held, the thread that holds the lock can again acquire and release the lock. Any other thread is blocked from acquiring the lock and waits until the lock is released.
Meaning you can prevent multiple threads from executing a method or instruction at the same time.
Unfortunately, you can't use await inside a lock.
What you want to do is:
If Refresh needs to be called:
    If another thread is already working on the Refresh:
        Wait for the other thread to finish, and do not call Refresh.
        Continue with the result from the other thread.
    If no other thread is already working on the Refresh:
        Invoke the Refresh method.
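The outline above can also be sketched without a library, using SemaphoreSlim as an await-compatible gate. This is a hypothetical rework of the question's GetSettings, not a drop-in, and it makes every cold-start caller wait rather than sharing an in-flight task:

```csharp
private readonly SemaphoreSlim refreshGate = new SemaphoreSlim(1, 1);

public async Task<TConfig> GetSettings(string name)
{
    if (parser == null)
    {
        await refreshGate.WaitAsync(); // unlike lock, SemaphoreSlim can be awaited
        try
        {
            // Double check: another thread may have completed Refresh
            // while we were waiting at the gate.
            if (parser == null)
                await Refresh();
        }
        finally
        {
            refreshGate.Release();
        }
    }
    return parser.GetSettings(name);
}
```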
I have written a library for this called TaskSynchronizer. You can use it to build a truly thread safe version of your Tenants class.
Example:
public static TaskSynchronizer Synchronizer = new TaskSynchronizer();

public static async Task DoWork()
{
    await Task.Delay(100); // Some heavy work.
    Console.WriteLine("Work done!");
}

public static async Task WorkRequested()
{
    using (Synchronizer.Acquire(DoWork, out var task)) // Synchronize the call to work.
    {
        await task;
    }
}

static void Main(string[] args)
{
    var tasks = new List<Task>();
    for (var i = 0; i < 2; i++)
    {
        tasks.Add(WorkRequested());
    }
    Task.WaitAll(tasks.ToArray());
}
will output:
Work done!
E.g. the async DoWork method has only been invoked once, even though it was requested twice at the same time.
Related
I want to know how a singleton works in the case of multithreading.
Suppose 2 threads enter the instantiation code as shown below. The 1st thread enters the instantiation code and locks that part, proceeding with its operation; until then the other thread waits.
Once the first thread completes its operation, the 2nd thread will enter the instantiation code. Now I want to know: who takes responsibility for releasing the lock, since the 1st thread has completed its operation? And will the second thread create a new instance, or will it share the 1st thread's instance?
Code:
public sealed class Singleton
{
    private static Singleton instance = null;
    // adding locking object
    private static readonly object syncRoot = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}
once the first thread completes its operation 2nd thread will enter the instantiation code , now I want to know who takes the responsibility to release the lock since 1st thread has completed its operation
Each thread acquires and releases the lock individually. The first thread acquires the lock; while it has the lock, the second thread cannot acquire it.
Once the first thread has released the lock (which happens when execution leaves the block of code controlled by the lock statement), the second thread can then acquire the lock and execute the code. It will then release the lock again, when it's done with it.
…and will the second thread create a new instance, or will it share the 1st thread's instance?
In this particular implementation, the singleton is initialized only once, even if the second thread initially observes the backing field to be null. By using lock, the code ensures that only one thread will ever actually create the singleton instance.
There are variations in which more than one initialization could occur, but IMHO these are inferior ways to do it.
That said, for that matter, it's my opinion that in the context of .NET, even the double-lock above is needlessly complicated. In most cases, it suffices to use a simple field initializer. In the cases where that's not efficient enough, one can use the Lazy<T> class.
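For illustration, a minimal Lazy&lt;T&gt; version of the same singleton might look like this (a sketch, not the only correct form):

```csharp
public sealed class Singleton
{
    // Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication,
    // so the factory delegate runs exactly once even under contention.
    private static readonly Lazy<Singleton> instance =
        new Lazy<Singleton>(() => new Singleton());

    private Singleton() { }

    public static Singleton Instance => instance.Value;
}
```

This moves all the locking concerns into the framework type, so there is no double-check logic left to get wrong.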
For a much more in-depth discussion, see the related Stack Overflow question Thread Safe C# Singleton Pattern and references provided there.
Alternatively, you can use a static constructor; the runtime guarantees it runs exactly once, which makes the initialization thread safe.
What you've used here is called double-check locking, a fairly common serialization pattern for multi-threaded code. It works.
A lock is released automatically once you fall out of the lock scope.
Assuming there is contention, one thread would test->acquire->test->initialize->release, and the next would simply test->acquire->test->release: no double-initialization.
Most code examples I've seen of locking use a pattern like this:
private static int _counter = 0;
private static readonly object _sync = new object();

public void DoWork()
{
    int counter;
    lock (_sync)
    {
        counter = _counter++;
    }
    // etc ...
}
My guess is that Monitor.Enter uses some sort of reference pointer to the object that lives in memory to build some internal dictionary of what is locked by what thread. Not sure if this is correct, however.
I'm wondering if there are any ramifications of using a more complex object in the Monitor.Enter parameter. For example, if multiple threads were trying to broadcast to a WebSocket, it would be necessary to either
Queue up the requests and have a single thread be responsible for sending, or
Use locking to prevent multiple threads from sending to the same socket.
Suppose the WebSocket object itself was used for the lock:
public async Task SendMessage(WebSocket socket, ArraySegment<byte> data)
{
    lock (socket)
    {
        if (socket.State == WebSocketState.Open)
        {
            // await is not allowed inside a lock, so the send would have
            // to be waited on synchronously here.
            socket.SendAsync(
                data,
                WebSocketMessageType.Text,
                true,
                CancellationToken.None).Wait();
        }
    }
}
If Monitor.Enter simply uses a reference pointer to the underlying object in memory, there would theoretically be no side effects to the fact that it is a big, complex object, instead of a tiny little new object().
Does anyone have any data on this?
Edit: After some answers below, I've come up with an alternative pattern, extending the WebSocket example. Any further feedback would be appreciated.
A thin wrapper around the underlying object allows for the creation of a private readonly object to use for locking.
The async method inside the lock is made synchronous.
Note that this pattern doesn't take into account the suggestion of only allowing a single thread to have access to the WebSocket connection (through a queue system) -- I'm mostly trying to work through my understanding of a locking pattern with a specific example.
public class SocketWrapper
{
    private readonly object _sync = new object();

    public WebSocket Socket { get; private set; }

    public SocketWrapper(WebSocket socket)
    {
        this.Socket = socket;
    }

    public async Task SendMessage(ArraySegment<byte> data)
    {
        await Task.Yield();
        lock (this._sync)
        {
            // await is not allowed inside a lock, so block on the send instead.
            this.Socket.SendAsync(
                data,
                WebSocketMessageType.Text,
                true,
                CancellationToken.None).Wait();
        }
    }
}
The lock mechanism uses the object's header to lock on; it doesn't matter how complex the object is, because the header is all the mechanism uses. However, there are some good rules of thumb for locks:
Most of the time you should only be locking on readonly references.
Create a new private object for your locks, for clarity and because other code might lock on the object itself; see this answer for more information.
Don't make your locks static unless your method is locking at a program level.
You can read more about the lock keyword and Monitor.Enter on MSDN:
Monitor.Enter Method (Object)
https://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
This is fine. .NET uses a bit of the Object header to effectively create and use a spinlock, or if that fails, it uses a pool of Semaphores.
In either case, it's based on the underlying Object header that all objects in .NET have. It doesn't matter how complex or simple the containing object is.
My guess is that Monitor.Enter uses some sort of reference pointer to the object that lives in memory to build some internal dictionary of what is locked by what thread. Not sure if this is correct, however.
As others have noted, there's actually a Monitor built-into every single .NET reference type. There's not an actual "dictionary" (or any other collection) of what is held by any thread.
I'm wondering if there are any ramifications of using a more complex object in the Monitor.Enter parameter.
Using any reference type is fine. However...
multiple threads were trying to broadcast to a WebSocket
In this kind of situation, queueing is preferred. In particular, await cannot exist inside a lock. It's possible to do a kind of implicit queueing by using an async-compatible lock, but that's a whole other story.
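As a sketch of the queueing approach (the SocketSender name and shape are mine, not an established API): a single consumer task drains a channel, so sends never overlap and no lock is needed around the await:

```csharp
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class SocketSender
{
    private readonly Channel<ArraySegment<byte>> queue =
        Channel.CreateUnbounded<ArraySegment<byte>>();
    private readonly WebSocket socket;

    public SocketSender(WebSocket socket)
    {
        this.socket = socket;
        _ = Task.Run(PumpAsync); // single consumer: sends are serialized
    }

    public ValueTask EnqueueAsync(ArraySegment<byte> data) =>
        queue.Writer.WriteAsync(data);

    private async Task PumpAsync()
    {
        await foreach (var data in queue.Reader.ReadAllAsync())
        {
            if (socket.State == WebSocketState.Open)
                await socket.SendAsync(data, WebSocketMessageType.Text,
                                       true, CancellationToken.None);
        }
    }
}
```

Error handling and shutdown (completing the writer, observing the pump task) are omitted for brevity.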
Also, it's not recommended to lock on an argument. If this example was synchronous, it would still be not recommended:
// NOT recommended
public void SendMessage(WebSocket socket, ArraySegment<byte> data)
{
lock (socket)
...
}
There are some lock guidelines that have developed over the years:
Locks should be private. Since any code can take a lock, as soon as you lock an instance that is accessible by any other code, you open up the possibility of a deadlock. Note that it is the privacy that is important in this rule, so lock(this) is generally understood to be "not recommended", but the reason is not because you "shouldn't lock this", but rather because "this is not private, and you should only lock private instances".
Never call arbitrary code while holding a lock. This includes raising events or invoking callbacks. Again, this opens up the possibility of a deadlock.
The code within a lock (in a "critical section") should be as short as possible.
It's generally best to have an explicit "mutex" object (i.e., _sync), for code readability and maintainability.
The "mutex" should be documented as to what other object(s) it is protecting.
Avoid code that needs to take multiple locks. If this is unavoidable, establish and document a lock hierarchy so that locks are always acquired in the same order.
These rules naturally result in the common mutex code:
private readonly object _sync = new object();
I'm designing a base class that, when inherited, will provide business functionality against a context in a multithreaded environment. Each instance may have long-running initialization operations, so I want to make the objects reusable. In order to do so, I need to be able to:
Assign a context to one of these objects to allow it to do its work
Prevent an object from being assigned a new context while it already has one
Prevent certain members from being accessed while the object doesn't have a context
Also, each context object can be shared by many worker objects.
Is there a correct synchronization primitive that fits what I'm trying to do? This is the pattern I've come up with that best fits what I need:
private Context currentContext;

internal void BeginProcess(Context currentContext)
{
    // attempt to acquire a lock; throw if the lock is already acquired,
    // otherwise store the current context in the instance field
}

internal void EndProcess()
{
    // release the lock and set the instance field to null
}

private void ThrowIfNotProcessing()
{
    // throw if this method is called while there is no lock acquired
}
Using the above, I can protect base class properties and methods that shouldn't be accessed unless the object is currently in the processing state.
protected Context CurrentContext
{
    get
    {
        this.ThrowIfNotProcessing();
        return this.currentContext;
    }
}

protected void SomeAction()
{
    this.ThrowIfNotProcessing();
    // do something important
}
My initial thought was to use Monitor.Enter and related methods, but those don't prevent same-thread reentrancy (multiple calls to BeginProcess on the original thread).
There is one synchronization object in .NET that isn't re-entrant: you are looking for a Semaphore.
Before you commit to this, get your ducks in a row and ask yourself how it is possible that BeginProcess() can be called again on the same thread. That is very, very unusual; your code has to be re-entrant for that to happen. This can normally only happen on a thread that has a dispatcher loop; the UI thread of a GUI app is a common example. If this is truly possible and you actually use a Semaphore, then you'll have to deal with the consequence as well: your code will deadlock, since it recursed into BeginProcess and stalls on the semaphore, thus never completing and never able to call EndProcess(). There's a good reason why Monitor and Mutex are re-entrant :)
You can use the Semaphore class, which came with .NET Framework 2.0.
A good use of semaphores is to synchronize a limited number of resources. In your case it seems you have resources, like Context, which you want to share between consumers.
You can create a semaphore to manage the resources like:
var resourceManager = new Semaphore(10, 10);
And then wait for a resource to be available in the BeginProcess method using:
resourceManager.WaitOne();
And finally free the resource in the EndProcess method using:
resourceManager.Release();
Here's a good blog about using Semaphores in a situation like yours:
https://web.archive.org/web/20121207180440/http://www.dijksterhuis.org/using-semaphores-in-c/
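Putting those three snippets together, a hedged sketch of the answer's idea might look like this. Note that the initial count must equal the maximum count, or every WaitOne would block until someone called Release first:

```csharp
// Initial count == max count, so up to 10 callers can hold a slot at once.
private static readonly Semaphore resourceManager = new Semaphore(10, 10);

private Context currentContext;

internal void BeginProcess(Context currentContext)
{
    resourceManager.WaitOne();     // blocks when all 10 slots are taken
    this.currentContext = currentContext;
}

internal void EndProcess()
{
    this.currentContext = null;
    resourceManager.Release();     // hand the slot back
}
```

Unlike Monitor, a Semaphore is not re-entrant, so a second BeginProcess on the same thread consumes another slot (or blocks) instead of silently succeeding.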
The Interlocked class can be used for a thread-safe solution that exits the method instead of blocking when a re-entrant call is made. Like Vlad Gonchar's solution, but thread safe.
private int refreshCount = 0;

private void Refresh()
{
    if (Interlocked.Increment(ref refreshCount) != 1)
    {
        // Another call is already in progress; undo our increment and bail out,
        // otherwise the counter would never return to zero.
        Interlocked.Decrement(ref refreshCount);
        return;
    }
    try
    {
        // do something here
    }
    finally
    {
        Interlocked.Decrement(ref refreshCount);
    }
}
There is a very simple way to prevent re-entrancy (on one thread):
private bool bRefresh = false;

private void Refresh()
{
    if (bRefresh) return;
    bRefresh = true;
    try
    {
        // do something here
    }
    finally
    {
        bRefresh = false;
    }
}
I have a question about locking and whether I'm doing it right.
In a class, I have a static lock-object which is used in several methods, assume access modifiers are set appropriately, I won't list them to keep it concise.
class Foo
{
    static readonly object MyLock = new object();

    void MethodOne()
    {
        lock (MyLock)
        {
            // Dostuff
        }
    }

    void MethodTwo()
    {
        lock (MyLock)
        {
            // Dostuff
        }
    }
}
Now, the way I understand it, a lock guarantees that only one thread at a time will be able to grab it and get into the DoStuff() part of one method.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time, meaning that it uses the lock it has gotten for both methods?
My intended functionality is that every method in this class can only be called by a single thread while no other method in this class is currently executing.
The underlying usage is a database class for which I only want a single entry and exit point. It uses SQL Compact, so if I attempt to read protected data I get all sorts of memory errors.
Let me just add that every once in a while a memory exception occurs on the database and I don't know where it's coming from. I thought it was because one thread was doing multiple things with the database before completing them, but this code seems to work like it should.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time?
No. The same thread can't call both methods at the same time, whether a lock is used or not.
lock(MyLock)
It can be understood as follows:
The MyLock object has a key to enter itself. A thread (say t1) that accesses it first gets the key; other threads have to wait until t1 releases it. But t1 can call another method and will pass this line, as it has already acquired the lock.
Calling both methods at the same time, however, is not possible for a single thread.
the way I understand it, a lock guarantees only one thread at a time will be able to grab it and get into the DoStuff() part of one method.
Your understanding is correct, but remember that threads are used for parallel execution, while execution within a single thread is always sequential.
But is it possible for the same thread to call MethodOne() and MethodTwo() at the same time?
It is not possible for a single thread to call anything at the same time.
In a multithreaded application, this can happen - the methods can be called simultaneously, but the // Dostuff sections can only be accessed sequentially.
My intended functionality is that every method in this class can only be called by a single thread while no other method in this class is currently executing.
Then don't use additional threads in your application; just have the main one.
The only way for a thread inside the Dostuff of the running MethodOne to call MethodTwo is for that Dostuff to make the call to MethodTwo itself. If this is not happening (i.e. methods in your "mutually locked" group do not call each other), you are safe.
There are a few things that can be answered here.
But is it possible for the same thread to call MethodOne() and
MethodTwo() at the same time? Meaning that he uses the lock he has
gotten for both methods?
No, a thread has a single program counter; it's either in MethodOne() or in MethodTwo(). If, however, you have something as follows:
public void MethodThree()
{
    lock (MyLock)
    {
        MethodOne();
        MethodTwo();
    }
}
That will also work: a thread can acquire the same lock multiple times. Just watch out for what you're doing, as you can easily get into a deadlock as the code becomes more complex.
My intended functionality is that every method in this class can only
be called by a single thread while no other method in this class is
currently executing.
The underlying usage is a database class for which I only want a
single entry and exit point. It uses SQL Compact, so if I attempt to
read protected data I get all sorts of memory errors.
I don't really understand why, but if you think you need to do this because you're using SQL Compact, you're wrong. You should be using transactions, which are supported in SqlCe.
E.g.
using (var connection = new SqlCeConnection())
using (var command = new SqlCeCommand())
using (var transaction = connection.BeginTransaction())
{
    command.Connection = connection;
    command.Transaction = transaction;
    command.ExecuteNonQuery();
    transaction.Commit();
}
I have a method which should be executed in an exclusive fashion. Basically, it's a multithreaded application where the method is invoked periodically by a timer, but could also be triggered manually by a user action.
Let's take an example :
The timer elapses, so the method is called. The task could take a few seconds.
Right after, the user clicks some button, which should trigger the same task: BAM. It does nothing, since the method is already running.
I used the following solution :
public void DoRecurentJob()
{
    if (!Monitor.TryEnter(this.lockObject))
    {
        return;
    }
    try
    {
        // Do work
    }
    finally
    {
        Monitor.Exit(this.lockObject);
    }
}
Where lockObject is declared like that:
private readonly object lockObject = new object();
Edit : There will be only one instance of the object which holds this method, so I updated the lock object to be non-static.
Is there a better way to do that ? Or maybe this one is just wrong for any reason ?
This looks reasonable if you are just interested in not having the method run in parallel. There's nothing to stop it from running immediately after itself, say if you pushed the button half a microsecond after the timer executed the Monitor.Exit().
And having the lock object as readonly static also makes sense.
You could also use a Mutex or Semaphore if you want it to work cross-process (with a slight performance penalty), or if you need to allow some number of simultaneous threads other than one to run your piece of code.
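For the cross-process case, a named Mutex sketch might look like this (the mutex name is illustrative, and the pattern is an adaptation of the question's method, not tested code):

```csharp
private static readonly Mutex crossProcessLock =
    new Mutex(false, @"Global\MyAppRecurrentJob"); // name is illustrative

public void DoRecurentJob()
{
    // WaitOne(0) returns immediately instead of blocking, mirroring the
    // Monitor.TryEnter behaviour above, but across process boundaries.
    if (!crossProcessLock.WaitOne(0))
        return;
    try
    {
        // Do work
    }
    finally
    {
        crossProcessLock.ReleaseMutex();
    }
}
```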
There are other signalling constructs that would work, but your example looks like it does the trick, and in a simple and straightforward manner.
Minor nit: if the lockObject variable is static, then "this.lockObject" shouldn't compile. It also feels slightly odd (and should at least be heavily documented) that although this is an instance method, it has distinctly type-wide behaviour as well. Possibly make it a static method which takes an instance as the parameter?
Does it actually use the instance data? If not, make it static. If it does, you should at least return a boolean to say whether or not you did the work with the instance - I find it hard to imagine a situation where I want some work done with a particular piece of data, but I don't care if that work isn't performed because some similar work was being performed with a different piece of data.
I think it should work, but it does feel a little odd. I'm not generally a fan of using manual locking, just because it's so easy to get wrong - but this does look okay. (You need to consider asynchronous exceptions between the "if" and the "try" but I suspect they won't be a problem - I can't remember the exact guarantees made by the CLR.)
I think Microsoft recommends using the lock statement, instead of using the Monitor class directly. It gives a cleaner layout and ensures the lock is released in all circumstances.
public class MyClass
{
    // Used as a lock context
    private readonly object myLock = new object();

    public void DoSomeWork()
    {
        lock (myLock)
        {
            // Critical code section
        }
    }
}
If your application requires the lock to span all instances of MyClass you can define the lock context as a static field:
private static readonly object myLock = new object();
The code is fine, but I would agree with changing the method to be static, as it conveys the intention better. It feels odd that all instances of a class share a method that runs synchronized between them, yet that method isn't static.
Remember you can always make the static synchronized method protected or private, leaving it visible only to instances of the class.
public class MyClass
{
    public void AccessResource()
    {
        OneAtATime(this);
    }

    private static void OneAtATime(MyClass instance)
    {
        if (!Monitor.TryEnter(lockObject))
            // ...
This is a good solution, although I'm not really happy with the static lock. Right now you're not waiting for the lock, so you won't get into trouble with deadlocks. But making locks too visible can easily get you into trouble the next time you have to edit this code. Also, this isn't a very scalable solution.
I usually try to make all the resources I try to protect from being accessed by multiple threads private instance variables of a class and then have a lock as a private instance variable too. That way you can instantiate multiple objects if you need to scale.
A more declarative way of doing this is using the MethodImplOptions.Synchronized specifier on the method to which you wish to synchronize access:
[MethodImpl(MethodImplOptions.Synchronized)]
public void OneAtATime() { }
However, this method is discouraged for several reasons, most of which can be found here and here. I'm posting this so you won't feel tempted to use it. In Java, synchronized is a keyword, so it may come up when reviewing threading patterns.
We have a similar requirement, with the added requirement that if the long-running process is requested again while it is running, it should enqueue another cycle to run after the current cycle completes. It's similar to this:
https://codereview.stackexchange.com/questions/16150/singleton-task-running-using-tasks-await-peer-review-challenge
private bool queued = false;
private bool running = false;
private readonly object thislock = new object();

void Enqueue()
{
    queued = true;
    while (Dequeue())
    {
        try
        {
            // do work
        }
        finally
        {
            running = false;
        }
    }
}

bool Dequeue()
{
    lock (thislock)
    {
        if (running || !queued)
        {
            return false;
        }
        else
        {
            queued = false;
            running = true;
            return true;
        }
    }
}