C# NamedPipe thread safety on async calls

I wonder how to guarantee thread-safety on pipes during async operations.
For example, this code is being executed by a thread (the stream has been created properly beforehand):
pipeClientStream.ConnectAsync(cancel).Wait();
Meanwhile, another thread wants to know if the pipe is (already) connected:
bool isConnected = pipeClientStream.IsConnected;
I didn't find any note about thread safety in the Microsoft docs, but I guess if it were thread-safe there would be a hint. I also dived into the reference source of Pipe.cs and PipeStream.cs to look for locks, but there were none to be found.
So in the end the only "safe" way would be to encapsulate the pipe and have the thread that works with it set a dedicated, lock-protected boolean flag.
What do you think is the proper way of dealing with such scenarios?

The proper way to deal with most non-thread-safe objects is to lock them:
lock (pipeClientStream)
{
    return pipeClientStream.IsConnected;
}
Locking a shared resource like this is fine, but it is also common to use a separate object to control access to the resource. If you are using actual asynchronous code you might need some other solution.
Do not try to create your own "lock" by using any kind of shared boolean. There is a huge risk you will get something wrong and end up with non-thread-safe code.
There are also named mutexes if you need synchronization between processes, but that is not needed if you keep to a single process.
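For illustration, a minimal sketch of the "separate object" approach mentioned above, assuming a NamedPipeClientStream field; the wrapper class and member names are made up:
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

// All access to the pipe goes through one dedicated lock object
// instead of an ad-hoc boolean flag.
class PipeClientWrapper
{
    private readonly object _sync = new object();
    private readonly NamedPipeClientStream _pipe;

    public PipeClientWrapper(NamedPipeClientStream pipe)
    {
        _pipe = pipe;
    }

    public bool IsConnected
    {
        get { lock (_sync) { return _pipe.IsConnected; } }
    }

    public Task ConnectAsync(CancellationToken cancel)
    {
        // Note: the lock only covers starting the operation, not awaiting it;
        // truly asynchronous flows may need a different coordination mechanism.
        lock (_sync) { return _pipe.ConnectAsync(cancel); }
    }
}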

Related

C# Threading - Using a class in a thread-safe way vs. implementing it as thread-safe

Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
Does this mean that I am using the class in a thread-safe way, and that the documentation's statement that it is not thread-safe
is no longer relevant to my situation?
If the answer is no: can I do everything related to a specific object in the same thread, i.e. create it and call its members always in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app.)
No, it is not thread safe. As a general rule, you should never write multi-threaded code without some kind of synchronization. In your first example, even if you somehow manage to ensure that modifying and reading are never done at the same time, there is still the problem of cached values and instruction reordering.
For example, the CPU caches a value in a register: you update it on one thread and read it from another. If the second thread has it cached, it doesn't go to RAM to fetch it and doesn't see the updated value.
Take a look at this great post for more info on the problems with writing lock-free multi-threaded code: link. It has a great explanation of how the CPU, the compiler, and the CLI byte-code compiler can reorder instructions.
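As a small illustration of the stale-read problem described above, a sketch with an assumed flag field; without volatile (or a lock) the worker loop may never see the write made by the other thread:
using System.Threading;

class StaleReadDemo
{
    // 'volatile' prevents the JIT from caching _done in a register,
    // so the spinning thread observes the write below.
    private volatile bool _done;

    public void Run()
    {
        var worker = new Thread(() =>
        {
            while (!_done) { /* spin until the flag is published */ }
        });
        worker.Start();

        Thread.Sleep(100);   // stand-in for real work
        _done = true;        // volatile write: visible to the worker thread
        worker.Join();
    }
}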
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe).
"Thread-safe" has a number of different meanings. Most objects fall into one of three categories:
Thread-affine. These objects can only be accessed from a single thread, never from another thread. Most UI components fall into this category.
Thread-safe. These objects can be accessed from any thread at any time. Most synchronization objects (including concurrent collections) fall into this category.
One-at-a-time. These objects can be accessed from one thread at a time. This is the "default" category, with most .NET types falling into this category.
Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
As another answerer noted, you have to take into consideration instruction reordering and cached reads. In other words, it's not sufficient to just do these at different times; you'll need to implement proper barriers to ensure it is guaranteed to work correctly.
The easiest way to do this is to protect all access of the object with a lock statement. If all reads, writes, and method calls are all within the same lock, then this would work (assuming the object does have a one-at-a-time kind of threading model and not thread-affine).
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
All classes are by default not thread-safe, except for a few collections like the concurrent collections, which are designed specifically for thread safety. So for any other class you choose, if you access it from multiple threads or in a non-atomic manner, whether reading or writing, it's imperative to introduce thread safety when changing the state of an object. This only applies to objects whose state can be modified in a multi-threaded environment; methods as such are just functional implementation, they are not themselves state that can be modified, so synchronization is only needed in them to maintain the object's state.
Does this mean that I am using the class in a thread-safe way, and that the documentation's statement that it is not thread-safe is no longer relevant to my situation? If the answer is no: can I do everything related to a class in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app.)
For a UI application, consider introducing async-await for I/O-bound operations, like file or database reads, and use the TPL for compute-bound operations. The benefits of async-await are (see the sketch after this list):
It doesn't block the UI thread at all and keeps the UI completely responsive; in fact, after the await, UI controls can be updated directly with no cross-thread concern, since only one thread is involved.
TPL concurrency, in contrast, runs blocking compute-bound operations on threads summoned from the thread pool, which can't be used to update the UI due to cross-thread concerns.
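As a sketch of the first point, a hypothetical WPF code-behind fragment; LoadButton_Click, ResultText and data.txt are made-up names, not from the original question:
using System.IO;
using System.Windows;

public partial class MainWindow : Window
{
    private async void LoadButton_Click(object sender, RoutedEventArgs e)
    {
        using (var reader = new StreamReader("data.txt"))
        {
            // The read runs asynchronously without blocking the UI thread;
            // after 'await' execution resumes on the UI thread, so the control
            // can be updated directly with no cross-thread concern.
            ResultText.Text = await reader.ReadToEndAsync();
        }
    }
}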
And last: there are classes in which one method starts an operation, and another one ends it. For example, using the SpeechRecognitionEngine class you can start a speech recognition session with RecognizeAsync (this method was before the TPL library so it does not return a Task), and then cancel the recognition session with RecognizeAsyncCancel. What if I call RecognizeAsync from one thread and RecognizeAsyncCancel from another one? (It works, but is it "safe"? Will it fail on some conditions which I'm not aware of?)
As you have mentioned the Async method, this might be an older implementation based on APM, which needs an AsyncCallback to coordinate, something along the lines of BeginXXX / EndXXX; if that's the case, then nothing much is required to coordinate, as they use an AsyncCallback to execute a callback delegate. In fact, as mentioned earlier, there's no extra thread involved here, whether it's the old version or the new async-await. Regarding cancellation, a CancellationTokenSource can be used with async-await; a separate cancellation task is not required. Coordination between multiple threads can be done via AutoResetEvent / ManualResetEvent.
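A minimal sketch of cooperative cancellation with CancellationTokenSource; DoWorkAsync is a made-up stand-in for any cancellable async call:
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationSketch
{
    public static async Task RunAsync()
    {
        // Cancel automatically after 5 seconds (or call cts.Cancel() manually).
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
        {
            try
            {
                await DoWorkAsync(cts.Token);
            }
            catch (OperationCanceledException)
            {
                // The operation observed the token and stopped cooperatively.
            }
        }
    }

    private static async Task DoWorkAsync(CancellationToken token)
    {
        for (int i = 0; i < 100; i++)
        {
            token.ThrowIfCancellationRequested();
            await Task.Delay(100, token);
        }
    }
}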
If the calls mentioned above are synchronous, then wrap them in a Task and call them from an async method as follows:
await Task.Run(() => RecognizeAsync())
Though it's a sort of anti-pattern, it can be useful for making the whole call chain async.
Edits (to answer OP questions)
Thanks for your detailed answer, but I didn't understand some of it. At the first point you are saying that "it's imperative to introduce thread safety", but how?
Thread safety is introduced using synchronization constructs like lock, mutex, semaphore, monitor, and Interlocked; all of them serve the purpose of protecting an object from corruption and race conditions.
Are the steps I have taken, as described in my post, enough?
I don't see any thread-safety steps in your post; please highlight which steps you are talking about.
At the second point I'm asking how to use an object in the same thread all the time (whenever I use it). Async-Await has nothing to do with this, AFAIK.
Async-await is the only concurrency mechanism which, since it doesn't involve any extra thread besides the calling thread, can ensure everything always runs on the same thread, because it uses I/O completion ports (hardware-based concurrency). If you use the Task Parallel Library instead, there's no way for you to ensure that the same given thread is always used, as that's a very high-level abstraction.
Check out one of my recent detailed answers on threading here; it may help in providing some more detailed aspects.
It is not thread-safe, as the technical risk exists, but your policy is designed to cope with the problem and work around the risk. So, if things stand as you described, then you do not have a thread-safe environment; however, you are safe. For now.

Can a method from a singleton object be called from multiple threads at the same time?

I have a component registered in Castle Windsor as a singleton. This object is being used in many other places within my application which is multithreaded.
Is it possible that two objects will invoke the same method on that singleton at the same time, or will the call be blocked until the previous caller gets its result?
Thanks
You can call a singleton object's method from different threads at the same time, and they will not be blocked if there is no locking/synchronization code. The threads will not wait for the others to produce a result and will execute the method just as they would execute methods on separate objects.
This is due to the fact that each thread has a separate stack and a different set of local variables. The rest of the method just describes what needs to be done with the data held in those variables/fields.
What you might want to take care of is whether the methods on the singleton object access any static methods or fields/variables. In that case you might need to work on the synchronization part. You would need to synchronize multi-threaded access to the shared resources for the execution of the method to be reliable.
To synchronize, you might need to use the lock statement or other forms of thread-synchronization techniques.
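For illustration, a minimal sketch of a singleton whose method can run on many threads at once, with only the shared state guarded by a lock; the class and member names are made up:
using System.Collections.Generic;

public sealed class RequestCounter
{
    public static RequestCounter Instance { get; } = new RequestCounter();
    private RequestCounter() { }

    private readonly object _sync = new object();
    private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();

    public void Record(string key)
    {
        // Parameters and locals are per-thread and need no locking;
        // only the shared dictionary must be serialized.
        lock (_sync)
        {
            _counts.TryGetValue(key, out int current);
            _counts[key] = current + 1;
        }
    }
}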
You might want to refer to this article from Wikipedia which provides information on C# thread local storage as well.
You can call the same method or different methods on one object simultaneously from different threads. In the specific methods you'll need to know when sensitive variables are being accessed (mostly when member variables are changing their values) and will need to implement locking on your own, in order to avoid lost updates and other anomalies.
You can lock a part of the code with the lock statement, and here is an article on how thread synchronization works in .NET.
The normal version of the Singleton pattern may not be thread-safe; you can see different implementations of a thread-safe singleton here:
http://tutorials.csharp-online.net/Singleton_design_pattern:_Thread-safe_Singleton

Multi-threading concept and lock in c#

I have read about lock, but I haven't understood it at all.
My question is: why do we use an otherwise unused object and lock that, and how does this make something thread-safe or help with multi-threading? Isn't there another way to make code thread-safe?
public class test
{
    private object Lock { get; set; }
    ...
    lock (this.Lock) { ... }
    ...
}
Sorry if my question is very stupid, but I don't understand it, although I've used lock many times.
Accessing a piece of data from one thread while another thread is modifying it is called a "data race condition" (or just "data race") and can lead to corruption of data. (*)
Locks are simply a mechanism for avoiding data races. If two (or more) concurrent threads lock the same lock object, then they are no longer concurrent and can no longer cause data races, for the duration of the lock. Essentially, we are serializing the access to shared data.
The trick is to keep your locks as "wide" as you must to avoid data races, yet as "narrow" as you can to gain performance through concurrent execution. This is a fine balance that can easily go out of whack in either direction, which is why multi-threaded programming is hard.
Some guidelines:
As long as all threads are just reading the data and none will ever modify it, a lock is unnecessary.
Conversely, if at least one thread might at some point modify the data, then all concurrent code paths accessing that same data must be properly serialized through locks, even those that only read the data.
Using a lock in one code path but not the other will leave the data wide open to race conditions.
Also, using one lock object in one code path, but a different lock object in another (concurrent) code path does not serialize these code paths and leaves you wide open to data races.
On the other hand, if two concurrent code paths access different data, they can use different lock objects. But, whenever there is more than one lock object, watch out for deadlocks. A deadlock is often also a "code race condition" (and a heisenbug, see below).
The lock object does not need to be (and usually isn't) the same thing as the data you are trying to protect. Unfortunately, there is no language facility that lets you "declare" which data is protected by which lock object, so you'll have to very carefully document your "locking convention" both for other people that might maintain your code, and for yourself (since even after a short time you will forget some of the nooks and crannies of your locking convention).
It's usually a good idea to protect the lock object from the outside world as much as you can. After all, you are using it for the very sensitive task of locking and you don't want it locked by external actors in unforeseen ways. That's why using this or a public field as a lock object is usually a bad idea.
The lock keyword is simply a more convenient syntax for Monitor.Enter and Monitor.Exit.
The lock object can be any object in .NET, but value types will be boxed in the call to Monitor.Enter, which means threads will not share the same lock object, leaving the data unprotected. Therefore, only use reference types as lock objects.
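For example, a minimal sketch of the usual pattern: a private, dedicated, reference-type lock object guarding the shared data (names are illustrative):
using System.Collections.Generic;

public class EventLog
{
    // Dedicated, private, reference-type lock object: not 'this', not public,
    // and never a (boxed) value type.
    private readonly object _sync = new object();
    private readonly List<string> _entries = new List<string>();

    public void Add(string entry)
    {
        lock (_sync) { _entries.Add(entry); }
    }

    public int Count
    {
        get { lock (_sync) { return _entries.Count; } }
    }
}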
For inter-process communication you can use a global mutex, which can be created by passing a non-empty name to Mutex Constructor. Global mutexes provide essentially the same functionality as regular "local" locking, except they can be shared between separate processes.
There are synchronization mechanisms other than locks, such as semaphores, condition variables, message queues or atomic operations. Be careful when mixing different synchronization mechanisms.
Locks also behave as memory barriers, which is increasingly important on modern multi-core, multi-cache CPUs. This is part of the reason why you need locks on reading the data and not just writing.
(*) It is called "race" because concurrent threads are "racing" towards performing an operation on the shared data and whoever wins that race determines the outcome of the operation. So the outcome depends on timing of the execution, which is essentially random on modern preemptive multitasking OSes. Worse yet, timing is easily modified by a simple act of observing the program execution through tools such as debugger, which makes them "heisenbugs" (i.e. the phenomenon being observed is changed by the mere act of observation).
A lock object is like a door into a single room where only one guest at a time can enter.
The room can be your data, the guest can be your function.
define data (room)
add door (lock object)
invite guests (functions)
using the lock instruction, close/open the door to allow only one guest at a time into the room.
Why do we need this? If you simultaneously write data to a file (just an example; it could be thousands of others), you will need to sync the access of your functions (close/open the door for guests) to the file write, so that each function appends to the end of the file (assuming that is the requirement of this example).
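A minimal sketch of that file example, assuming a shared log file path; the class name is made up:
using System;
using System.IO;

class SharedLogFile
{
    private readonly object _door = new object();   // the "door" from the analogy
    private readonly string _path = "log.txt";      // assumed file path

    public void Append(string line)
    {
        lock (_door)   // only one "guest" writes at a time
        {
            File.AppendAllText(_path, line + Environment.NewLine);
        }
    }
}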
This is naturally not the only way to sync threads; there are more out there:
Monitors
Wait handles
...
Check out the link for complete information and a description of each of them:
Thread Synchronization
Yes, there is indeed another way:
using System.Runtime.CompilerServices;

class Test
{
    private object Lock { get; set; }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Foo()
    {
        // Now this instance is locked
    }
}
While it looks more "natural", it's not used often, because the object is locking on itself this way, so other code cannot safely lock on this object: it could cause a deadlock.
Because of this, you usually create a (lazy-initialized) private field referring to an object, and use that object as a lock instead. This will guarantee that no one else can lock against the same object as you.
A little more detail on what's happening beneath the hood:
When you "lock on an object", you're not locking on the object itself. Rather, you're using the object as a guaranteed-to-be-unique-address-in-memory throughout your program. When you "lock", the runtime takes the object's address, uses it to look up the actual lock inside another table (which is hidden from you), and uses that object as the ""lock" (also known as a "critical section").
So really, for you, an object is just a proxy/symbol -- it isn't doing anything by itself; it's just acting as a unique indicator that will never clash with another valid object in the same program.
When you have different threads accessing the same variable/resource at the same time, they may overwrite it and you can get unexpected results. A lock will make sure only one thread can access the variable at a time; the remaining threads will queue for access to this variable/resource until the lock is released.
Suppose we have a balance variable for an account.
Two different threads read its value, which was 100.
Suppose the first thread adds 50 to it, as 100 + 50, and saves it, so the balance is 150.
The second thread had already read 100; suppose it subtracts 50, as 100 - 50, but the point to note here is that the first thread has already made the balance 150, so the second thread should do 150 - 50. This could cause serious problems.
So a lock makes sure that when one thread wants to change some resource's state, it locks it and only leaves after committing the change.
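A sketch of that balance example with the read-modify-write guarded by a lock, so one thread's update cannot be lost (class and member names are illustrative):
public class Account
{
    private readonly object _sync = new object();
    private int _balance = 100;

    // Without the lock, two threads could both read 100 and one of the
    // two updates would be lost; the lock makes read-modify-write atomic.
    public void Add(int amount)
    {
        lock (_sync) { _balance = _balance + amount; }
    }

    public void Subtract(int amount)
    {
        lock (_sync) { _balance = _balance - amount; }
    }
}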
The lock statement introduces the concept of mutual exclusion. Only one thread can acquire a lock on a given object at any one time. This prevents threads from accessing shared data structures concurrently, thus corrupting them.
If other threads already hold a lock, the lock statement will block until it is able to acquire an exclusive lock on its argument before allowing its block to execute.
Note that the only thing lock does is control entry to the block of code. Access to members of the class is completely unrelated to the lock. It is up to the class itself to ensure that accesses that must be synchronized are coordinated by the use of lock or other synchronization primitives. Also note that access to some or all members may not have to be synchronized. For instance, if you want to maintain a counter, you could use the Interlocked class without locking.
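For instance, a minimal counter sketch using Interlocked instead of a lock (names are illustrative):
using System.Threading;

class HitCounter
{
    private int _count;

    // Interlocked.Increment performs the read-modify-write as a single
    // atomic operation, so no lock is needed for this simple counter.
    public void Hit() => Interlocked.Increment(ref _count);

    public int Count => Volatile.Read(ref _count);
}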
An alternative to locking is lock-free data structures, which behave correctly in the presence of multiple threads. Operations on lock-free data structures must be designed very carefully, usually with the assistance of lock-free primitives such as compare-and-swap (CAS).
The general theme of such techniques is to try to perform operations on data structures atomically and detect when operations fail due to concurrent actions by other threads, followed by retries. This works well on a lightly loaded system where failures are unlikely, but can produce runaway behaviour as the failure rate climbs and retries become a dominant load. This problem can be ameliorated by backing off the retry rate, effectively throttling the load.
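A minimal sketch of such a compare-and-swap retry loop, here keeping a running maximum (the helper and its names are made up for illustration):
using System.Threading;

static class LockFreeMax
{
    // Read the current value, compute the new one, and publish it only if
    // no other thread changed the field in the meantime; otherwise retry.
    public static void Update(ref int max, int candidate)
    {
        int observed;
        do
        {
            observed = Volatile.Read(ref max);
            if (candidate <= observed) return;   // nothing to do
        }
        while (Interlocked.CompareExchange(ref max, candidate, observed) != observed);
    }
}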
A more sophisticated alternative is software transactional memory. Unlike CAS, STM generalizes the concept of fail-and-retry to arbitrarily complex memory operations. In simple terms, you start a transaction, perform all your operations, and finally commit. The system detects if the operations cannot succeed due to conflicting operations performed by other threads that beat the current thread to the punch. In such cases, STM can either fail outright, requiring the application to take corrective action, or, in more sophisticated implementations, it can automatically go back to the start of the transaction and try again.
Your confusion is pretty typical for those just getting familiar with the lock keyword in C#. You are right, the object used in the lock statement is really nothing more than a token that defines a critical section. That object, in no way, has any protection from multithreaded access itself.
The way this works is that the CLR reserves a 4-byte (on 32-bit systems) field in the object header called the sync block index. The sync block is nothing more than an index into an array that stores the actual critical-section information. When you use the lock keyword the CLR will modify this sync block value accordingly.
There are advantages and disadvantages to this scheme. The advantage is that it made for a fairly elegant solution to defining critical sections. One obvious disadvantage is that each object instance contains the sync block and most instances never use it so it would seem to be a waste of space in most cases. Another disadvantage is that boxed value types can be used which is almost always wrong and certainly leads to confusion.
I remember way back when .NET was first released that there was a lot of chatter over whether the lock keyword was good or bad for the language. The general consensus (at least as I remember it) was that it was bad because the using keyword could have been easily used instead. In fact, a solution that used the using keyword would actually have made more sense because it could have been done without the need for the sync block. The C# design team even went on record to say that, had they been given a second chance, the lock keyword never would have made it into the language.1
1The only reference I could find for this is on Jon Skeet's website here.

Forms GUI Video thread communications

I am creating a custom video player in a C# form. At present the player has initialising and shutdown routines and a thread running in the background reading video frames and displaying them. I am fairly new to C#, so I am trying to establish how best to send the start/stop/pause commands from the GUI thread to the video thread. Should I just use a state variable protected with a lock and poll this every time round my video thread; are there
any other recommendations?
Thanks.
A polled state variable would seem the easiest solution, providing your video thread loops regularly enough.
You may not even need a lock; making the state variable volatile should be sufficient in C#, providing only one thread updates it. (volatile in C# has slightly different semantics than in C, and should guarantee that the other thread picks up the new value.)
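A rough sketch of that approach, with made-up names; the flags are written by the GUI thread and polled by the video thread:
using System.Threading;

class VideoPlayback
{
    // Written only by the GUI thread, read by the video thread;
    // 'volatile' ensures the video thread sees the latest values.
    private volatile bool _paused;
    private volatile bool _stopRequested;

    public void Pause()  => _paused = true;
    public void Resume() => _paused = false;
    public void Stop()   => _stopRequested = true;

    // Runs on the background video thread.
    private void PlaybackLoop()
    {
        while (!_stopRequested)
        {
            if (!_paused)
            {
                // read and display the next frame here
            }
            Thread.Sleep(16);   // roughly one frame interval
        }
    }
}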
Several ways are possible. As you are new to C# and probably tightly coupled to the UI, I suggest you use the BackgroundWorker class.
http://msdn.microsoft.com/en-us/library/cc221403(v=vs.95).aspx
You can pass your arguments using the DoWorkEventArgs of the DoWork event.
Also, with such an approach and without objects shared across threads, you can avoid using locks or synchronization.
I think it could be the best solution for you, but there are alternatives. You can use the Asynchronous Programming Model (APM), or even Thread/ThreadPool, or the Task Parallel Library.
Should I just use a state variable protected with a lock and poll this every time round my video thread; are there any other recommendations?
If you have shared state, as with a video thread, then you should use thread synchronization. So the answer is yes, you should use some protected variable. You can avoid locking by using just volatile, but consider using other sync primitives, because volatile only ensures that you are reading/writing the most recent value; it doesn't prevent other threads from reading/writing.
Some links to help you choose whether to use lock (or other primitives) or just volatile:
Do I need to lock or mark as volatile when accessing a simple boolean flag in C#?
Volatile vs. Interlocked vs. lock
You should be able to call start/stop/pause on the DirectShow filter graph without limitations. This would cause respective method calls on your source filter (for more information, see Filter States). The source filter does need to notify the background thread about the state change, if this has not been done already.
The synchronization can be implemented in the same way as in the DirectShow classes, where two AutoResetEvent instances are used in the filter: one to notify the background thread about a new request, and one to notify the calling thread that the request has been processed.
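A rough sketch of that two-event pattern (the class and member names are made up, not taken from the DirectShow classes):
using System.Threading;

class RequestResponseSync
{
    private readonly AutoResetEvent _requestPending = new AutoResetEvent(false);
    private readonly AutoResetEvent _requestDone = new AutoResetEvent(false);

    // Called from the controlling thread, e.g. on a state change.
    public void SubmitRequestAndWait()
    {
        _requestPending.Set();     // wake the background thread
        _requestDone.WaitOne();    // block until it acknowledges
    }

    // Background thread loop.
    public void WorkerLoop()
    {
        while (true)
        {
            _requestPending.WaitOne();  // sleep until a request arrives
            // handle the state change here
            _requestDone.Set();         // let the caller continue
        }
    }
}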

What is the difference between lock and Mutex?

What is the difference between lock and Mutex? Why can't they be used interchangeably?
A lock is specific to the AppDomain, while a Mutex belongs to the operating system, allowing you to perform inter-process locking and synchronization (IPC).
lock is a compiler keyword, not an actual class or object. It's a wrapper around the functionality of the Monitor class and is designed to make the Monitor easier to work with for the common case.
The Monitor (and the lock keyword) are, as Darin said, restricted to the AppDomain. Primarily because a reference to a memory address (in the form of an instantiated object) is required to manage the "lock" and maintain the identity of the Monitor
The Mutex, on the other hand, is a .Net wrapper around an operating system construct, and can be used for system-wide synchronization, using string data (instead of a pointer to data) as its identifier. Two mutexes that reference two strings in two completely different memory addresses, but having the same data, will actually utilize the same operating-system mutex.
A Mutex can be either local to a process or system-wide. MSDN:
Mutexes are of two types: local mutexes, which are unnamed, and named system mutexes. A local mutex exists only within your process.
Furthermore, one should take special care - detailed on the same page as well - when using a system-wide mutex on a system with Terminal Services.
One of the differences between Mutex and lock is that Mutex utilizes a kernel-level construct, so synchronization will always require at least a user space-kernel space transition.
lock - that is really a shortcut to the Monitor class, on the other hand tries to avoid allocating kernel resources and transitioning to kernel code (and is thus leaner & faster - if one has to find a WinAPI construct that it resembles, it would be CriticalSection).
The other difference is what others point out: a named Mutex can be used across processes.
Unless one has special needs or requires synchronization across processes, it is just better to stick to lock (aka Monitor).
There are several other "minor" differences, like how abandonment is handled, etc.
The same can be said about ReaderWriterLock and ReaderWriterLockSlim in 3.5, Semaphore and the new SemaphoreSlim in .NET 4.0 etc.
It is true that the latter xxSlim classes cannot be used as a system-wide sync primitives, but they were never meant to - they were "only" meant to be faster and more resource friendly.
I use a Mutex to check whether I already have a copy of the application running on the same machine.
bool firstInstance;
Mutex mutex = new Mutex(false, @"Local\DASHBOARD_MAIN_APPLICATION", out firstInstance);
if (!firstInstance)
{
    // another copy of this application is running
}
else
{
    // run main application loop here.
}
// Refer to the mutex down here so garbage collection doesn't chuck it out.
GC.KeepAlive(mutex);
A lot has been said already, but to make it simple, here's my take.
lock -> simple to use, a wrapper over Monitor, locks across threads within an AppDomain.
unnamed mutex -> similar to lock, except the locking scope is larger: it works across AppDomains within a process.
named mutex -> the locking scope is even larger than an unnamed mutex: it works across processes in an operating system.
So now the options are there; you need to choose the one that fits your case best.
Mutex is cross-process, and the classic example is not running more than one instance of an application.
A second example: say you have a file and you don't want different processes to access it at the same time; you can use a Mutex, but remember one thing: a Mutex is operating-system-wide and cannot be used between two remote processes.
Lock is the simplest way to protect a section of your code and it is AppDomain-specific; you can replace lock with Monitor if you want more controlled synchronization (see the sketch below).
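As an example of the extra control Monitor gives you over lock, a sketch using Monitor.TryEnter with a timeout (class and method names are illustrative):
using System;
using System.Threading;

class MonitorTimeoutExample
{
    private readonly object _sync = new object();

    // Unlike 'lock', Monitor.TryEnter lets the caller give up after a
    // timeout instead of blocking indefinitely.
    public bool TryDoWork()
    {
        bool taken = false;
        try
        {
            Monitor.TryEnter(_sync, TimeSpan.FromMilliseconds(500), ref taken);
            if (!taken) return false;   // could not acquire the lock in time
            // protected work here
            return true;
        }
        finally
        {
            if (taken) Monitor.Exit(_sync);
        }
    }
}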
A few more minor differences which were not mentioned in the other answers:
In the case of using locks, you can be sure that the lock will be released when an exception happens inside the lock's block.
That's because the lock uses monitors under the hood and is implemented this way:
object __lockObj = x;
bool __lockWasTaken = false;
try
{
    System.Threading.Monitor.Enter(__lockObj, ref __lockWasTaken);
    // Your code...
}
finally
{
    if (__lockWasTaken) System.Threading.Monitor.Exit(__lockObj);
}
Thus, in any case, the lock is released, and you don't need to release it manually (as you would for a mutex).
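For comparison, a sketch of the equivalent manual pattern for a Mutex, which must be released explicitly (names are illustrative):
using System.Threading;

class MutexRelease
{
    private static readonly Mutex _mutex = new Mutex();

    public static void DoProtectedWork()
    {
        _mutex.WaitOne();              // acquire
        try
        {
            // protected work here
        }
        finally
        {
            _mutex.ReleaseMutex();     // unlike 'lock', release is manual
        }
    }
}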
For locks, you usually use (and should use) a private object to lock on.
This is done for many reasons. (More info: see this answer and official documentation).
So, in case of locks, you can't (accidentally gain) access to the locked object from the outside and cause some damage.
But in case of Mutex, you can, as it's common to have a Mutex which is marked public and used from anywhere.
The Lock and Monitors are basically used to provide thread safety for threads that are generated by the application itself i.e. Internal Threads. On the other hand, Mutex ensures thread safety for threads that are generated by the external applications i.e. External Threads. Using Mutex, only one external thread can access our application code at any given point in time.
read this
