How does static code run with multiple threads? - c#

I was reading Threading from within a class with static and non-static methods and I am in a similar situation.
I have a static method that pulls data from a resource and creates some runtime objects based on the data.
static class Worker{
public static MyObject DoWork(string filename){
MyObject mo = new MyObject();
// ... does some work
return mo;
}
}
The method takes a while (in this case it is reading 5-10 MB files) and returns an object.
I want to take this method and use it in a multiple thread situation so I can read multiple files at once. Design issues / guidelines aside, how would multiple threads access this code?
Let's say I have something like this...
class ThreadedWorker {
public void Run() {
Thread t = new Thread(OnRun);
t.Start();
}
void OnRun() {
MyObject mo = Worker.DoWork("somefilename");
mo.WriteToConsole();
}
}
Does the static method run for each thread, allowing for parallel execution?

Yes, the method should be able to run fine in multiple threads. The only thing you should worry about is accessing the same file in multiple threads at the same time.

You should distinguish between static methods and static fields in this case. Each call to a static method will have its own "copy" of the method and its local variables. That means that in your sample, each call will operate on its own MyObject instance, and the calls will have nothing to do with each other. This also means that there is no problem with executing them on different threads.
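To make that concrete, here is a minimal sketch (assuming the Worker and MyObject types from the question) that calls the static method from several threads at once; each call works with its own locals and its own MyObject, so no locking is needed for the method itself:
using System.Collections.Generic;
using System.Threading;
class ParallelExample
{
    static void Main()
    {
        // Hypothetical file names; each thread reads a different file.
        string[] files = { "a.dat", "b.dat", "c.dat" };
        var threads = new List<Thread>();
        foreach (string file in files)
        {
            string f = file; // capture a copy for the closure
            var t = new Thread(() =>
            {
                MyObject mo = Worker.DoWork(f); // each call has its own locals
                mo.WriteToConsole();
            });
            threads.Add(t);
            t.Start();
        }
        threads.ForEach(t => t.Join()); // wait for all reads to finish
    }
}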

If the static method is written to be thread safe, then it can be called from any thread or even passed to a thread pool.
You have to keep in mind that .NET objects don't live on threads (with the exception of structs located on a thread's stack) - paths of execution do. So, if a thread can access an instance of an object, it can call an instance method. Any thread can call a static method, because all it needs to know about is the type of the object.

One thing you should keep in mind when executing static methods concurrently is static fields, which exist only once. So, if the method reads and writes static fields, concurrency issues can occur.
However, there is an attribute called ThreadStaticAttribute which says that each thread gets its own separate copy of the field. This can be helpful in some particular scenarios.
Local variables are separate for each thread, so you don't need to worry about them. But be aware of external resources like files, which can be problematic when accessed concurrently.
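As a small illustration of ThreadStaticAttribute - a minimal sketch with an invented counter field, not taken from the question:
using System;
using System.Threading;
static class Counters
{
    // Each thread sees its own copy of this field, so no locking is needed.
    [ThreadStatic]
    private static int callCount;

    public static void Touch()
    {
        callCount++;
        Console.WriteLine("Thread {0}: {1}",
            Thread.CurrentThread.ManagedThreadId, callCount);
    }
}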

Aside from the code aspect, which has already been answered, you also need to consider the I/O aspect of accessing the file.
A note on architecture and how I have completed this task in the past - not suggesting that this is the one right approach or that it is necessarily appropriate for your application. However, I thought my notes might be helpful for your thought process; a minimal sketch of the arrangement follows these notes:
Set up a ManualResetEvent field, call it ActivateReader or something similar; this will become more obvious further on. Initialize it as unsignaled (false).
Set up a boolean field, call it TerminateReaderThread. Initialize it as false; again, this will become more obvious further on.
Set up a Queue<string> field, call it Files and initialize it.
My main application thread checks to see if there's a lock on the files queue before writing each of the relevant file paths into it. Once a file path has been written, the reset event is set, indicating to the queue-reader thread that there are unread files in the queue.
I then set up a thread to act as a queue reader. This thread waits for the ManualResetEvent to be set using the WaitOne() method - a blocking call that unblocks once the event is set. Once it unblocks, the thread checks whether a shutdown has been initiated (by checking the TerminateReaderThread field). If a shutdown has been initiated, the thread shuts down gracefully; otherwise it reads the next item from the queue and spawns a worker thread to process the file. I then lock the queue before checking whether there are any items left. If no items are left, I reset the ManualResetEvent, which will pause our thread on the next go-around. I then unlock the queue so the main thread can continue writing to it.
Each instance of the worker thread attempts to gain an exclusive lock on the file it was initiated with until some timeout elapses. If the lock is successful, it processes the file; if it's unsuccessful, it either retries as necessary or throws an exception and terminates itself. In the event of an exception, the thread can add the file to the end of the queue so another thread can pick it up again at a later point. Be aware that if you do this, then you need to consider the endless loop an I/O read issue could cause. In such an event, a dictionary of failed files with counters of how many times they've failed could be useful, so that once some limit is reached you stop re-adding the file to the end of the queue.
Once my application decides the reader thread is no longer needed, it sets the TerminateReaderThread field to true. Next time the reader thread cycles to the start of its process, its shutdown process will be activated.
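A minimal sketch of that queue-reader arrangement, with hypothetical names (ActivateReader, TerminateReaderThread, Files) matching the notes above; the per-file worker and error handling are stubbed out:
using System.Collections.Generic;
using System.Threading;
class FileQueueReader
{
    private readonly ManualResetEvent ActivateReader = new ManualResetEvent(false);
    private readonly Queue<string> Files = new Queue<string>();
    private volatile bool TerminateReaderThread = false;

    // Called by the main thread to enqueue work.
    public void AddFile(string path)
    {
        lock (Files) { Files.Enqueue(path); }
        ActivateReader.Set(); // wake the reader thread
    }

    // Called when the reader is no longer needed.
    public void Shutdown()
    {
        TerminateReaderThread = true;
        ActivateReader.Set(); // unblock the reader so it can exit
    }

    // Runs on the dedicated reader thread.
    public void ReadLoop()
    {
        while (true)
        {
            ActivateReader.WaitOne(); // block until there is work
            if (TerminateReaderThread) return; // graceful shutdown

            string next = null;
            lock (Files)
            {
                if (Files.Count > 0) next = Files.Dequeue();
                if (Files.Count == 0) ActivateReader.Reset(); // pause on the next pass
            }
            if (next != null)
                new Thread(() => ProcessFile(next)).Start(); // hypothetical worker
        }
    }

    private void ProcessFile(string path)
    {
        // Open with an exclusive lock, process, and retry or re-queue on failure.
    }
}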

The static method will run on whichever thread calls it. That is fine as long as your function is re-entrant, meaning execution can safely re-enter the function while execution from another thread (or further up the same stack) is already inside it.
Since your function is static, you can't access instance members, which would be one way of making it not re-entrant. If you had a static field that maintained state between calls, that would be another way of making it not re-entrant.
Each time you enter you create a new MyObject, so each flow of execution is dealing with its own MyObject instance, which is good. It means they won't be trying to access the same object at the same time (which would lead to race conditions).
The only thing you're sharing between multiple calls is the Console itself. If you call it on multiple threads, their output will be interleaved on the console. And you could potentially act on the same file (in your example the filename is hard-coded), but you'd probably be acting on multiple files. Successive threads would likely fail to open the file if previous ones still have it open.


Is it possible to modify an object passed as a parameter to another thread in C#? [duplicate]

Is there any way in C# to put objects in another thread? All I found is how to execute some methods in another thread. What I actually want to do is to instantiate an object in a new thread for later use of the methods it provides.
Hope you can help me,
Russo
Objects do not really belong to a thread. If you have a reference to an object, you can access it from many threads.
This can give problems with objects that are not designed to be accessed from many threads, like (almost all) System.Windows.Forms classes, and access to COM objects.
If you only want to access an object from the same thread, store a reference to the thread in the object (or a wrapping object), and execute the methods via that thread.
There seems to be some confusion about how threads work here, so this is a primer (very short too, so you should find more material before venturing further into multi-threaded programming.)
Objects and memory are inherently multi-thread in the sense that all threads in a process can access them as they choose.
So objects do not have anything to do with threads.
However, code executes in a thread, and it is the thread the code executes in that you're probably after.
Unfortunately there is no way to just "put an object into a different thread" as you put it, you need to specifically start a thread and specify what code to execute in that thread. Objects used by that code can thus be "said" to belong to that thread, though that is an artificial limit you impose yourself.
So there is no way to do this:
SomeObject obj = new SomeObject();
obj.PutInThread(thatOtherThread);
obj.Method(); // this now executes in that other thread
In fact, a common trap many new multi-thread programmers fall into is thinking that if they create an object in one thread and call methods on it from another thread, all those methods execute in the thread that created the object. This is incorrect; methods always execute in the thread that called them.
So the following is also incorrect:
Thread 1:
SomeObject obj = new SomeObject();
Thread 2:
obj.Method(); // incorrect assumption: executes in Thread 1
The method here will execute in Thread 2. The only way to get the method to execute in the original thread is to cooperate with the original thread and "ask it" to execute that method. How you do that depends on the situation and there's many many ways to do this.
So to summarize what you want: You want to create a new thread, and execute code in that thread.
To do that, look at the Thread class of .NET.
But be warned: Multi-threaded applications are exceedingly hard to get correct, I would not add multi-threaded capabilities to a program unless:
That is the only way to get more performance out of it
And, you know what you're doing
All threads of a process share the same data (ignoring thread local storage) so there is no need to explicitly migrate objects between threads.
internal sealed class Foo
{
private Object bar = null;
private void CreateBarOnNewThread()
{
var thread = new Thread(this.CreateBar);
thread.Start();
// Do other stuff while the new thread
// creates our bar.
Console.WriteLine("Doing crazy stuff.");
// Wait for the other thread to finish.
thread.Join();
// Use this.bar here...
}
private void CreateBar()
{
// Creating a bar takes a long time.
Thread.Sleep(1000);
this.bar = new Object();
}
}
All threads can see the same heap, so if the thread has a reference to the objects you need (passed in through a method, for example), then the thread can use those objects. This is why you have to be very careful accessing objects when multi-threading, as two threads might try to change the object at the same time.
There is a ThreadLocal<T> class in .NET that you can use to restrict variables to a specific thread: see http://msdn.microsoft.com/en-us/library/dd642243.aspx and http://www.c-sharpcorner.com/UploadFile/ddoedens/UseThreadLocals11212005053901AM/UseThreadLocals.aspx
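A quick, hedged sketch of ThreadLocal<T> (available from .NET 4.0; the names here are invented for the example) showing that each thread gets its own value:
using System;
using System.Threading;
class ThreadLocalDemo
{
    // Each thread that touches this gets its own independent value.
    private static readonly ThreadLocal<int> perThreadCounter =
        new ThreadLocal<int>(() => 0);

    static void Main()
    {
        ThreadStart work = () =>
        {
            perThreadCounter.Value++;
            Console.WriteLine("Thread {0} sees {1}",
                Thread.CurrentThread.ManagedThreadId, perThreadCounter.Value);
        };
        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
    }
}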
Use ParameterizedThreadStart to pass an object to your thread.
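For example, a minimal sketch using ParameterizedThreadStart (the payload string and method names are made up for illustration):
using System;
using System.Threading;
class ParameterizedStartDemo
{
    static void Main()
    {
        // ParameterizedThreadStart lets you hand one object to the thread at Start().
        var t = new Thread(new ParameterizedThreadStart(Work));
        t.Start("some payload");
        t.Join();
    }

    static void Work(object state)
    {
        string payload = (string)state; // cast back to the expected type
        Console.WriteLine("Received: " + payload);
    }
}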
"for later use of the methods it provides."
If you put the method to execute on the new thread into a class, together with its data and any other methods, your original thread can keep a reference to that class and so reach its data and methods.
But... if you simply call a method on that class, it executes on the current thread, not the new one.
Getting the method to run on the new thread requires some thread synchronization.
System.Windows.Forms.Control.BeginInvoke does exactly this: the control's thread waits until a request arrives and then executes it.
The WaitHandle class can also help you here.
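As a rough illustration of the Control.BeginInvoke route - a hedged sketch assuming a Windows Forms form with an invented statusLabel; the background work is simulated with a Sleep:
using System;
using System.Threading;
using System.Windows.Forms;
public class StatusForm : Form
{
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top };

    public StatusForm()
    {
        Controls.Add(statusLabel);
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Do the slow work on a background thread...
        new Thread(() =>
        {
            Thread.Sleep(1000); // simulate work
            // ...then ask the UI thread to run the update for us.
            statusLabel.BeginInvoke(new Action(() => statusLabel.Text = "Done"));
        }) { IsBackground = true }.Start();
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new StatusForm());
    }
}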
There's a lot of jargon around threading, but it boils down to something pretty simple.
For a simple program, you have one point of execution flowing from point A to B, one line at a time. Programming 101, right?
OK, for multithreading, you now have more than one point of execution in your program. So, point 1 can be in one part of your program, and point 2 can be someplace else.
It's all the same memory, data and code, but you have more than one thing happening at a time. So think: what happens if both points enter a loop at the same time? Techniques were created to keep that kind of issue from happening, or to speed up some kind of process (counting a value vs., say, networking).
That's all it really is. It can be tricky to manage, and it's easy to get lost in the jargon and theory, but keep this in mind and it will be much simpler.
There are other exceptions to the rule as always, but this is the basics of it.
If the method that you run in a thread resides in a custom class, you can have members of this class hold the parameters.
public class Foo
{
object parameter1;
object parameter2;
public void ThreadMethod()
{
...
}
}
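A brief usage sketch of that idea - assuming (hypothetically) that Foo is given a constructor that stores the two parameters before the thread starts:
using System.Threading;
class Program
{
    static void Main()
    {
        // Hypothetical variant of Foo whose parameter fields are set via a constructor.
        var foo = new Foo("some input", 42);
        var t = new Thread(foo.ThreadMethod); // instance method: no arguments needed at Start()
        t.Start();
        t.Join();
    }
}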
Sorry to duplicate some previous work, but the OP said
What I actually want to do is to instanciate an object in a new thread for later use of the methods it provides.
Let me interpret that as:
What I actually want to do is have a new thread instantiate an object so that later I can use that object's methods.
Pray correct me if I've missed the mark. Here's the example:
namespace silly
{
public static class Program
{
//declared volatile to make sure the object is in a consistent state
//between thread usages -- For thread safety.
public static volatile Object_w_Methods _method_provider = null;
static void Main(string[] args)
{
//right now, _method_provider is null.
System.Threading.Thread _creator_thread = new System.Threading.Thread(
new System.Threading.ThreadStart(Create_Object));
_creator_thread.Name = "Thread for creation of object";
_creator_thread.Start();
//here I can do other work while _method_provider is created.
System.Threading.Thread.Sleep(256);
_creator_thread.Join();
//by now, the other thread has created the _method_provider
//so we can use his methods in this thread, and any other thread!
System.Console.WriteLine("I got the name!! It is: `" +
_method_provider.Get_Name(1) + "'");
System.Console.WriteLine("Press any key to exit...");
System.Console.ReadKey(true);
}
static void Create_Object()
{
System.Threading.Thread.Sleep(512);
_method_provider = new Object_w_Methods();
}
}
public class Object_w_Methods
{
//Synchronize because it will probably be used by multiple threads,
//even though the current implementation is thread safe.
[System.Runtime.CompilerServices.MethodImpl(
System.Runtime.CompilerServices.MethodImplOptions.Synchronized)]
public string Get_Name(int id)
{
switch (id)
{
case 1:
return "one is the name";
case 2:
return "two is the one you want";
default:
return "supply the correct ID.";
}
}
}
}
Just to elaborate on a previous answer: to get back to the problem, objects and memory space are shared by all threads. So they are always shared, but I am assuming you want to share them safely and work with results created by another thread.
Firstly, try one of the trusted C# patterns: Async Patterns
There are set patterns to work with that transmit basic messages and data between threads.
Usually one thread completes after it computes its results!
Live threads: nothing is foolproof when going asynchronous and sharing data with threads that are still running.
So basically, keep it as simple as possible if you do need to go this route, and try to follow known patterns.
Now I would like to elaborate on why some of the known patterns have a certain structure:
EventArgs: create a deep copy of the objects before passing them. (It is not foolproof, because certain references might still be shared.)
Passing results with basic types like ints, floats, etc.: these can be set in a constructor and made immutable.
Atomic operations on these types, or monitors, etc. Stick to one thread reading and the other writing.
Assuming you have complex data you would like to work with on two threads simultaneously, here is a completely different way to solve this, which I have not yet tested:
You could store results in a database and let the other executable read them. (Locks there occur at the row level, but you can retry or change the SQL code, and at least you will get reported deadlocks that can be solved with good design, rather than just hanging software!) I would only do this if it actually makes sense to store the data in a database for other reasons.
Another approach that helps is to program in F#. There, objects and all types are immutable by default, so the objects you want to share should have a constructor and no methods that allow the object to be changed or basic types to be incremented.
So you create them and then they don't change - they are immutable after that.
That makes locking them and working with them in parallel so much easier. Don't go crazy with this in C# classes, because others might follow this "convention" and most things like Lists were just not designed to be immutable in C# (readonly is not the same as immutable; const is, but it is very limiting). Immutable versus readonly

Multi-threaded synchronization AND UI thread synchronization. Can it be done?

I am making a library that allows access to a system-wide shared resource and would like a mutex-like lock on it. I have used the Mutex class in the past to synchronize operations in different threads or processes.
In UI applications a problem can occur. The library I'm making is used in multiple products, of which some are plugins that sit in the same host UI application. Because of this, the UI thread is the same for each instance of the library - so mutex.WaitOne() will return true even if the resource is already being accessed.
The 'resource' is the user's attention. I don't want more than one specific child window open regardless of which host process wants to open it. Additionally, it may be a different thread that knows when the mutex can be released (child window closed).
Is there a class, or pattern I can apply, that will allow me to easily solve this?
To summarize my intentions, this might be the ideal fictional class:
var specialMutex = new SpecialMutex("UserToastNotification");
specialMutex.WaitOne(0); // Returns true only once, even on the same thread,
// and is respected across different processes.
specialMutex.Release(); // Can be called from threads other than the one
// that called WaitOne();
Yes, Release looks dangerous, but it's only called by the resource.
I think you want a Semaphore that has an initial value of 1. Any call to WaitOne() on a Semaphore tries to decrement the count, regardless of the thread. And any call to Release, regardless of the thread that calls it, results in incrementing the count.
So if a single thread initializes a semaphore with a value of 1 and then calls WaitOne, the count will go to 0. If that same thread calls WaitOne again on the same semaphore, the thread will lock waiting for a release.
Some other thread could come along and call Release to increment the count.
So, whereas a Semaphore isn't exactly like a Mutex, it might be similar enough to let your program work.
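A minimal sketch of that idea, using a Semaphore with an initial count of 1; note that a named semaphore would be needed to make it work across processes, and the name below is just the fictional one from the question:
using System;
using System.Threading;
class SemaphoreAsMutex
{
    // Named semaphore: other processes opening the same name share the count.
    private static readonly Semaphore gate =
        new Semaphore(1, 1, "Global\\UserToastNotification");

    static void Main()
    {
        if (gate.WaitOne(0)) // succeeds only while the count is 1
        {
            Console.WriteLine("Window may be shown.");
            // Later, possibly from a different thread than the one that waited:
            new Thread(() => gate.Release()).Start();
        }
        else
        {
            Console.WriteLine("Already shown elsewhere; do nothing.");
        }
    }
}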
You could use a compare/exchange operation to accomplish this. Something like this:
class Lock {
int locked = 0;
public bool Enter() { return Interlocked.CompareExchange(ref locked, 1, 0) == 0; }
public void Leave() { Interlocked.CompareExchange(ref locked, 0, 1); }
}
Here, Enter will only ever return true once, regardless of which thread it is called from, until you call Leave.

Best practices for moving objects to a separate thread

We have an implementation for an Ultrasound machine application where the Ultrasound object is currently created on the UI thread. A Singleton implementation would have been good here, but regardless, isn't.
Recently, the set methods changed such that they automatically stop and restart the ultrasound machine, which can take between 10 and 100ms depending on the state of the machine. For most cases this isn't too bad a problem; however, it still causes the UI thread to block for up to 100ms. Additionally, these methods are not thread-safe and must be called on the same thread where the object was initialized.
The largest issue this is now causing is unresponsive buttons in the UI, especially sliders, which may try to update variables many times as you slide the bar. As a result, sliders especially will stutter and update very slowly as they make many set calls through data-bound properties.
What is a good way to create a thread specifically for the creation and work for this Ultrasound object, which will persist through the lifetime of the application?
A current temporary workaround involves spawning a Timer and invoking a parameter update once we have detected that the slider hasn't moved for 200ms. However, a Timer would then have to be implemented for every slider, which seems like a very messy solution: it fixes the unresponsive sliders but still blocks the UI thread occasionally.
One thing that's really great about programming the GUI is that you don't have to worry about multiple threads mucking things up for you (assuming you've got CheckForIllegalCrossThreadCalls = true, as you should). It's all single-threaded, operating by means of a message pump (queue) that processes incoming messages one-by-one.
Since you've indicated that you need to synchronize method calls that are not written to be thread-safe (totally understandable), there's no reason you can't implement your own message pump to deal with your Ultrasound object.
A naive, very simplistic version might look something like this (the BlockingCollection<T> class is great if you're on .NET 4.0 or have installed Rx extensions; otherwise, you can just use a plain vanilla Queue<T> and do your own locking). Warning: this is just a quick skeleton I've thrown together just now; I make no promises as to its robustness or even correctness.
class MessagePump<T>
{
// In your case you would set this to your Ultrasound object.
// You could just as easily design this class to be "object-agnostic";
// but I think that coupling an instance to a specific object makes it clearer
// what the purpose of the MessagePump<T> is.
private T _obj;
private BlockingCollection<Action<T>> _workItems;
private Thread _thread;
public MessagePump(T obj)
{
_obj = obj;
// Note: the default underlying data store for a BlockingCollection<T>
// is a FIFO ConcurrentQueue<T>, which is what we want.
_workItems = new BlockingCollection<Action<T>>();
_thread = new Thread(ProcessQueue);
_thread.IsBackground = true;
_thread.Start();
}
public void Submit(Action<T> workItem)
{
_workItems.Add(workItem);
}
private void ProcessQueue()
{
for (;;)
{
Action<T> workItem = _workItems.Take();
try
{
workItem(_obj);
}
catch
{
// Put in some exception handling mechanism so that
// this thread is always running. One idea would be to
// raise an event containing the Exception object on a
// threadpool thread. You definitely don't want to raise
// the event from THIS thread, though, since then you
// could hit ANOTHER exception, which would defeat the
// purpose of this catch block.
}
}
}
}
Then what would happen is: every time you want to interact with your Ultrasound object in some way, you do so through this message pump, by calling Submit and passing in some action that works with your Ultrasound object. The Ultrasound object then receives all messages sent to it synchronously (by which I mean, one at a time), while operating on its own non-GUI thread.
You should maintain a dedicated UltraSound thread, which creates the UltraSound object and then listens for callbacks from other threads.
You should maintain a thread-safe queue of delegates and have the UltraSound thread repeatedly execute and remove the first delegate in the queue.
This way, the UI thread can post actions to the queue, which will then be executed asynchronously by the UltraSound thread.
I'm not sure I fully understand the setup, but here is my attempt at a solution:
How about having the event handler for the slider check the time of the last event and wait 50ms before processing a user adjustment (processing only the most recent value)?
Then have a thread running a while loop that waits on an AutoResetEvent triggered from the GUI; it would then create the object and apply the setting.

How to force multiple commands to execute in same threading timeslice?

I have a C# app that needs to do a hot swap of a data input stream to a new handler class without breaking the data stream.
To do this, I have to perform multiple steps in a single thread without any other thread (most of all the data-receiving thread) running in between them due to CPU switching.
This is a simplified version of the situation but it should illustrate the problem.
void SwapInputHandler(Foo oldHandler, Foo newHandler)
{
UnhookProtocol(oldHandler);
HookProtocol(newHandler);
}
These two lines (unhook and hook) must execute in the same CPU time slice to prevent any packets from getting through in case another thread executes in between them.
How can I make sure that these two commands run sequentially using C# threading methods?
edit
There seems to be some confusion so I will try to be more specific. I didn't mean concurrently as in executing at the same time, just in the same cpu time slice so that no thread executes before these two complete. A lock is not what I'm looking for because that will only prevent THIS CODE from being executed again before the two commands run. I need to prevent ANY THREAD from running before these commands are done. Also, again I say this is a simplified version of my problem so don't try to solve my example, please answer the question.
Performing the operation in a single time slice will not help at all - the operation could just execute on another core or processor in parallel and access the stream while you perform the swap. You will have to use locking to prevent everybody from accessing the stream while it is in an inconsistent state.
Your data receiving thread needs to lock around accessing the handler pointer and you need to lock around changing the handler pointer.
Alternatively if your handler is a single variable you could use Interlocked.Exchange() to swap the value atomically.
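For example, a minimal hedged sketch of swapping a single handler reference atomically; Foo follows the question's type name, while the Handle method, the field names and the receive callback are invented for the illustration:
using System.Threading;
// Minimal stand-in for the question's handler type.
class Foo { public virtual void Handle(byte[] packet) { } }
class Receiver
{
    private Foo currentHandler = new Foo(); // read by the receive path, written by the swapper

    public void SwapInputHandler(Foo newHandler)
    {
        // Atomically publish the new handler; the old one comes back for cleanup.
        Foo oldHandler = Interlocked.Exchange(ref currentHandler, newHandler);
        // Unhook/dispose oldHandler here if needed; no packet ever sees a missing handler.
    }

    public void OnDataReceived(byte[] packet)
    {
        Foo handler = currentHandler; // local snapshot so the handler can't change mid-call
        handler.Handle(packet);       // Handle() is a hypothetical method name
    }
}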
Why not go at this from another direction, and let the thread in question handle the swap. Presumably, something wakes up when there's data to be handled, and passes it off to the current Foo. Could you post a notification to that thread that it needs to swap in a new handler the next time it wakes up? That would be much less fraught, I'd think.
Okay - to answer your specific question.
You can enumerate through all the threads in your process and call Thread.Suspend() on each one (except the active one), make the change and then call Thread.Resume().
Assuming your handlers are thread safe, my recommendation is to write a public wrapper over your handlers that does all the locking it needs using a private lock so you can safely change the handlers behind the scenes.
If you do this you can also use a ReaderWriterLockSlim, for accessing the wrapped handlers which allows concurrent read access.
Or you could architect your wrapper class and handler classes in such a way that no locking is required and the handler swapping can be done using a simple interlocked write or compare-exchange.
Here's an example:
public interface IHandler
{
void Foo();
void Bar();
}
public class ThreadSafeHandler : IHandler
{
ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
IHandler wrappedHandler;
public ThreadSafeHandler(IHandler handler)
{
wrappedHandler = handler;
}
public void Foo()
{
try
{
rwLock.EnterReadLock();
wrappedHandler.Foo();
}
finally
{
rwLock.ExitReadLock();
}
}
public void Bar()
{
try
{
rwLock.EnterReadLock();
wrappedHandler.Bar();
}
finally
{
rwLock.ExitReadLock();
}
}
public void SwapHandler(IHandler newHandler)
{
try
{
rwLock.EnterWriteLock();
UnhookProtocol(wrappedHandler);
HookProtocol(newHandler);
wrappedHandler = newHandler; // remember the new handler so subsequent calls use it
}
finally
{
rwLock.ExitWriteLock();
}
}
}
Take note that this is still not thread safe if atomic operations are required across several of the handler's methods. In that case you would need higher-order locking between threads, or you could add methods on your wrapper class to support thread-safe atomic operations (something like BeginThreadSafeBlock() followed by EndThreadSafeBlock(), which lock the wrapped handler for writing for a series of operations).
You can't and it's logical that you can't. The best you can do is avoid any other thread from disrupting the state between those two actions (as have already been said).
Here is why you can't:
Imagine there was a block that told the operating system never to thread-switch while you're inside that block. That would be technically possible, but it would lead to starvation everywhere.
You might think your threads are the only ones being used, but that's an unwise assumption. There's the garbage collector, there are async operations that work with thread-pool threads, and an external reference, such as a COM object, could spawn its own thread (in your memory space), so that no one could make progress while you're inside such a block.
Imagine you perform a very long operation in your HookProtocol method. It involves a lot of allocations that don't leak but, as the garbage collector can't take over to free your resources, you end up without any memory left. Or imagine you call a COM object that uses multithreading to handle your request... but it can't start the new threads (well, it can start them, but they never get to run) and then joins them, waiting for them to finish before coming back... and therefore you join on yourself, never returning!
As other posters have already said, you can't enforce system-wide critical section from user-mode code. However, you don't need it to implement the hot swapping.
Here is how.
Implement a proxy with the same interface as your hot-swappable Foo object. The proxy shall call HookProtocol and never unhook (until your app is stopped). It shall contain a reference to the current Foo handler, which you can replace with a new instance when needed. The proxy shall direct the data it receives from hooked functions to the current handler. Also, it shall provide a method for atomic replacement of the current Foo handler instance (there is a number of ways to implement it, from simple mutex to lock-free).
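A hedged sketch of that proxy idea; Foo and HookProtocol follow the question, while the Handle method and other member names are invented here:
using System;
using System.Threading;
// Minimal stand-in for the question's handler type.
class Foo { public virtual void Handle(byte[] data) { } }
class FooProxy
{
    private Foo current; // the live handler; replaced atomically on swap

    public FooProxy(Foo initial)
    {
        current = initial;
        HookProtocol(Dispatch); // hook once; never unhook until the app stops
    }

    // The data stream calls this; it forwards to whichever handler is current.
    private void Dispatch(byte[] data)
    {
        Foo handler = current; // snapshot so the handler can't change mid-call
        handler.Handle(data);  // Handle() is a hypothetical method name
    }

    // Atomically replace the current handler; returns the old one.
    public Foo Swap(Foo replacement)
    {
        return Interlocked.Exchange(ref current, replacement);
    }

    private void HookProtocol(Action<byte[]> callback)
    {
        // Stub: register the callback with the real data source here.
    }
}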

How can a child thread notify a parent thread of its status/progress?

I have a service responsible for many tasks, one of which is to launch jobs (one at a time) on a separate thread (threadJob child). These jobs can take a fair amount of time and have various phases to them which I need to report back.
Every so often a calling application requests the status from the service (GetStatus), which means the service needs to know at what point the job (child thread) is at. My hope was that at certain milestones the child thread could somehow inform (SetStatus) the parent thread (service) of its status, and the service could return that information to the calling application.
For example - I was looking to do something like this:
class Service
{
private Thread threadJob;
private int JOB_STATUS;
public Service()
{
JOB_STATUS = "IDLE";
}
public void RunTask()
{
threadJob = new Thread(new ThreadStart(PerformWork));
threadJob.IsBackground = true;
threadJob.Start();
}
public void PerformWork()
{
SetStatus("STARTING");
// do some work //
SetStatus("PHASE I");
// do some work //
SetStatus("PHASE II");
// do some work //
SetStatus("PHASE III");
// do some work //
SetStatus("FINISHED");
}
private void SetStatus(int status)
{
JOB_STATUS = status;
}
public string GetStatus()
{
return JOB_STATUS;
}
};
So, when a job needs to be performed, RunTask() is called and this launches the thread (threadJob). This will run, perform some steps (using SetStatus to set the new status at various points) and finally finish. There is also the function GetStatus(), which should return the status whenever requested (from a calling application using IPC) - this status should reflect the current status of the job running on threadJob.
So, my problem is simple enough...
How can threadJob (or more specifically PerformWork()) report the change in status back to Service in a thread-safe manner (I assume my example above of SetStatus/GetStatus is unsafe)? Do I need to use events? I assume I cannot simply change JOB_STATUS directly... Should I use a LOCK (if so, on what?)...
You may have already looked into this, but the BackgroundWorker class gives you a nice interface for running tasks on background threads, and provides events to hook into for notifications that progress has changed.
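A hedged sketch of the BackgroundWorker approach; the phase strings and timings are illustrative, not taken from the question:
using System;
using System.ComponentModel;
using System.Threading;
class JobRunner
{
    static void Main()
    {
        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            worker.ReportProgress(0, "STARTING");
            Thread.Sleep(200); // placeholder for real work
            worker.ReportProgress(50, "PHASE I");
            Thread.Sleep(200);
            worker.ReportProgress(100, "FINISHED");
        };

        // Raised on the creating thread when a synchronization context exists
        // (e.g. a UI thread); otherwise on a thread-pool thread.
        worker.ProgressChanged += (s, e) =>
            Console.WriteLine("{0}% - {1}", e.ProgressPercentage, (string)e.UserState);

        worker.RunWorkerAsync();
        Console.ReadLine(); // keep the process alive while the job runs
    }
}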
You could create an event in the Service class and then invoke it in a thread-safe manner. Pay very close attention to how I have implemented the SetStatus method.
class Service
{
public delegate void JobStatusChangeHandler(string status);
// Event add/remove auto implemented code is already thread-safe.
public event JobStatusChangeHandler JobStatusChange;
public void PerformWork()
{
SetStatus("STARTING");
// stuff
SetStatus("FINISHED");
}
private void SetStatus(string status)
{
JobStatusChangeHandler snapshot;
lock (this)
{
// Get a snapshot of the invocation list for the event handler.
snapshot = JobStatusChange;
}
// This is threadsafe because multicast delegates are immutable.
// If you did not extract the invocation list into a local variable then
// the event may have all delegates removed after the check for null which
// which would result in a NullReferenceException when you attempt to invoke
// it.
if (snapshot != null)
{
snapshot(status);
}
}
}
I'd have the child thread raise a 'statusupdate' event, passing a struct with the information necessary for the parent and have the parent subscribe to it when launching it.
You can use the Event-Based Async Pattern.
I would go with a delegate/event from the thread to the caller. If the caller is a UI or something along those lines, I would be nice to the message pump and use appropriate Invoke()s to serialize notifications onto the UI's thread when required.
I once wrote an app that needed a marker showing the progress a thread was making. I just used a shared global variable between them. The parent would just read the value, and the thread would just update it. No need to synchronize as only the parent read it, and only the child wrote it atomically. As it happened the parent was redrawing things frequently enough anyhow that it didn't even need to be poked by the child when the child updated the variable. Sometimes the simplest possible way works well.
Your current code mixes strings and ints for JOB_STATUS, which can't work. I'm assuming strings here, but it doesn't really matter, as I'll explain.
Your current implementation is thread safe in the sense that no memory corruption will occur, since all assignments to reference-type fields are guaranteed to be atomic. The CLR demands this, otherwise you could potentially access unmanaged memory if you could somehow observe partially updated references. Your processor gives you that atomicity for free, however.
So as long as you're using reference types like strings, you won't get any memory corruption. The same is true for primitives like ints (and smaller) and enums based on them. (Just avoid longs and bigger, and non-primitive value types such as nullable integers.)
But, that is not the end of the story: this implementation is not guaranteed to always represent the current state. The reason for this is that the thread that calls GetStatus might be looking at a stale copy of the JOB_STATUS field, because the assignment in SetStatus contains no so-called memory barrier. That is: the new value for JOB_STATUS need not be sent to your main RAM right away. There are several reasons why this can be delayed:
Writing to main RAM is inherently slow (relatively speaking), which is the reason your processor has all kinds of buffers and L-something caches in the first place, so the processor usually delays memory synchronization. Not for very long, but it will probably delay. This can be quite noticeable on multicore processors, as these usually have separate caches per core.
The JIT might have stored the value of JOB_STATUS in a register earlier on, as part of some optimization strategy. Again, registers are far more efficient to use than your main RAM. However, this does mean that it might not see changes early enough, as it's still looking at the old copy in the register. (We're not talking minutes here, but still.)
So, if you want to be 100% certain that each thread & processor core is immediately aware of the changed status, declare your field as volatile:
private volatile int JOB_STATUS;
Now, GetStatus/SetStatus, without any locking constructs, is truly thread safe, as volatile demands that the value is read from and written to main RAM immediately (or something 100% equivalent, if the processor can do that more efficiently).
Note that if you don't declare your field as volatile, you must use synchronization primitives such as lock, and generally speaking you need to use the synchronization primitive in both Get and Set, otherwise you won't solve the problem that volatile fixes.
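For completeness, a minimal hedged sketch of that lock-based alternative; the field names follow the question's sample, and the private lock object is invented here:
class Service
{
    private readonly object statusLock = new object();
    private string JOB_STATUS = "IDLE";

    private void SetStatus(string status)
    {
        lock (statusLock) { JOB_STATUS = status; } // lock around the write...
    }

    public string GetStatus()
    {
        lock (statusLock) { return JOB_STATUS; } // ...and around the read
    }
}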
Mind you, as you're doing IPC calls to get the status, I'd wager that you won't ever actually be able to observe any difference between non-volatile and volatile, given the overhead of the IPC calls and the thread synchronizations undoubtedly performed behind the scenes.
For more information on volatile, see volatile (C#) on MSDN.
