Best practices for moving objects to a separate thread - c#

We have an implementation for an Ultrasound machine application where the Ultrasound object is currently created on the UI thread. A Singleton implementation would have been a good fit here, but regardless, it isn't one.
Recently, the set methods changed so that they automatically stop and restart the ultrasound machine, which can take between 10-100ms depending on the state of the machine. For most cases this isn't too bad a problem; however, it still blocks the UI thread for up to 100ms. Additionally, these methods are not thread-safe and must be called on the same thread where the object was initialized.
The largest issue this is now causing is unresponsive buttons in the UI, especially sliders, which may try to update variables many times as you drag them. As a result, sliders in particular stutter and update very slowly as they make many set calls through data-bound properties.
What is a good way to create a thread specifically for the creation and work for this Ultrasound object, which will persist through the lifetime of the application?
A temporary workaround involves spawning a Timer and invoking a parameter update once we have detected that the slider hasn't moved for 200ms. However, a Timer would then have to be implemented for every slider, which seems like a very messy solution: it fixes the unresponsive sliders, but still blocks the UI thread occasionally.

One thing that's really great about programming the GUI is that you don't have to worry about multiple threads mucking things up for you (assuming you've got CheckForIllegalCrossThreadCalls = true, as you should). It's all single-threaded, operating by means of a message pump (queue) that processes incoming messages one-by-one.
Since you've indicated that you need to synchronize method calls that are not written to be thread-safe (totally understandable), there's no reason you can't implement your own message pump to deal with your Ultrasound object.
A naive, very simplistic version might look something like this (the BlockingCollection<T> class is great if you're on .NET 4.0 or have installed Rx extensions; otherwise, you can just use a plain vanilla Queue<T> and do your own locking). Warning: this is just a quick skeleton I've thrown together just now; I make no promises as to its robustness or even correctness.
using System;
using System.Collections.Concurrent;
using System.Threading;

class MessagePump<T>
{
    // In your case you would set this to your Ultrasound object.
    // You could just as easily design this class to be "object-agnostic";
    // but I think that coupling an instance to a specific object makes it clearer
    // what the purpose of the MessagePump<T> is.
    private T _obj;
    private BlockingCollection<Action<T>> _workItems;
    private Thread _thread;

    public MessagePump(T obj)
    {
        _obj = obj;

        // Note: the default underlying data store for a BlockingCollection<T>
        // is a FIFO ConcurrentQueue<T>, which is what we want.
        _workItems = new BlockingCollection<Action<T>>();

        _thread = new Thread(ProcessQueue);
        _thread.IsBackground = true;
        _thread.Start();
    }

    public void Submit(Action<T> workItem)
    {
        _workItems.Add(workItem);
    }

    private void ProcessQueue()
    {
        for (;;)
        {
            Action<T> workItem = _workItems.Take();
            try
            {
                workItem(_obj);
            }
            catch
            {
                // Put in some exception handling mechanism so that
                // this thread is always running. One idea would be to
                // raise an event containing the Exception object on a
                // threadpool thread. You definitely don't want to raise
                // the event from THIS thread, though, since then you
                // could hit ANOTHER exception, which would defeat the
                // purpose of this catch block.
            }
        }
    }
}
Then what would happen is: every time you want to interact with your Ultrasound object in some way, you do so through this message pump, by calling Submit and passing in some action that works with your Ultrasound object. The Ultrasound object then receives all messages sent to it synchronously (by which I mean, one at a time), while operating on its own non-GUI thread.
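For example, a slider's property setter could hand the work off without blocking. This is a purely hypothetical usage fragment: _pump would be a MessagePump<Ultrasound> created once at startup, and SetDepth/newDepth stand in for whatever parameter your Ultrasound class actually exposes.
// The call returns immediately; the actual device call runs later, on the pump's thread.
_pump.Submit(ultrasound => ultrasound.SetDepth(newDepth));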

You should maintain a dedicated UltraSound thread, which creates the UltraSound object and then listens for callbacks from other threads.
You should maintain a thread-safe queue of delegates and have the UltraSound thread repeatedly execute and remove the first delegate in the queue.
This way, the UI thread can post actions to the queue, which will then be executed asynchronously by the UltraSound thread.
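A minimal sketch of that arrangement might look like the following. The DeviceWorker name and the factory delegate are just for illustration; T would be your UltraSound class, and the important part is that the object is constructed on the worker thread itself, so every call happens on the thread that initialized it.
using System;
using System.Collections.Concurrent;
using System.Threading;

// Generic so the sketch compiles on its own; in your case T would be the UltraSound class,
// e.g. var worker = new DeviceWorker<UltraSound>(() => new UltraSound());
class DeviceWorker<T>
{
    private readonly BlockingCollection<Action<T>> _queue = new BlockingCollection<Action<T>>();

    public DeviceWorker(Func<T> factory)
    {
        Thread thread = new Thread(() =>
        {
            T device = factory();                             // constructed on this thread
            foreach (Action<T> action in _queue.GetConsumingEnumerable())
                action(device);                               // executed one at a time, on this thread
        });
        thread.IsBackground = true;
        thread.Start();
    }

    // Called from the UI thread; returns immediately.
    public void Post(Action<T> action)
    {
        _queue.Add(action);
    }
}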

I'm not sure I fully understand the setup, but here is my attempt at a solution:
How about having the event handler for the slider check the time of the last event and wait 50ms before processing a user adjustment (processing only the most recent value)?
Then have a thread with a while loop that waits on an AutoResetEvent triggered from the GUI; it would then create the object and apply the setting.
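For what it's worth, a rough sketch of that combination might look like this. The names are invented, and the Action<float> delegate stands in for whatever actually applies the value to the device (for instance, a call routed through the worker above).
using System;
using System.Threading;

class SliderDebouncer
{
    private readonly AutoResetEvent _changed = new AutoResetEvent(false);
    private readonly Action<float> _apply;   // e.g. value => worker.Post(us => us.SetGain(value))
    private volatile float _latestValue;

    public SliderDebouncer(Action<float> apply)
    {
        _apply = apply;
        Thread worker = new Thread(Run);
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from the slider's ValueChanged handler on the UI thread; never blocks.
    public void OnValueChanged(float value)
    {
        _latestValue = value;
        _changed.Set();
    }

    private void Run()
    {
        while (true)
        {
            _changed.WaitOne();      // sleep until the slider moves
            Thread.Sleep(50);        // let further moves coalesce (the 50ms suggested above)
            _changed.Reset();        // discard the intermediate signals
            _apply(_latestValue);    // apply only the most recent value, off the UI thread
        }
    }
}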

Related

Marshalling Events Across Threads

I imagine this may be marked as repetitious and closed, but I cannot for the life of me find a clear, concise answer to this question. All the replies and resources deal almost exclusively with Windows Forms and utilizing pre-built utility classes such as BackgroundWorker. I would very much like to understand this concept at its core, so I can apply the fundamental knowledge to other threading implementations.
A simple example of what I would like to achieve:
// timer running on a separate thread and raising events at set intervals
// incomplete, but functional, except for the cross-thread event raising
class Timer
{
    // how often the Alarm event is raised (in seconds)
    float _alarmInterval;
    // stopwatch to keep time
    Stopwatch _stopwatch;
    // this Thread used to repeatedly check for events to raise
    Thread _timerThread;
    // used to pause the timer
    bool _paused;
    // used to determine Alarm event raises
    float _timeOfLastAlarm = 0;

    // this is the event I want to raise on the Main Thread
    public event EventHandler Alarm;

    // Constructor
    public Timer(float alarmInterval)
    {
        _alarmInterval = alarmInterval;
        _stopwatch = new Stopwatch();
        _timerThread = new Thread(new ThreadStart(Initiate));
        _timerThread.Start();
    }

    // toggles the Timer
    // do I need to marshall this data back and forth as well? or is the
    // _paused boolean in a shared data pool that both threads can access?
    public void Pause()
    {
        _paused = (!_paused);
    }

    // little Helper to start the Stopwatch and loop over the Main method
    void Initiate()
    {
        _stopwatch.Start();
        while (true) Main();
    }

    // checks for Alarm events
    void Main()
    {
        if (_paused && _stopwatch.IsRunning) _stopwatch.Stop();
        if (!_paused && !_stopwatch.IsRunning) _stopwatch.Start();
        if (_stopwatch.Elapsed.TotalSeconds > _timeOfLastAlarm + _alarmInterval)
        {
            _timeOfLastAlarm = (float)_stopwatch.Elapsed.TotalSeconds;
            RaiseAlarm();
        }
    }

    // currently raises the event on the timer thread; marshalling this to
    // the main thread is exactly what the question below is asking about
    void RaiseAlarm()
    {
        if (Alarm != null) Alarm(this, EventArgs.Empty);
    }
}
Two questions here. Primarily, how do I get the event to the main thread to alert the interested parties of the Alarm event?
Secondly, regarding the Pause() method, which will be called by an object running on the main thread: can I directly manipulate the Stopwatch that was created on the background thread by calling _stopwatch.Start()/_stopwatch.Stop()? If not, can the main thread adjust the _paused boolean as illustrated above so that the background thread can then see the new value of _paused and use it?
I swear, I've done my research, but these (fundamental and critical) details have not made themselves clear to me yet.
Disclaimer: I am aware that there are classes available that provide the exact functionality I am describing in my Timer class. (In fact, I believe the class is called just that, Threading.Timer.) However, my question is not an attempt to get help implementing the Timer class itself, but rather to understand the concepts that drive it.
Note: I'm writing this here because there's not enough space in the comments; this is of course not a complete answer, nor even half of one:
I've always used Events to signal unrelated code to do something, so that was how I described my intent. Forgive me though, I'm not sure I see the difference between marshaling an event versus marshaling another type of data (signal).
Conceptually both can be treated as events. The difference between using the provided sync/signaling objects and trying to implement something like this yourself is who gets the job done, and how.
An event in .NET is just a delegate: a list of pointers to methods that should be executed when the provider of the event fires it.
What you're talking about (marshalling the event), if I understand you correctly, is sharing the event object when something happens, while the concept of signaling usually involves an object that is shared to start with, and both threads "know" something happened by checking its state, either manually or automatically (relying on the tools provided by both .NET and Windows).
In the most basic scenario, you can implement such a signaling concept with a boolean variable, with one thread constantly looping to check whether the value of the boolean is true, and another setting it to true as a way to signal that something happened. The various signaling tools provided by .NET do this in a less resource-wasting manner, by not running the waiting thread at all as long as there is no signal (the boolean equals false), but conceptually it is the same idea.
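For illustration only, that hand-rolled boolean signal might look like the sketch below (volatile so the looping thread always sees the latest value). The built-in primitives such as AutoResetEvent do the same job without the polling loop.
using System.Threading;

class BooleanSignal
{
    private volatile bool _signal;

    // The waiting thread polls the flag; this polling is the wasteful part
    // that the built-in signaling objects avoid.
    public void WaitForSignal()
    {
        while (!_signal)
            Thread.Sleep(10);
    }

    // Any other thread calls this to say "something happened".
    public void Signal()
    {
        _signal = true;
    }
}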
You cannot magically execute code on an existing thread.
Instead, you need the existing thread to explicitly execute your code, using a thread-safe data structure to tell it what to do.
This is how Control.Invoke works (which is in turn how BackgroundWorker works).
WinForms runs a message loop in Application.Run(), which looks roughly like this:
while (true)
{
    var message = GetMessage(); // Windows API call
    ProcessMessage(message);
}
Control.Invoke() sends a Windows message (using thread-safe message passing code within Windows) telling it to run your delegate. ProcessMessage (which executes on the UI thread) will catch that message and execute the delegate.
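If all you need is to raise the Alarm event back on the thread that created the Timer, you don't have to write the loop yourself. One common approach, sketched below with only the relevant members shown, is to capture that thread's SynchronizationContext and Post the event through it; in WinForms/WPF this rides on exactly the message-passing mechanism described above. It assumes the Timer is constructed on a UI thread, where SynchronizationContext.Current is non-null.
using System;
using System.Threading;

class Timer
{
    private readonly SynchronizationContext _creatorContext;
    public event EventHandler Alarm;

    public Timer(float alarmInterval)
    {
        // Captured on the constructing (UI) thread.
        _creatorContext = SynchronizationContext.Current;
        // ... the rest of the construction as in the question ...
    }

    private void RaiseAlarm()
    {
        // Post queues the delegate onto the captured context's message loop,
        // so the handlers run on the creating thread, not on the timer thread.
        _creatorContext.Post(ignored =>
        {
            EventHandler handler = Alarm;
            if (handler != null) handler(this, EventArgs.Empty);
        }, null);
    }
}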
If you want to do this yourself, you will need to write your own loop. You can use the new thread-safe Producer-Consumer collections in .Net 4.0 for this, or you can use a delegate field (with Interlocked.CompareExchange) and an AutoResetEvent and do it yourself.
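A bare-bones sketch along those lines, using one of the .NET 4.0 producer-consumer collections plus an AutoResetEvent (shutdown and error handling are deliberately omitted):
using System;
using System.Collections.Concurrent;
using System.Threading;

class WorkLoop
{
    private readonly ConcurrentQueue<Action> _work = new ConcurrentQueue<Action>();
    private readonly AutoResetEvent _workAvailable = new AutoResetEvent(false);

    public WorkLoop()
    {
        Thread t = new Thread(Run);
        t.IsBackground = true;
        t.Start();
    }

    // Any thread can call this; the delegate runs later on the loop's thread.
    public void Post(Action action)
    {
        _work.Enqueue(action);
        _workAvailable.Set();
    }

    private void Run()
    {
        while (true)
        {
            _workAvailable.WaitOne();                // sleep until Post() signals
            Action action;
            while (_work.TryDequeue(out action))     // drain everything that is queued
                action();
        }
    }
}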

Best practice for an endless/ periodic execution of code in C#

Often in my code I start threads which basically look like this:
void WatchForSomething()
{
    while (true)
    {
        if (SomeCondition)
        {
            // Raise event to handle condition
            OnSomeCondition();
        }
        Sleep(100);
    }
}
just to check whether some condition is true or not (for example, if I have a badly coded library with no events, just boolean variables, and I need a "live view" of them).
I wonder if there is a better way to accomplish this kind of work, such as a Windows function I can hook into that will run my methods every x seconds. Or should I code a global event for my app, raised every x seconds, and have it call my methods like this:
//Event from Windows or selfmade
TicEvent += new TicEventHandler(WatchForSomething));
and then this method:
void WatchForSomething()
{
    if (SomeCondition)
    {
        // Raise event to handle condition
        OnSomeCondition();
    }
}
So, I hope this is not closed because of being a "subjective question" or something, I just want to know what the best practice for this kind of work is.
There isn't necessarily a "best way" to write long-running event processing code. It depends on what kind of application you are developing.
The first example you show is the idiomatic way in which you would often see the main method of a long-running thread written. While it's generally desirable to use a mutex or waitable event synchronization primitive rather than a call to Sleep() - it is otherwise a typical pattern used to implement event processing loops. The benefit of this approach is that it allows specialized processing to run on a separate thread - allowing your application's main thread to perform other tasks or remain responsive to user input. The downside of this approach is that it may require the use of memory barriers (such as locks) to ensure that shared resources are not corrupted. It also makes it more difficult to update your UI, since you must generally marshal such calls back to the UI thread.
The second approach is often used as well, particularly in systems that already have an event-driven API such as WinForms, WPF, or Silverlight. Using a timer object or Idle event is the typical manner in which periodic background checks can be made if there is no user-initiated event that triggers your processing. The benefit here is that it's easy to interact with and update user interface objects (since they are directly accessible from the same thread), and it mitigates the need for locks and mutexes to protect data. One potential downside of this approach is that if the processing that must be performed is time-consuming, it can make your application unresponsive to user input.
If you are not writing applications that have a user interface (such as services) then the first form is used much more often.
As an aside ... when possible, it's better to use a synchronization object like an EventWaitHandle or Semaphore to signal when work is available to be processed. This allows you to avoid using Thread.Sleep and/or Timer objects. It reduces the average latency between when work is available to be performed and when event processing code is triggered, and it minimizes the overhead of using background threads, since they can be more efficiently scheduled by the runtime environment and won't consume any CPU cycles until there's work to do.
It's also worth mentioning that if the processing you do is in response to communications with external sources (MessageQueues, HTTP, TCP, etc) you can use technologies like WCF to provide the skeleton of your event handling code. WCF provides base classes that make it substantially easier to implement both Client and Server systems that asynchronously respond to communication event activity.
If you have a look at Reactive Extensions, it provides an elegant way of doing this using the observable pattern.
var timer = Observable.Interval(TimeSpan.FromMilliseconds(100));
timer.Subscribe(tick => OnSomeCondition());
A nice thing about observables is the ability to compose and combine further observables from existing ones, and even use LINQ expressions to create new ones. For example, if you wanted to have a second timer that was in sync with the first, but only triggering every 1 second, you could say
var seconds = from tick in timer where tick % 10 == 0 select tick;
seconds.Subscribe(tick => OnSomeOtherCondition());
By the way, Thread.Sleep is probably never a good idea.
A basic problem with Thread.Sleep that people are usually not aware of is that the internal implementation of Thread.Sleep does not pump STA messages. The best and easiest alternative, if you have to wait a given time and can't use a kernel sync object, is to replace Thread.Sleep with Thread.Join on the current thread, with the wanted timeout. Thread.Join will behave the same, i.e. the thread will wait the wanted time, but in the meantime STA messages will be pumped.
Why is this important? (A somewhat detailed explanation follows.)
Sometimes, without you even knowing, one of your threads may have created an STA COM object. (For example, this sometimes happens behind the scenes when you use Shell APIs.) Now suppose a thread of yours has created an STA COM object and is now in a call to Thread.Sleep.
If at some point the COM object has to be deleted (which can happen at an unexpected time via the GC), then the finalizer thread will try calling the object's destructor. That call will be marshalled to the object's STA thread, which is blocked.
Now, in effect, you have a blocked finalizer thread. In this situation objects can't be freed from memory, and bad things will follow.
So the bottom line: Thread.Sleep = bad. Thread.Join = reasonable alternative.
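In code, the suggested replacement is a one-liner (the 500ms here is just an arbitrary example timeout):
// Waits roughly 500 ms on the current thread, but keeps pumping STA messages,
// unlike Thread.Sleep(500).
Thread.CurrentThread.Join(500);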
The first example you show is a rather inelegant way to implement a periodic timer. .NET has a number of timer objects that make this kind of thing almost trivial. Look into System.Windows.Forms.Timer, System.Timers.Timer and System.Threading.Timer.
For example, here's how you'd use a System.Threading.Timer to replace your first example:
System.Threading.Timer MyTimer = new System.Threading.Timer(CheckCondition, null, 100, 100);

void CheckCondition(object state)
{
    if (SomeCondition())
    {
        OnSomeCondition();
    }
}
That code will call CheckCondition every 100 milliseconds (or thereabouts).
You don't provide a lot of background on why you're doing this, or what you're trying to accomplish, but if its possible, you might want to look into creating a windows service.
Use a BackgroundWorker for additional thread-safety measures:
BackgroundWorker bw = new BackgroundWorker();
bw.WorkerSupportsCancellation = true;
bw.WorkerReportsProgress = true;
.
.
.
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = sender as BackgroundWorker;
    for (;;)
    {
        if (worker.CancellationPending == true)
        {
            e.Cancel = true;
            break;
        }
        else
        {
            // Perform a time consuming operation and report progress.
            System.Threading.Thread.Sleep(100);
        }
    }
}
For more info visit: http://msdn.microsoft.com/en-us/library/cc221403%28v=vs.95%29.aspx
A very simple way to wait without blocking other threads/tasks is:
(new ManualResetEvent(false)).WaitOne(500); //Waits 500ms

Threading, how to instantiate multiple threads without using a class structure

What I have is a loop reading some data, and when a set of circumstances is met I need to instantiate a thread. However, the created thread might not complete before the loop criteria are met again and I need to create another thread doing the same thing. This is part of an OCR app and I haven't done much thread work before.
while loop
    if (criteria)
    {
        observer = new BackgroundWorker();
        observer.DoWork += new DoWorkEventHandler(observer_DoObserving);
        observer.RunWorkerAsync();
    }
The observer_DoObserving function calls the OCR app, waits for a response, processes it appropriately, and sets observer = null at the end. So how would I create multiple instances of the 'observer' thread? Of course I instantly thought of a class structure; is this an appropriate way to do it, or is there another way that is better suited to threading?
I hope this makes sense.
Thanks, R.
You could use the thread pool, specifically ThreadPool.
while (something)
{
    if (criteria)
    {
        // QueueUserWorkItem also has an overload that allows you to pass data;
        // that data will then be passed into WorkerMethod when it is called.
        ThreadPool.QueueUserWorkItem(new WaitCallback(WorkerMethod));
    }
}

// ...

private void WorkerMethod(object state)
{
    // do work here
}
How you handle this depends in large part on whether the background thread needs to communicate anything to the main thread when it's done. If the background thread really is "fire and forget", then there's no particular reason why you need to maintain a reference to the observer. So you could write:
while loop
{
    if (criteria)
    {
        BackgroundWorker observer = new BackgroundWorker();
        observer.DoWork += new DoWorkEventHandler(observer_DoObserving);
        observer.RunWorkerAsync();
    }
}
The thread does its work and goes away. observer is a local variable that goes out of scope when execution leaves the if block. There's no way that the variable will be overwritten if you have to start another observer thread before the first one is finished.
If you need to keep track of information for individual observers, then you'd create an object of some type (a class that you define) that contains information about the worker's state, and pass that to the RunWorkerAsync method; the worker receives it through DoWorkEventArgs.Argument. The worker can then modify that object and send you progress notifications (see the ProgressChanged event and the ReportProgress method), and also report its status when it has finished (see RunWorkerCompleted; assign the state object to DoWorkEventArgs.Result and it will come back in RunWorkerCompletedEventArgs.Result).
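A hedged sketch of that round trip follows; the ObserverState class and the OCR-ish field names are invented for illustration. The argument passed to RunWorkerAsync arrives in DoWorkEventArgs.Argument, progress flows through ReportProgress/ProgressChanged, and whatever DoWork puts in e.Result shows up in RunWorkerCompletedEventArgs.Result.
using System;
using System.ComponentModel;

class ObserverExample
{
    class ObserverState
    {
        public string FileName;        // what to OCR
        public string RecognizedText;  // filled in by the worker
    }

    static void StartObserver()
    {
        BackgroundWorker observer = new BackgroundWorker();
        observer.WorkerReportsProgress = true;

        observer.DoWork += (sender, e) =>
        {
            ObserverState state = (ObserverState)e.Argument;   // runs on the worker thread
            ((BackgroundWorker)sender).ReportProgress(50);     // "halfway there"
            state.RecognizedText = "...";                      // result of the OCR call would go here
            e.Result = state;
        };

        // Both events below are marshalled through the SynchronizationContext that was
        // current when RunWorkerAsync was called (the UI thread in WinForms/WPF).
        observer.ProgressChanged += (sender, e) =>
            Console.WriteLine("Progress: " + e.ProgressPercentage + "%");

        observer.RunWorkerCompleted += (sender, e) =>
            Console.WriteLine(((ObserverState)e.Result).RecognizedText);

        observer.RunWorkerAsync(new ObserverState { FileName = "page1.tif" });
    }
}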
I am not entirely able to grasp what you are doing exactly, so I may or may not be helpful here.
You seem to be asking, in part, whether it's appropriate to create a class to hold some data indicating the state of a thread or what it's working on. That is entirely appropriate to do, provided the object is not an 'expensive' one to create (no creating Exception objects and throwing them around all the time, for instance).

How to force multiple commands to execute in same threading timeslice?

I have a C# app that needs to do a hot swap of a data input stream to a new handler class without breaking the data stream.
To do this, I have to perform multiple steps on a single thread without any other thread (above all, the data receiving thread) running in between them due to CPU switching.
This is a simplified version of the situation but it should illustrate the problem.
void SwapInputHandler(Foo oldHandler, Foo newHandler)
{
    UnhookProtocol(oldHandler);
    HookProtocol(newHandler);
}
These two lines (unhook and hook) must execute in the same cpu slice to prevent any packets from getting through in case another thread executes in between them.
How can I make sure that these two commands run sequentially, using C# threading methods?
edit
There seems to be some confusion, so I will try to be more specific. I didn't mean concurrently as in executing at the same time, just in the same CPU time slice, so that no other thread executes before these two complete. A lock is not what I'm looking for, because that will only prevent THIS CODE from being executed again before the two commands run. I need to prevent ANY THREAD from running before these commands are done. Also, again, this is a simplified version of my problem, so don't try to solve my example; please answer the question.
Performing the operation in a single time slice will not help at all - the operation could just execute on another core or processor in parallel and access the stream while you perform the swap. You will have to use locking to prevent everybody from accessing the stream while it is in an inconsistent state.
Your data receiving thread needs to lock around accessing the handler pointer and you need to lock around changing the handler pointer.
Alternatively if your handler is a single variable you could use Interlocked.Exchange() to swap the value atomically.
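For example, assuming a field such as private Foo _currentHandler (the field name is illustrative), the swap itself is one line:
// Atomically publishes the new handler and returns the one it replaced.
Foo previousHandler = Interlocked.Exchange(ref _currentHandler, newHandler);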
Why not go at this from another direction, and let the thread in question handle the swap. Presumably, something wakes up when there's data to be handled, and passes it off to the current Foo. Could you post a notification to that thread that it needs to swap in a new handler the next time it wakes up? That would be much less fraught, I'd think.
Okay - to answer your specific question.
You can enumerate through all the threads in your process and call Thread.Suspend() on each one (except the active one), make the change and then call Thread.Resume().
Assuming your handlers are thread safe, my recommendation is to write a public wrapper over your handlers that does all the locking it needs using a private lock so you can safely change the handlers behind the scenes.
If you do this you can also use a ReaderWriterLockSlim for accessing the wrapped handlers, which allows concurrent read access.
Or you could architect your wrapper class and handler classes in such a way that no locking is required and the handler swapping can be done using a simple interlocked write or compare-exchange.
Here's an example:
public interface IHandler
{
    void Foo();
    void Bar();
}

public class ThreadSafeHandler : IHandler
{
    ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    IHandler wrappedHandler;

    public ThreadSafeHandler(IHandler handler)
    {
        wrappedHandler = handler;
    }

    public void Foo()
    {
        try
        {
            rwLock.EnterReadLock();
            wrappedHandler.Foo();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void Bar()
    {
        try
        {
            rwLock.EnterReadLock();
            wrappedHandler.Bar();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void SwapHandler(IHandler newHandler)
    {
        try
        {
            rwLock.EnterWriteLock();
            UnhookProtocol(wrappedHandler);
            HookProtocol(newHandler);
            wrappedHandler = newHandler;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}
Take note that this is still not thread safe if atomic operations are required across multiple handler method calls; in that case you would need higher-order locking between threads, or you could add methods on your wrapper class to support thread-safe atomic operations (something like BeginThreadSafeBlock() followed by EndThreadSafeBlock() that lock the wrapped handler for writing for a series of operations).
You can't, and it's logical that you can't. The best you can do is prevent any other thread from disrupting the state between those two actions (as has already been said).
Here is why you can't:
Imagine there were a block that told the operating system never to switch threads while you're inside that block. That would be technically possible, but it would lead to starvation everywhere.
You might think your threads are the only ones being used, but that's an unwise assumption. There's the garbage collector, there are the async operations that work with thread-pool threads, and an external reference, such as a COM object, could spawn its own thread (in your memory space), so no one could make progress while you're at it.
Imagine you perform a very long operation in your HookOperation method. It involves a lot of non-leaky allocations, but, as the garbage collector can't take over to free your resources, you end up without any memory left. Or imagine you call a COM object that uses multithreading to handle your request... but it can't start the new threads (well, it can start them, but they never get to run), and then it joins them, waiting for them to finish before coming back... and therefore you join on yourself, never returning!
As other posters have already said, you can't enforce system-wide critical section from user-mode code. However, you don't need it to implement the hot swapping.
Here is how.
Implement a proxy with the same interface as your hot-swappable Foo object. The proxy should call HookProtocol and never unhook (until your app is stopped). It should contain a reference to the current Foo handler, which you can replace with a new instance when needed. The proxy should direct the data it receives from the hooked functions to the current handler. It should also provide a method for atomically replacing the current Foo handler instance (there are a number of ways to implement this, from a simple mutex to lock-free).
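A rough sketch of such a proxy is below. It assumes the handlers share an interface, here called IFoo with a HandleData method, and that HookProtocol (from the question) can hook the proxy itself; both are assumptions layered on top of the question's code.
using System.Threading;

// IFoo stands in for whatever interface the hot-swappable handlers share.
interface IFoo
{
    void HandleData(byte[] packet);
}

class FooProxy : IFoo
{
    private IFoo _current;

    public FooProxy(IFoo initial)
    {
        _current = initial;
        // HookProtocol(this);   // hook the proxy once here, and never unhook (see above)
    }

    // Called by the data-receiving thread for every packet; no lock needed,
    // since reads of a reference field are atomic.
    public void HandleData(byte[] packet)
    {
        _current.HandleData(packet);
    }

    // Atomically installs a new handler and returns the old one.
    public IFoo Swap(IFoo newHandler)
    {
        return Interlocked.Exchange(ref _current, newHandler);
    }
}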

How can a child thread notify a parent thread of its status/progress?

I have a service responsible for many tasks, one of which is to launch jobs (one at a time) on a separate thread (the threadJob child). These jobs can take a fair amount of time and have various phases to them, which I need to report back on.
Every so often a calling application requests the status from the service (GetStatus). This means that somehow the service needs to know at what point the job (child thread) is at. My hope was that at certain milestones the child thread could somehow inform (SetStatus) the parent thread (service) of its status, and the service could return that information to the calling application.
For example - I was looking to do something like this:
class Service
{
    private Thread threadJob;
    private int JOB_STATUS;

    public Service()
    {
        JOB_STATUS = "IDLE";
    }

    public void RunTask()
    {
        threadJob = new Thread(new ThreadStart(PerformWork));
        threadJob.IsBackground = true;
        threadJob.Start();
    }

    public void PerformWork()
    {
        SetStatus("STARTING");
        // do some work //
        SetStatus("PHASE I");
        // do some work //
        SetStatus("PHASE II");
        // do some work //
        SetStatus("PHASE III");
        // do some work //
        SetStatus("FINISHED");
    }

    private void SetStatus(int status)
    {
        JOB_STATUS = status;
    }

    public string GetStatus()
    {
        return JOB_STATUS;
    }
};
So, when a job needs to be performed, RunTask() is called and this launches the thread (threadJob). This will run, perform some steps (using SetStatus to set the new status at various points), and finally finish. Now, there is also the function GetStatus(), which should return the STATUS whenever requested (from a calling application using IPC); this status should reflect the current status of the job running on threadJob.
So, my problem is simple enough...
How can threadJob (or more specifically PerformWork()) report the change in status back to Service in a thread-safe manner (I assume my example above of SetStatus/GetStatus is unsafe)? Do I need to use events? I assume I cannot simply change JOB_STATUS directly... Should I use a lock (if so, on what?)...
You may have already looked into this, but the BackgroundWorker class gives you a nice interface for running tasks on background threads, and provides events to hook into for notifications that progress has changed.
You could create an event in the Service class and then invoke it in a thread-safe manner. Pay very close attention to how I have implemented the SetStatus method.
class Service
{
    public delegate void JobStatusChangeHandler(string status);

    // Event add/remove auto-implemented code is already thread-safe.
    public event JobStatusChangeHandler JobStatusChange;

    public void PerformWork()
    {
        SetStatus("STARTING");
        // stuff
        SetStatus("FINISHED");
    }

    private void SetStatus(string status)
    {
        JobStatusChangeHandler snapshot;
        lock (this)
        {
            // Get a snapshot of the invocation list for the event handler.
            snapshot = JobStatusChange;
        }

        // This is threadsafe because multicast delegates are immutable.
        // If you did not extract the invocation list into a local variable then
        // the event may have all delegates removed after the check for null,
        // which would result in a NullReferenceException when you attempt to
        // invoke it.
        if (snapshot != null)
        {
            snapshot(status);
        }
    }
}
I'd have the child thread raise a 'statusupdate' event, passing a struct with the information necessary for the parent and have the parent subscribe to it when launching it.
You can use the Event-Based Async Pattern.
I would go with a delegate/event from the thread to the caller. If the caller is a UI or something along those lines, I would be nice to the message pump and use appropriate Invoke()s to serialize notifications with the UI's thread when required.
I once wrote an app that needed a marker showing the progress a thread was making. I just used a shared global variable between them. The parent would just read the value, and the thread would just update it. No need to synchronize as only the parent read it, and only the child wrote it atomically. As it happened the parent was redrawing things frequently enough anyhow that it didn't even need to be poked by the child when the child updated the variable. Sometimes the simplest possible way works well.
Your current code mixes strings and ints for JOB_STATUS, which can't work. I'm assuming strings here, but it doesn't really matter, as I'll explain.
Your current implementation is thread-safe in the sense that no memory corruption will occur, since all assignments to reference-type fields are guaranteed to be atomic. The CLR demands this, since otherwise you could potentially access unmanaged memory if you could somehow observe a partially updated reference. Your processor gives you that atomicity for free, however.
So as long as you're using reference types like strings, you won't get any memory corruption. The same is true for primitives like ints (and smaller) and enums based on them. (Just avoid longs and bigger, and non-primitive value types such as nullable integers.)
But, that is not the end of the story: this implementation is not guaranteed to always represent the current state. The reason for this is that the thread that calls GetStatus might be looking at a stale copy of the JOB_STATUS field, because the assignment in SetState contains no so-called memory barrier. That is: the new value for JOB_STATUS need not be sent to your main RAM right away. There are several reasons why this can be delayed:
Writing to main RAM is inherently slow (relatively speaking), which is the reason your processor has all kinds of buffers and L-something caches in the first place, so the processor usually delays memory synchronization. Not for very long, but it will probably delay. This can be quite noticeable on multicore processors, as these usually have separate caches per core.
The JIT might have stored the value of JOB_STATUS in a register earlier on, as part of some optimization strategy. Again, registers are far more efficient to use than your main RAM. However, this does mean that it might not see changes early enough, as it's still looking at the old copy in the register. (We're not talking minutes here, but still.)
So, if you want to be 100% certain that each thread & processor core is immediately aware of the changed status, declare your field as volatile:
private volatile string JOB_STATUS;
Now, GetStatus/SetStatus, without any locking constructs, is truly thread safe, as volatile demands that the value is read from and written to main RAM immediately (or something 100% equivalent, if the processor can do that more efficiently).
Note that if you don't declare your field as volatile, you must use synchronization primitives such as lock; and generally speaking you need to use the synchronization primitives in both GetStatus and SetStatus, otherwise you won't solve the problem that volatile fixes.
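For comparison, a lock-based version of that Get/Set pair (using a dedicated lock object, and assuming the string-based status from above) could look roughly like this:
private readonly object _statusLock = new object();
private string JOB_STATUS = "IDLE";

public string GetStatus()
{
    lock (_statusLock) { return JOB_STATUS; }
}

private void SetStatus(string status)
{
    lock (_statusLock) { JOB_STATUS = status; }
}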
Mind you, as you're doing IPC calls to get the status, I'd wager that you won't ever actually be able to observe any difference between non-volatile and volatile, given the overhead of the IPC calls and the thread synchronizations undoubtedly performed behind the scenes.
For more information on volatile, see volatile (C#) on MSDN.
