I read C# Events and Thread Safety and the section Thread-safe delegate invocation on MSDN.
Before asking my question, let me say what I mean by thread safety. There are three aspects:
(item 1) no torn reads/writes (no bad intermediate data).
(item 2) no effects from instruction reordering.
(item 3) no effects from cache coherency (stale values).
Let's look at this example again:
// The C# 6 null-conditional invocation:
PropertyChanged?.Invoke(…)

// which the compiler expands to the familiar copy-then-check pattern:
var handler = this.PropertyChanged;
if (handler != null)
{
    handler(…);
}
OK, in C# reads and writes of a reference-type variable are atomic, so there is no torn-data problem; when the copied handler is invoked, it is never null.
But I still have questions.
Is there a mechanism underneath C# that applies an Interlocked-style operation to the read of PropertyChanged, so that there are no problems with instruction reordering and cache coherency?
If there is indeed such an interlocked mechanism, is it only for delegate and event types? Is there the same guarantee for other variable types that can be used with the ?. operator?
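For illustration, here is my own sketch of how the compiler lowers the ?. operator; "customer" is a hypothetical reference-typed field. The point is that the receiver is simply read once into a temporary - the lowering itself adds no Interlocked call, volatile read, or memory barrier, and it works the same way for any reference type, not only delegates and events:

string name = this.customer?.Name;

// is roughly equivalent to:
var tmp = this.customer;                        // single, plain (non-volatile) read
string name2 = (tmp == null) ? null : tmp.Name;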
[Additional]
Yes, I cannot give a rigorous definition of thread safety; I only want to give a NAME to items 1-3. My other doubt comes from the following passage, which says that field-like events are implemented using Interlocked.CompareExchange:
What is this thing you call thread-safe?
The code we’ve got so far is
“thread-safe” in that it doesn’t matter what other threads do – you
won’t get a NullReferenceException from the above code. However, if
other threads are subscribing to the event or unsubscribing from it,
you might not see the most recent changes for the normal reasons of
memory models being complicated.
As of C# 4, field-like events are implemented using
Interlocked.CompareExchange, so we can just use a corresponding
Interlocked.CompareExchange call to make sure we get the most recent
value. There’s nothing new about being able to do that, admittedly,
but it does mean we can just write
(quoted from the post "Clean event handler invocation with C# 6")
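The snippet that quote is building up to did not survive the excerpt; as a hedged reconstruction, the pattern it describes looks roughly like this (the class, property-name parameter and helper name are illustrative):

using System.ComponentModel;
using System.Threading;

public class ViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        // Inside the declaring class, "PropertyChanged" names the backing delegate
        // field, so it can be passed by ref. With null as both value and comparand,
        // CompareExchange is a no-op write that returns the most recently published
        // handler - the "most recent value" the quote talks about.
        Interlocked.CompareExchange(ref PropertyChanged, null, null)
            ?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}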
You need to understand a couple of things about thread safety:
It is only given when documented. The default is that NO API is thread safe.
It comes with a significant cost. Any locking has a cost - so it should be avoided when it is not needed.
Finally, especially when you talk about UI elements, there are very specific threading rules in the framework - going down to rules in Windows. STA - single-threaded apartment, one thread only: the UI thread.
So, no, there is no magic mechanism that guarantees something that IS NOT GUARANTEED PER DOCUMENTATION, because that would mean the cost had to be paid every time, mostly when it is not needed.
Event mechanisms in .NET are single threaded. Period. Live with it. They go back to a notification mechanism in the UI, and there, for rules likely older than you (they go back to the times of ActiveX UI elements, which incidentally still exist in e.g. the standard file dialog), this is the domain of "anything in the UI is only ever changed by the ONE UI thread".
Related
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
Does this mean that I am using the class in a thread-safe way, and that the fact that the documentation states it is not thread-safe is no longer relevant to my situation?
If the answer is No: Can I do everything related to a specific object in the same thread - i.e., creating it and calling its members always in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app.)
No, it is not thread safe. As a general rule, you should never write multi-threaded code without some kind of synchronization. In your first example, even if you somehow manage to ensure that modifying and reading never happen at the same time, there is still the problem of cached values and instruction reordering.
Just for example, the CPU caches a value in a register: you update it on one thread and read it from another. If the second thread has it cached, it doesn't go to RAM to fetch it and doesn't see the updated value.
Take a look at this great post for more info on the problems with writing lock-free multi-threaded code: link. It has a great explanation of how the CPU, the compiler and the CLI byte-code compiler can reorder instructions.
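A minimal sketch of the stale-read problem the answer describes (my own illustration, not code from the linked post; the names are made up):

class Worker
{
    // Remove 'volatile' and the JIT is free to hoist the read of _stop out of the
    // loop, so Run() may never observe the update made by the other thread.
    private volatile bool _stop;

    public void Run()
    {
        while (!_stop)
        {
            // do a unit of work
        }
    }

    public void Stop()      // called from another thread
    {
        _stop = true;
    }
}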
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe).
"Thread-safe" has a number of different meanings. Most objects fall into one of three categories:
Thread-affine. These objects can only be accessed from a single thread, never from another thread. Most UI components fall into this category.
Thread-safe. These objects can be accessed from any thread at any time. Most synchronization objects (including concurrent collections) fall into this category.
One-at-a-time. These objects can be accessed from one thread at a time. This is the "default" category, with most .NET types falling into this category.
Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
As another answerer noted, you have to take into consideration instruction reordering and cached reads. In other words, it's not sufficient to just do these at different times; you'll need to implement proper barriers to ensure it is guaranteed to work correctly.
The easiest way to do this is to protect all access of the object with a lock statement. If all reads, writes, and method calls are all within the same lock, then this would work (assuming the object does have a one-at-a-time kind of threading model and not thread-affine).
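A minimal sketch of that lock-wrapping approach; NotThreadSafeThing, X and Y are hypothetical stand-ins for the non-thread-safe framework type in question:

// Hypothetical stand-in for the non-thread-safe framework type.
public class NotThreadSafeThing
{
    public int X { get; set; }
    public void Y() { /* ... */ }
}

public class SafeWrapper
{
    private readonly object _gate = new object();
    private readonly NotThreadSafeThing _inner = new NotThreadSafeThing();

    // Every read, write and method call goes through the same lock object,
    // so the wrapped instance is only ever touched by one thread at a time.
    public int X
    {
        get { lock (_gate) { return _inner.X; } }
        set { lock (_gate) { _inner.X = value; } }
    }

    public void Y()
    {
        lock (_gate) { _inner.Y(); }
    }
}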
Suppose I want to use a non thread-safe class from the .Net Framework (the documentation states that it is not thread-safe). Sometimes I change the value of Property X from one thread, and sometimes from another thread, but I never access it from two threads at the same time. And sometimes I call Method Y from one thread, and sometimes from another thread, but never at the same time.
All classes are non-thread-safe by default, except for a few collections, such as the concurrent collections, which are designed specifically for thread safety. So for any other class you choose, if you access it from multiple threads in a non-atomic manner - whether reading or writing - it is imperative to introduce thread safety when changing the state of the object. This applies only to objects whose state can be modified in a multi-threaded environment; methods as such are just functional implementations - they are not themselves state that can be modified - thread safety is only introduced to protect the object's state.
Is this means that I use the class in a thread-safe way, and the fact that the documentation state that it's not thread-safe is no longer relevant to my situation? If the answer is No: Can I do everything related to a class in the same thread (but not the GUI thread)? If so, how do I do that? (If relevant, it's a WPF app).
For a UI application, consider introducing async-await for IO-bound operations, like file reads and database reads, and use the TPL for compute-bound operations. The benefits of async-await are that:
It doesn't block the UI thread at all and keeps the UI completely responsive; in fact, after the await, UI controls can be updated directly with no cross-thread concern, since only one thread is involved.
TPL concurrency, in contrast, is meant for blocking compute operations; it draws threads from the thread pool, and those can't be used for UI updates because of the cross-thread concern.
And last: there are classes in which one method starts an operation, and another one ends it. For example, using the SpeechRecognitionEngine class you can start a speech recognition session with RecognizeAsync (this method was before the TPL library so it does not return a Task), and then cancel the recognition session with RecognizeAsyncCancel. What if I call RecognizeAsync from one thread and RecognizeAsyncCancel from another one? (It works, but is it "safe"? Will it fail on some conditions which I'm not aware of?)
As you have mentioned the Async method: this might be an older implementation based on APM, which uses an AsyncCallback to coordinate, something along the lines of BeginXX / EndXX. If that's the case, nothing much is required to coordinate, since a callback delegate is executed on completion. In fact, as mentioned earlier, there's no extra thread involved here, whether it's the old APM version or the new async-await. Regarding cancellation, CancellationTokenSource can be used with async-await; a separate cancellation task is not required. Coordination between multiple threads can be done via AutoResetEvent / ManualResetEvent.
If the calls mentioned above are synchronous, then wrap them in a Task and call them from an async method as follows:
await Task.Run(() => RecognizeAsync())
Though this is a sort of anti-pattern, it can be useful for making the whole call chain async.
Edits (to answer OP questions)
Thanks for your detailed answer, but I didn't understand some of it. At the first point you are saying that "it's imperative to introduce thread safety", but how?
Thread safety is introduced using synchronization constructs like lock, mutex, semaphore, monitor and Interlocked; all of them serve the purpose of protecting an object from corruption / race conditions. I don't see any such steps.
Do the steps I have taken, as described in my post, suffice?
I don't see any thread-safety steps in your post; please point out which steps you are talking about.
At the second point I'm asking how to use an object in the same thread all the time (whenever I use it). Async-Await has nothing to do with this, AFAIK.
Async-await is the one concurrency mechanism that doesn't involve an extra thread besides the calling thread: IO-bound awaits use IO completion ports (hardware-based concurrency), and in a UI app, where a SynchronizationContext is present, continuations resume on that same context, so everything keeps running on the same thread. If you use the Task Parallel Library instead, there's no way to ensure that the same given thread is always used, as that is a much higher-level abstraction.
Check one of my recent detailed answer on threading here, it may help in providing some more detailed aspects
It is not thread-safe, as the technical risk exists, but your policy is designed to cope with the problem and work around the risk. So, if things stand as you described, you do not have a thread-safe environment; however, you are safe. For now.
I know that if I am modifying a control from a different thread, I should take care, because WinForms and WPF don't allow modifying a control's state from other threads.
Why is this restriction in place?
If I can write thread-safe code, I should be able to modify control state safely. Then why is this restriction present?
Several GUI frameworks have this limitation. According to the book Java Concurrency in Practice the reason for this is to avoid complex locking. The problem is that GUI controls may have to react to both events from the UI, data binding and so forth, which leads to locking from several different sources and thus a risk of deadlocks. To avoid this .NET WinForms (and other UIs) restricts access to components to a single thread and thus avoids locking.
In the case of Windows, when a control is created, UI updates are performed via messages from a message pump. The programmer does not have direct control of the thread the pump is running on, so the arrival of a message for a control can change the state of that control at any time. If another thread (one the programmer does control) were also allowed to change the state of the control, some sort of synchronization logic would have to be put in place to prevent corruption of the control's state.
The controls in .NET are not thread safe; this is, I suspect, by design. Putting synchronization logic in all controls would be expensive in terms of designing, developing, testing and supporting the code that provides this feature. The programmer could of course provide thread safety for his own code, but not for the code in .NET that is running concurrently with his code. One solution to this issue is to restrict these types of actions to one thread and one thread only, which makes the control code in .NET simpler to maintain.
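A minimal sketch of the conventional workaround this restriction forces (WPF flavour; the method name and resultTextBlock are illustrative, not from the question): do the work on any thread, then hand only the UI mutation to the one thread that owns the control.

using System;
using System.Windows;

public partial class MainWindow : Window
{
    // resultTextBlock is assumed to be an element declared in the window's XAML.
    private void OnDataArrived(string text)        // may be called on any thread
    {
        if (!Dispatcher.CheckAccess())
        {
            // Queue the update onto the UI thread's message loop and return.
            Dispatcher.BeginInvoke(new Action(() => resultTextBlock.Text = text));
            return;
        }
        resultTextBlock.Text = text;               // already on the owning thread
    }
}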
.NET reserves the right to access your control in the thread where you created it at any time. Therefore accesses that come from another thread can never be thread safe.
You might be able to make your own code thread-safe, but there is no way for you to inject the necessary synchronization primitives into the builtin WinForm and WPF code that match up with the ones in your code. Remember, there are a lot of messages getting passed around behind the scenes that eventually cause the UI thread to access the control without you really ever realizing it.
Another interesting aspect of a control's thread affinity is that it could (though I suspect they never would) use the Thread Local Storage pattern. Obviously, if you accessed a control on a thread other than the one it was created on, it would not be able to access the correct TLS data, no matter how carefully you structured the code to guard against all of the normal problems of multithreaded code.
Windows supports many operations which, especially used in combination, are inherently not thread-safe. What should happen, for example, if one thread is trying to insert some text into a text field starting at the 50th character while another thread tries to delete the first 40 characters from that field? It would be possible for Windows to use locks to ensure that the second operation couldn't begin until the first one completed, but using locks would add overhead to every operation and would also raise the possibility of deadlock if actions on one entity require manipulation of another. Requiring that actions involving a particular window must happen on a particular thread is a more stringent requirement than would be necessary to prevent unsafe combinations of operations from being performed simultaneously, but it's relatively easy to analyze. Using controls from multiple threads and avoiding clashes via some other means would generally be more difficult.
Actually, as far as I know, that WAS the plan from the beginning! Every control could be accessed from any thread! And precisely because locking was needed whenever another thread required access to the control --and because locking is expensive-- a new threading model was crafted, called "thread rental". In that model, related controls would be aggregated into "contexts" using only one thread, thus reducing the amount of locking needed.
Pretty cool, huh?
Unfortunately, that attempt was too bold to succeed (and a bit more complex, because locking was still required), so the good old Windows Forms threading model --with the single UI thread and with the creating thread claiming ownership of the control-- is used once again in WPF to make our lives ...easier?
I have a .NET library which, as part of an Object Model, will emit notifications of certain occurrences.
It would seem to me that the main pros of events are approachability for beginners (and simplicity in certain consumption contexts) with the main negative being that they are not composable and hence are immediately forced into an Observable.FromEvent* if you want to do anything interesting without writing a code thicket.
The nature of the problem being solved is such that the event traffic won't be particularly frequent or voluminous (it's definitely not screaming RX), but there is definitely no requirement to support .NET versions prior to 4.0 [and hence I can use the built-in IObservable interface in System.Reactive without forcing any significant dependencies on consumers]. I'm interested in some general guidelines some specific concrete reasons to prefer IObservables over events from an API design perspective though - regardless of where my specific case might sit on the event - IObservable spectrum.
So, the question:
Is there anything concrete I'm making dramatically more difficult or problematic for API consumers if I go with the simplest thing and expose an event instead of an IObservable?
Or, restated: Aside from the consumer having to do an Observable.FromEvent* to be able to compose events, is there really not a single reason to prefer an IObservable over an event when exposing a notification in an API?
Citations of projects that are using IObservable for not-screaming-RX stuff or coding guidelines would be ideal but are not critical.
NB as touched on in the comments with @Adam Houldsworth, I'm interested in concrete things wrt the API surface of a .NET 4+ library, not a survey of opinions as to which represents a better 'default architecture' for our times :)
NB this question has been touched on in IObserver and IObservable in C# for Observer vs Delegates, Events and IObservable vs Plain Events or Why Should I use IObservable?. The aspect of the question I'm asking has not been addressed in any of the responses due to SRP violations. Another slightly overlapping question is Advantages of .NET Rx over classic events?. Use of IObservable instead of events falls into that same category.
In the comments of this answer, OP refined his question as:
[Is it] indeed definitely the case that each and every event can
always be Adapted to be an IObservable?
To that question, the answer is basically yes - but with some caveats. Also note that the reverse is not true - see the section on reverse transformation and the conclusion for reasons why observables might be preferred to classic events because of the additional meaning they can convey.
For strict translation, all we need to do is map the event - which should include the sender as well as arguments - on to an OnNext invocation. The Observable.FromEventPattern helper method does a good job of this, with the overloads returning IObservable<EventPattern<T>> providing both the sender object and the EventArgs.
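For example, a minimal sketch of that forward mapping (the Sensor type and its event are hypothetical, invented only for illustration):

using System;
using System.Reactive;
using System.Reactive.Linq;

class ReadingEventArgs : EventArgs
{
    public double Value { get; set; }
}

class Sensor
{
    public event EventHandler<ReadingEventArgs> ReadingAvailable;

    public void Publish(double value)
    {
        var handler = ReadingAvailable;
        if (handler != null) handler(this, new ReadingEventArgs { Value = value });
    }
}

class Demo
{
    static void Main()
    {
        var sensor = new Sensor();

        // Wrap the classic event; each EventPattern carries both sender and args.
        IObservable<EventPattern<ReadingEventArgs>> readings =
            Observable.FromEventPattern<ReadingEventArgs>(
                h => sensor.ReadingAvailable += h,
                h => sensor.ReadingAvailable -= h);

        using (readings.Subscribe(e => Console.WriteLine(e.EventArgs.Value)))
        {
            sensor.Publish(42.0);
        }
    }
}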
Caveats
Recall the Rx grammar. This can be stated in EBNF form as:
Observable Stream = { OnNext }, [ OnError | OnCompleted ] - or 0 or more OnNext events optionally followed by either an OnCompleted or an OnError.
Implicit in this is the idea that from the view of an individual subscriber events do not overlap. To be clear, this means that a subscriber will not be called concurrently. Additionally, it is quite possible that other subscribers can be called not only concurrently but also at different times. Often it is subscribers themselves that control pace of event flow (create back-pressure) by handling events slower than the pace at which they arrive. In this situation typical Rx operators queue against individual subscribers rather than holding up the whole subscriber pool. In contrast, classic .NET event sources will more typically broadcast to subscribers in lock-step, waiting for an event to be fully processed by all subscribers before proceeding. This is the long-standing assumed behaviour for classic events, but it is not actually anywhere decreed.
The C# 5.0 Language Specification (this is a word document, see section 1.6.7.4) and the .NET Framework Design Guidelines : Event Design have surprisingly little to say on the event behaviour. The spec observes that:
The notion of raising an event is precisely equivalent to invoking the
delegate represented by the event—thus, there are no special language
constructs for raising events.
The C# Programming Guide : Events section says that:
When an event has multiple subscribers, the event handlers are invoked
synchronously when an event is raised. To invoke events
asynchronously, see Calling Synchronous Methods Asynchronously.
So classic events are traditionally issued serially by invoking a delegate chain on a single thread, but there is no such restriction in the guidelines - and occasionally we see parallel invocation of delegates - but even here two instances of an event will usually be raised serially even if each one is broadcast in parallel.
There is nothing anywhere I can find in the official specifications that explicitly states that event instances themselves must be raised or received serially. To put it another way, there is nothing that says multiple instances of an event can't be raised concurrently.
This is in contrast to observables where it is explicitly stated in the Rx Design Guidelines that we should:
4.2. Assume observer instances are called in a serialized fashion
Note how this statement only addresses the viewpoint of an individual subscriber instance and says nothing about events being sent concurrently across instances (which in fact is very common in Rx).
So two takeaways:
Whilst OnNext captures the idea of an event, it is possible that the classic .NET event may violate the Rx Grammar by invoking events concurrently.
It is common for pure Rx Observables to have different semantics around the delivery of events under load because back-pressure is typically handled per subscriber rather than per-event.
As long as you don't raise events concurrently in your API, and you don't care about the back-pressure semantics, then translation to an observable interface via a mechanism like Rx's Observable.FromEvent is going to be just fine.
Reverse Transformation
On the reverse transformation, note that OnError and OnCompleted have no analogue in classic .NET events, so it is not possible to make the reverse mapping without some additional machinery and agreed usage.
For example, one could translate OnError and OnCompleted to additional events - but this is definitely stepping outside of classic event territory. Also, some very awkward synchronization mechanism would be required across the different handlers; in Rx, it is quite possible for one subscriber to receive an OnCompleted whilst another is still receiving OnNext events - it's much harder to pull this off in a classic .NET events transformation.
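To make that concrete, here is a hedged sketch of such a reverse adapter (illustrative naming, and nowhere near a robust production implementation); note the two extra events standing in for signals that classic events simply do not have:

using System;

public sealed class ObservableEventAdapter<T> : IObserver<T>, IDisposable
{
    public event Action<T> Next;             // maps OnNext
    public event Action<Exception> Error;    // maps OnError - no classic equivalent
    public event Action Completed;           // maps OnCompleted - no classic equivalent

    private readonly IDisposable _subscription;

    public ObservableEventAdapter(IObservable<T> source)
    {
        _subscription = source.Subscribe(this);
    }

    public void OnNext(T value) { var h = Next; if (h != null) h(value); }
    public void OnError(Exception error) { var h = Error; if (h != null) h(error); }
    public void OnCompleted() { var h = Completed; if (h != null) h(); }

    public void Dispose() { _subscription.Dispose(); }
}

A real adapter would more likely follow the EventHandler<TEventArgs> pattern and deal with the synchronization issues mentioned above; this sketch only shows the shape of the mapping.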
Errors
Considering behaviour in error cases: It's important to distinguish an error in the event source from one in the handlers/subscribers. OnError is there to deal with the former, in the latter case both classic events and Rx simply (and correctly) blow up. This aspect of errors does translate well in either direction.
Conclusion
.NET classic events and Observables are not isomorphic. You can translate from events to observables reasonably easily as long as you stick to normal usage patterns. It might be the case that your API requires the additional semantics of observables not so easily modelled with .NET events and therefore it makes more sense to go Observable only - but this is a specific consideration rather than a general one, and more of a design issue than a technical one.
As general guidance, I suggest a preference for classic events if possible as these are broadly understood and well supported and can be transformed - but don't hesitate to use observables if you need the extra semantics of source error or completion represented in the elegant form of OnError and OnCompleted events.
I've been reading a lot about Reactive extensions before finally dipping my toe, and after some rough starts I found them really interesting and useful.
The Observable extension methods have this lovely optional parameter where you can pass your own scheduler, effectively letting you manipulate time. In my case it helped a lot, since I was doing some time-related work (check this webservice every ten minutes, send one email per minute, etc.) and it made testing a breeze. I would plug the TestScheduler into the component under test and simulate one day of events in a snap.
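As a rough sketch of what that looks like (assuming the Rx testing package, Microsoft.Reactive.Testing, is referenced; the class and the numbers are illustrative):

using System;
using System.Reactive.Linq;
using Microsoft.Reactive.Testing;

public class PollingTests
{
    public void SimulatesADayInAnInstant()
    {
        var scheduler = new TestScheduler();
        int polls = 0;

        // "Check the webservice every ten minutes" modelled as a timer on the injected scheduler.
        var subscription = Observable
            .Interval(TimeSpan.FromMinutes(10), scheduler)
            .Subscribe(_ => polls++);

        // Advance virtual time by 24 hours - no real waiting involved.
        scheduler.AdvanceBy(TimeSpan.FromDays(1).Ticks);

        Console.WriteLine(polls);   // 144 intervals in a simulated day
        subscription.Dispose();
    }
}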
So if you have some workflows in your library where time plays a role in orchestration, I would really recommend using Observables as your output.
However if you are just raising events in direct responses to user inputs, I don't think that it's worth the added complexity for your users. As you noted they can wrap the event into their own Observable if needed.
You could have your cake and eat it too, although it would mean more work: offer a facade that turns your event library into an Observable-fuelled one, by creating said Observable from your events. Or do it the other way around: have an optional facade that subscribes to your observable and raises a classic event when triggered.
In my opinion there is a non-trivial technical step to take when dealing with Reactive Extensions, and in this case it may come down to what your API consumers would be most comfortable using.
IObservable is to events what IEnumerable is to collections, so the only question here is whether you think IObservable will become the standard the way IEnumerable is now. If so, yes, it is preferable; if you think it is just a passing thing that will fade, use an event instead.
IObservable is better than an event in most cases, but I personally think I'll forget to use it, as it is not very commonly used, and when the time arrives I'll have forgotten about it.
I know my answer is not of great help, but only time will tell if Rx becomes the standard; I think it has good chances.
[EDIT] To make it more concrete:
To the end user the only difference is that one is an interface and the other is not, making the interface more testable and extensible, because different sources can implement the same interface.
As Adam Houldsworth said one can be changed to the other easily to make no other difference.
To address your headline question:
No they should not be preferred on the basis that they exist in .NET 4 and are available to use. Preference depends on intended use, so a blanket preference is unwarranted.
That said, I would tend towards them as an alternative model to traditional C# events.
As I have commented on throughout this question, there are many ancillary benefits to approaching the API with IObservable, not least of which is external support and the range of choice available to the end consumer.
To address your inner question:
I believe there would be little difficulty between exposing events or IObservable in your API, as there is a route from one to the other in both cases. This would put a layer over your API, but in actuality this is a layer you could also release.
It is my opinion that choosing one over the other isn't going to be part of the deciding reason why someone chooses to use or not use your API.
To address your re-stated question:
The reason might be found in why there is an Observable.FromEvent in the first place :-) IObservable is gaining support in many places in .NET for reactive programming and forms part of many popular libraries (Rx, Ix, ReactiveUI), and also interoperates well with LINQ and IEnumerable and further into the likes of TPL and TPL DataFlow.
A non-Rx example of the observable pattern, so not specifically IObservable would be ObservableCollection for XAML apps.
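A tiny sketch of that non-Rx flavour of the pattern (my own illustration): ObservableCollection raises CollectionChanged, which XAML bindings - or any plain subscriber - can observe.

using System;
using System.Collections.ObjectModel;

public static class ObservableCollectionDemo
{
    public static void Main()
    {
        var items = new ObservableCollection<string>();

        items.CollectionChanged += (sender, e) =>
            Console.WriteLine("{0}: {1}", e.Action, e.NewItems != null ? e.NewItems[0] : "(none)");

        items.Add("first");    // prints "Add: first"
        items.RemoveAt(0);     // prints "Remove: (none)" - the removed item is in e.OldItems
    }
}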
I am a novice programmer, so I could be completely mistaken here, but this issue bugs me more than it should.
This is actually a follow-up from this question.
The accepted answer was, that you have to call InvokeRequired in order to avoid some overhead, because there is a chance you are already operating on the UI thread.
In theory, I agree that it could save some time. After some tests I found out that using Invoke takes about twice the time compared to calling an operation normally (tests like setting the text of a label n times, or placing a very, very big string in a RichTextBox).
But! Then there is practice.
MSDN documentation says:
This property can be used to determine if you must call an invoke method, which can be useful if you do not know what thread owns a control.
In most cases, you do know when you try to access a control from another thread. Actually the only situation I can think of is when the control is accessed from a method that can be called by thread X as well as by the owner thread. And that, to me, is a very unlikely situation.
And even if you genuinely don't know which thread tries to manipulate the control, there is the fact that the UI thread doesn't have to be updated that frequently. Anything between 25-30 fps should be okay for your GUI. And most of the changes made to UI controls take far less than a millisecond to perform.
So if I understand correctly, the only scenario where you have to check whether an invoke is required is when you don't know which thread is accessing the control and when the GUI update takes more than about 40 ms to finish.
Then there is the answer to this question I asked on http://programmers.stackexchange.com, which states that you shouldn't be busy with premature optimisation when you don't need it, especially if it sacrifices code readability.
So this brings me to my question: shouldn't you just use Invoke when you know a different thread accesses the control, and only check whether an Invoke is required when you know the UI thread can also reach that piece of code and you find that it needs to run faster?
PS: after proofreading my question it really sounds like I am ranting. But actually I am just curious why InvokeRequired is seemingly overused by many more-experienced-than-me programmers.
You are taking things out of context here. The first question you linked referenced another question that was specifically about writing a thread-safe method to access a UI control.
If you don't need a thread-safe access to a UI control, because you know you won't update it from another thread, then certainly, you shouldn't employ this technique. Simply update your UI control without using InvokeRequired or Invoke.
On the other hand, if the call will always originate in a thread other than the UI thread, simply use Invoke without first checking for InvokeRequired.
This leads to three simple rules:
If you update the control only from the UI thread, use neither InvokeRequired nor Invoke
If you update the control only from a thread other than the UI thread, use only Invoke.
If you update the control from both the UI thread and other threads, use Invoke in combination with InvokeRequired.
In practice, people tend to call the same method from both the foreign and the owning thread. The usual pattern is that the method itself determines whether the current thread is the owning thread. If it is, it executes the follow-up code. If it isn't, the method calls itself again, this time using Invoke.
One benefit of this is that it makes the code more compact, as you have one method related to the operation instead of two.
Another and probably more important benefit is that it reduces the chance that the cross thread exception will be raised. If both methods were available at any time and both threads could choose any of the two, then there would be a chance of a seemingly legitimate method call raising an exception. On the other hand, if there's only one method that adapts to the situation, it provides a safer interface.
Here's an example:
if (control.InvokeRequired)
{
    control.BeginInvoke(action, control);
}
else
{
    action(control);
}
What if between the condition and the BeginInvoke call the control is disposed, for example?
Another example having to do with events:
var handler = MyEvent;
if (handler != null)
{
    handler.BeginInvoke(null, EventArgs.Empty, null, null);
}
If MyEvent is unsubscribed between the first line and the if statement, the if statement will still be executed. However, is that proper design? What if with the unsubscription also comes the destruction of state necessary for the proper invocation of the event? Not only does this solution have more lines of code (boilerplate), but it's not as intuitive and can lead to unexpected results on the client's end.
What say you, SO?
In my opinion, if any of this is an issue, both your thread management and object lifecycle management are reckless and need to be reexamined.
In the first example, the code is not symmetric: BeginInvoke will not wait for action to complete, but the direct call will; this is probably a bug already.
If you expect yet another thread to potentially dispose the control you're working with, you've no choice but to catch the ObjectDisposedException -- and it may not be thrown until you're already inside action, and possibly not on the current thread thanks to BeginInvoke.
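A sketch of what that defensive shape might look like (illustrative only, reusing the control/action names from the example above; the exact exceptions thrown depend on timing, so treat the catch list as an assumption):

try
{
    if (control.InvokeRequired)
    {
        control.BeginInvoke(action, control);
    }
    else
    {
        action(control);
    }
}
catch (ObjectDisposedException)
{
    // The control was disposed between the check and the call - nothing useful to do.
}
catch (InvalidOperationException)
{
    // BeginInvoke can also fail if the control's handle has already been destroyed.
}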
It is improper to assume that once you have unsubscribed from an event you will no longer receive notifications on it. It doesn't even require multiple threads for this to happen -- only multiple subscribers. If the first subscriber does something while processing a notification that causes the second subscriber to unsubscribe, the notification currently "in flight" will still go to the second subscriber. You may be able to mitigate this with a guard clause at the top of your event handler routine, but you can't stop it from happening.
There are a few techniques for resolving a race condition:
Wrap the whole thing with a mutex. Make sure that there's a lock that each thread must first acquire before it can even start running in the race. That way, as soon as you get the lock, you know that no other thread is using that resource and you can complete safely.
Find a way to detect and recover from the race; this can be very tricky, but it works well for some kinds of application. A typical way of dealing with this is to keep a counter of the number of times a resource has changed; if you finish a task and find that the version number is different from when you started, read the new version and start the task over from the beginning (or just fail).
Redesign the application to use only atomic actions; basically this means using operations that can be completed in one step. This often involves "compare-and-swap" operations, or fitting all of the transaction's data into a single disk block that can be written atomically (see the sketch after this list).
Redesign the application to use lock-free techniques; this option only makes sense when satisfying hard real-time constraints is more important than servicing every request, because lock-free designs inherently lose data (usually of some low-priority nature).
Which option is "right" depends on the application. Each option has performance trade-offs that may make the benefit of concurrency less appealing.
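As a minimal sketch of the "atomic actions / compare-and-swap" option (my own illustration, with made-up names), here is a retry loop built on Interlocked.CompareExchange:

using System;
using System.Threading;

class Counter
{
    private int _value;

    public int AddClamped(int delta, int max)
    {
        while (true)
        {
            int current = Volatile.Read(ref _value);
            int next = Math.Min(current + delta, max);

            // Publish 'next' only if nobody else changed _value in the meantime;
            // otherwise loop and recompute from the fresh value.
            if (Interlocked.CompareExchange(ref _value, next, current) == current)
            {
                return next;
            }
        }
    }
}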
If this behavior is spread across multiple places in your application, it might be worth redesigning the API so that the call site looks like:
if (!control.InvokeIfRequired(action))
{
    action(control);
}
The idea is the same as the standard JDK library's ConcurrentHashMap.putIfAbsent(...). Of course, you need to deal with the synchronization inside this new InvokeIfRequired method.
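A hedged sketch of what such an InvokeIfRequired extension could look like (the name and shape are this answer's suggestion, not an existing framework API):

using System;
using System.Windows.Forms;

public static class ControlExtensions
{
    // Returns true if the action was marshalled to the owning thread (nothing more
    // for the caller to do), false if the caller is already on the owning thread
    // and should run the action itself.
    public static bool InvokeIfRequired(this Control control, Action<Control> action)
    {
        if (control.InvokeRequired)
        {
            control.BeginInvoke(action, control);
            return true;
        }
        return false;
    }
}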