I am running this code and it is using a fair amount of CPU even though it is doing absolutely nothing most of the time.
while (this.IsListening)
{
while (this.RecievedMessageBuffer.Count > 0)
{
lock (this.RecievedMessageBuffer)
{
this.RecievedMessageBuffer[0].Reconstruct();
this.RecievedMessageBuffer[0].HandleMessage(messageHandler);
this.RecievedMessageBuffer.RemoveAt(0);
}
}
}
What is the best way to block until a condition is met?
Use a WaitHandle.
WaitHandle waitHandle = new AutoResetEvent();
// In your thread.
waitHandle.WaitOne();
// In another thread signal that the condition is met.
waitHandle.Set();
You could also consider changing the interface of your class to raise an event when there is new data to be read. Then you can put your code inside the event handler.
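For illustration, a rough sketch of what that interface change could look like; the event name, the EventArgs type and the raising method are hypothetical stand-ins, not part of your class:
public class ReceivedMessageEventArgs : EventArgs
{
    public ReceivedMessageEventArgs(object message) { Message = message; }
    public object Message { get; private set; }
}
// On the listening class:
public event EventHandler<ReceivedMessageEventArgs> MessageReceived;
// Called by the receive code whenever a message has been read.
private void OnMessageReceived(object message)
{
    EventHandler<ReceivedMessageEventArgs> handler = MessageReceived;
    if (handler != null)
        handler(this, new ReceivedMessageEventArgs(message));
}
Consumers subscribe to MessageReceived and do the Reconstruct/HandleMessage work inside the handler instead of polling the buffer.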
Assuming you are using .NET 4, I'd suggest switching RecievedMessageBuffer to be a BlockingCollection. When you are putting messages into it, call its Add method. When you want to retrieve a message, call its Take or TryTake methods. Take will block the reading thread until a message is available, without burning CPU like your original example.
// Somewhere else
BlockingCollection<SomethingLikeAMessage> RecievedMessageBuffer = new BlockingCollection<SomethingLikeAMessage>();
// Something like this where your example was
while (this.IsListening)
{
SomethingLikeAMessage message;
if (RecievedMessageBuffer.TryTake(out message, 5000))
{
message.Reconstruct();
message.HandleMessage(messageHandler);
}
}
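For completeness, the producing side just calls Add wherever messages currently get appended to the buffer (incomingMessage is a hypothetical local holding the message that was just received):
// Wherever a message arrives:
RecievedMessageBuffer.Add(incomingMessage); // wakes a thread blocked in Take/TryTake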
The lines of code above, and AutoResetEvent specifically, are available as of .NET 3.5. So simple code like the above, with one minor correction, is very effective because it works and stays close to the foundation API. The correction should be
AutoResetEvent waitHandle = new AutoResetEvent(false);
The constructor argument false makes WaitOne() actually wait, because the AutoResetEvent starts out non-signaled. There is not much advantage in declaring the variable as the base WaitHandle type, so I would just declare it as AutoResetEvent, which exposes the Set method that WaitHandle does not. Most importantly, the constructor argument should be false.
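Putting that correction together with the loop from the question, a sketch of how the pieces might fit; the producer-side snippet is hypothetical and stands for wherever messages get appended to the buffer. Draining the whole buffer on every wake-up matters, because an AutoResetEvent does not count how many times Set has been called:
AutoResetEvent waitHandle = new AutoResetEvent(false);

// Consumer loop, replacing the busy wait:
while (this.IsListening)
{
    // Sleeps without using CPU until a producer calls Set();
    // use the timeout overload if IsListening can change while idle.
    waitHandle.WaitOne();

    lock (this.RecievedMessageBuffer)
    {
        while (this.RecievedMessageBuffer.Count > 0)
        {
            this.RecievedMessageBuffer[0].Reconstruct();
            this.RecievedMessageBuffer[0].HandleMessage(messageHandler);
            this.RecievedMessageBuffer.RemoveAt(0);
        }
    }
}

// Producer side, wherever a message is received:
lock (this.RecievedMessageBuffer)
{
    this.RecievedMessageBuffer.Add(receivedMessage);
}
waitHandle.Set();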
I have the code below in C# for a consumer and producer using AutoResetEvent, but it does not work when there are multiple producers and one consumer. The problem is that the consumer cannot consume all the items in the queue. When I debug, I notice the consumer can only remove one item, and then it returns false and cannot remove any more. The problem seems to be in the AutoResetEvent, but I cannot figure out what is wrong.
private AutoResetEvent newItemSignal = new AutoResetEvent(false);
private Queue<Task> iQueue = new Queue<Task>();
public void Enqueue(Task task)
{
lock (((ICollection)iQueue).SyncRoot)
{
iQueue.Enqueue(task);
newItemSignal.Set();
}
}
public bool Dequeue(out Task task, int timeout)
{
if (newItemSignal.WaitOne(timeout, false))
{
lock (((ICollection)iQueue).SyncRoot)
{
task = iQueue.Dequeue();
}
return true;
}
task = default(Task);
return false;
}
The problem with using an AutoResetEvent like this is that you may call Set() twice or more but have it observed by WaitOne() only once. Calling Set() on an ARE that is already signaled has no effect, so the extra signal is lost and an item gets stuck in the queue. A standard threading race bug. It looks like you could fix it by emptying the entire queue in the consumer. That is not a real fix; the producer can still race ahead of the consumer, you merely lowered the odds to the once-a-month undebuggable stage.
An ARE cannot do this; it cannot count. Use a Semaphore or SemaphoreSlim instead, which was made to count in a thread-safe way. Or use a ConcurrentQueue, a class added to solve exactly this kind of programming problem.
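A sketch of what the counted version could look like, keeping the Enqueue/Dequeue shape from the question and swapping the AutoResetEvent for a SemaphoreSlim (requires .NET 4); every Release is remembered, so one successful Wait corresponds to exactly one queued item:
private SemaphoreSlim newItemSignal = new SemaphoreSlim(0);
private Queue<Task> iQueue = new Queue<Task>();

public void Enqueue(Task task)
{
    lock (((ICollection)iQueue).SyncRoot)
    {
        iQueue.Enqueue(task);
    }
    newItemSignal.Release(); // increments the count; never lost like Set()
}

public bool Dequeue(out Task task, int timeout)
{
    if (newItemSignal.Wait(timeout)) // decrements the count, one per item
    {
        lock (((ICollection)iQueue).SyncRoot)
        {
            task = iQueue.Dequeue();
        }
        return true;
    }
    task = default(Task);
    return false;
}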
By using AutoResetEvent, you have designed the program in such a way that only one consumer can consume an item at a time.
If you want to stick with a similar design, you can instead use a ManualResetEvent: Reset the event when a consumer thread finds that there are no items left to consume, and Set the event when the producer thread knows that there is at least one item to be consumed (a sketch follows below).
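A sketch of that design, mirroring the Enqueue/Dequeue shape from the question; the Set/Reset calls stay inside the same lock as the queue operations so they cannot interleave incorrectly:
private ManualResetEvent itemsAvailable = new ManualResetEvent(false);
private Queue<Task> iQueue = new Queue<Task>();

public void Enqueue(Task task)
{
    lock (((ICollection)iQueue).SyncRoot)
    {
        iQueue.Enqueue(task);
        itemsAvailable.Set(); // at least one item is now available
    }
}

public bool Dequeue(out Task task, int timeout)
{
    if (itemsAvailable.WaitOne(timeout))
    {
        lock (((ICollection)iQueue).SyncRoot)
        {
            if (iQueue.Count > 0)
            {
                task = iQueue.Dequeue();
                if (iQueue.Count == 0)
                    itemsAvailable.Reset(); // drained; block future waiters
                return true;
            }
        }
    }
    task = default(Task);
    return false;
}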
You can find an alternate design with Monitor class here
You can also make use of a BlockingCollection if you are using .NET 4.0 or higher.
One of the things I'm having a hard time understanding in multi-threaded programming is that when one thread reaches a line that calls WaitOne(), how do I know which other threads are involved? Where or how can I find (or understand) how the WaitHandle receives the signal? For example, I'm looking at this code right now:
private void RunSync(object state, ElapsedEventArgs elapsedEventArgs)
{
_mutex.WaitOne();
using (var sync = GWSSync.BuildSynchronizer(_log))
{
try
{
sync.Syncronize();
}
catch(Exception ex)
{
_log.Write(string.Format("Error during synchronization : {0}", ex));
}
}
_mutex.ReleaseMutex();
_syncTimer.Interval = TimeBeforeNextSync().TotalMilliseconds;
_syncTimer.Start();
}
There are a few methods like this in the file (i.e RunThis(), RunThat()). These methods run inside a Windows service and are called when a Timer elapses. Each of these methods are called using different Timers and set up like this:
//Synchro
var timeBeforeFirstSync = TimeBeforeNextSync();
_syncTimer = new System.Timers.Timer(timeBeforeFirstSync.TotalMilliseconds);
_syncTimer.AutoReset = false;
_syncTimer.Elapsed += RunSync;
_syncTimer.Start();
I understand that when the Timer elapses, the RunSync method will run. But when it hits the WaitOne() line, the thread is blocked. But who is it waiting for? Which "other" thread will send the signal?
WaitHandle is an abstraction, as stated in the documentation:
Encapsulates operating system–specific objects that wait for exclusive access to shared resources.
You don't know which other threads are involved, but you do know which other code is involved by checking the usage of the handle (_mutex in your case). Every WaitHandle-derived class inherits WaitOne, but what happens after a successful wait, and how the handle gets signaled, is specific to the derived class. For instance, in your example _mutex is most probably a Mutex, so WaitOne acts like "wait until it's free and take ownership" while ReleaseMutex acts like "release ownership and signal". With that in mind, it should be obvious what all these methods do: they ensure that while RunThis is running you cannot RunThat, and vice versa.
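For illustration, a minimal sketch of that pattern; RunThat stands for one of the sibling timer callbacks mentioned in the question, and the work inside it is elided:
private static readonly Mutex _mutex = new Mutex();

private void RunThat(object state, ElapsedEventArgs e)
{
    _mutex.WaitOne(); // blocks until no other callback owns the mutex
    try
    {
        // ... work that must not overlap with RunSync / RunThis ...
    }
    finally
    {
        _mutex.ReleaseMutex(); // signals: the next WaitOne() may return
    }
}
Wrapping the work in try/finally (which the original RunSync does not do) guarantees the mutex is released even if the work throws.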
I sometimes encounter code in the following form:
while (true) {
//do something
Thread.Sleep(1000);
}
I was wondering if this is considered good or bad practice and if there are any alternatives.
Usually I "find" such code in the main function of services.
I recently saw code in the "Run" function of a Windows Azure worker role which had the following form:
ClassXYZ xyz = new ClassXYZ(); //ClassXYZ creates separate Threads which execute code
while (true) {
Thread.Sleep(1000);
}
I assume there are better ways to prevent a service (or azure worker role) from exiting.
Does anyone have a suggestion for me?
Well, when you do that with Thread.Sleep(1000), your processor wastes a tiny amount of time waking the thread up every second just to do nothing.
You could do something similar with a CancellationTokenSource.
When you call WaitOne() on its token's WaitHandle, it will wait until the token is cancelled.
CancellationTokenSource cancelSource = new CancellationTokenSource();
public override void Run()
{
//do stuff
cancelSource.Token.WaitHandle.WaitOne();
}
public override void OnStop()
{
cancelSource.Cancel();
}
This will keep the Run() method from exiting without wasting your CPU time on busy waiting.
An alternative approach may be using an AutoResetEvent, created non-signaled, and letting the worker thread signal it when it is done.
public class Program
{
public static readonly AutoResetEvent ResetEvent = new AutoResetEvent(false);
public static void Main(string[] args)
{
Task.Factory.StartNew
(
() =>
{
// Imagine sleep is a long task which ends in 10 seconds
Thread.Sleep(10000);
// Signal the AutoResetEvent so the waiting main thread can continue
ResetEvent.Set();
}
);
// Once the other thread sets the AutoResetEvent, the program ends
ResetEvent.WaitOne();
}
}
Is the so-called while(true) a bad practice?
Well, in fact, a literal true as the while loop condition may be considered bad practice, since it makes the loop unbreakable: I would always use a condition that can actually evaluate to true or false.
When would I use a while loop, and when something like the AutoResetEvent approach?
When to use the while loop...
...when you need to execute code while waiting for the program to end.
When to use the AutoResetEvent approach...
...when you just need to hold the main thread to prevent the program from ending, and that main thread only has to wait until some other thread requests a program exit.
If you see code like this...
while (true)
{
//do something
Thread.Sleep(1000);
}
It's most likely using Sleep() as a means of waiting for some event to occur: something like user input or interaction, a change in the file system (such as a file being created or modified in a folder), a network or device event, and so on. That would suggest using more appropriate tools:
If the code is waiting for a change in the file system, use a FileSystemWatcher.
If the code is waiting for a thread or process to complete, or for a network event to occur, use the appropriate synchronization primitive with WaitOne(), WaitAny() or WaitAll(). Using an overload that takes a timeout inside a loop gives you cancelability as well (see the sketch below).
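As a sketch of that last point, the loop below waits for either "work arrived" or "shutdown requested", whichever comes first; both handles and the ProcessWork method are hypothetical members of the service:
AutoResetEvent workArrived = new AutoResetEvent(false);
ManualResetEvent shutdownRequested = new ManualResetEvent(false);

while (true)
{
    // WaitAny returns the index of the handle that was signaled,
    // or WaitHandle.WaitTimeout if the timeout elapsed first.
    int signaled = WaitHandle.WaitAny(
        new WaitHandle[] { workArrived, shutdownRequested }, 5000);

    if (signaled == 1)
        break;           // shutdown was requested
    if (signaled == 0)
        ProcessWork();   // handle the work item
    // on timeout, just loop around and keep waiting
}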
But without knowing the actual context, it's rather hard to say categorically that it's either good, bad or indifferent. If you've got a daemon running that has to poll on a regular basis (say an NTP client), a loop like that would make perfect sense (though the daemon would need some logic to monitor for shutdown events occurring). And even with something like that, you could replace it with a scheduled task: a different, but not necessarily better, design.
If you use while(true) you have no programmatic means of ending the loop from outside the loop.
I'd prefer, at least, a while(mySingletonValue) which would allow us to switch the loop as needed.
An additional approach would be to separate the functional behavior from the looping behavior. Your loop may still be infinite, but it calls a function defined elsewhere, so the looping behavior is completely isolated from what is being executed by the loop:
while(GetMySingletonValue())
{
someFunction();
}
In this way your singleton controls the looping behavior entirely.
There are better ways to keep the Azure service running and exit when needed.
Refer:
http://magnusmartensson.com/howto-wait-in-a-workerrole-using-system-timers-timer-and-system-threading-eventwaithandle-over-system-threading-thread-sleep
http://blogs.lessthandot.com/index.php/DesktopDev/MSTech/azure-worker-role-exiting-safely/
It really depends on that //do something, and on how it determines when to break out of the loop.
In general terms, a more appropriate way to do it is to use some synchronization primitive (like a ManualResetEvent) to wait on, and have the code that decides to break the loop (on the other thread) signal that primitive. This way you don't have a thread wasting resources by being scheduled every second just to do nothing, and it is a much cleaner way to do it.
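A minimal sketch of that shape, assuming a hypothetical stopSignal field shared with the code that decides when the loop should end:
static readonly ManualResetEvent stopSignal = new ManualResetEvent(false);

static void WorkLoop()
{
    // WaitOne(1000) sleeps up to a second without burning CPU and returns
    // true as soon as stopSignal.Set() is called from another thread.
    while (!stopSignal.WaitOne(1000))
    {
        // do something
    }
}

// Elsewhere, on the thread that triggers the break:
// stopSignal.Set();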
I personally don't like Thread.Sleep code, because it blocks the calling thread. If it is a Windows Forms application you can write something like this instead; it gives you more flexibility and you can call it asynchronously:
bool switchControl = true;
while (switchControl) {
//do something
await Wait(1);
}
async Task Wait(int Seconds)
{
DateTime Tthen = DateTime.Now;
do
{
Application.DoEvents(); //Or something else or leave empty;
} while (Tthen.AddSeconds(Seconds) > DateTime.Now);
}
I need to stop a thread until another thread sets a boolean value, and I don't want to share an event between them.
What I currently have is the following code using a Sleep (and that's the code I want to change):
while (!_engine.IsReadyToStop())
{
System.Threading.Thread.Sleep(Properties.Settings.Default.IntervalForCheckingEngine);
}
Any ideas?
EDIT TO CLARIFY THINGS:
There is an object called _engine of a class that I don't own. I cannot modify it, that's why I don't want to share an event between them. I need to wait until a method of that class returns true.
SpinWait.SpinUntil is the right answer, regardless of where you're going to place this code. SpinUntil offers "a nice mix of spinning, yielding, and sleeping in between invocations".
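Applied to the loop in the question, a sketch; the second form uses the timeout overload, which returns false if the condition never became true within the given interval (the 30-second bound is just an example):
// Replaces the Sleep-based polling loop.
SpinWait.SpinUntil(() => _engine.IsReadyToStop());

// Or, with an upper bound on the wait:
bool stopped = SpinWait.SpinUntil(() => _engine.IsReadyToStop(), TimeSpan.FromSeconds(30));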
If you are using C# 4.0, you can use:
Task t = Task.Factory.StartNew (() => SomeCall(..));
t.Wait();
This works by using the Task.Wait method.
If you have more than one task run one after another, you can use Task.ContinueWith:
Task t = Task.Factory.StartNew(() => SomeCall(..))
    .ContinueWith(previous => ExecuteAfterThisTaskFinishes(...));
t.Wait();
declare as
AutoResetEvent _ReadyToStop = new AutoResetEvent(false);
and use as
_ReadyToStop.WaitOne();
and
_ReadyToStop.Set();
For more info see the Synchronization Primitives in .Net
A condition variable is the synchronization primitive you can use for waiting on a condition.
It does not natively exist in .NET, but the following link provides 100% managed code for a condition variable class implemented in terms of the SemaphoreSlim, AutoResetEvent and Monitor classes. It allows a thread to wait on a condition and can wake up one or more threads when the condition is satisfied. In addition, it supports timeouts and CancellationTokens.
To wait on a condition you write code similar to the following:
object queueLock = new object();
ConditionVariable notEmptyCondition = new ConditionVariable();
ConditionVariable notFullCondition = new ConditionVariable();
Queue<T> queue = new Queue<T>();
T Take() {
lock(queueLock) {
while(queue.Count == 0) {
// wait for queue to be not empty
notEmptyCondition.Wait(queueLock);
}
T item = queue.Dequeue();
if(queue.Count < 100) {
// notify producer queue not full anymore
notFullCondition.Pulse();
}
return item;
}
}
Then in another thread you can wake up one or more threads waiting on condition.
lock(queueLock) {
//..add item here
notEmptyCondition.Pulse(); // or PulseAll
}
I have been coding with C# for a good little while, but this locking sequence does not make any sense to me. My understanding of locking is that once a lock is obtained with lock(object), the code has to exit the lock scope to unlock the object.
This brings me to the question at hand. I cut out the code below, which happens to appear in an animation class in my code. The way the method works is that settings are passed to the method, modified, and then passed to another overloaded method. That other overloaded method will pass all the information to another thread to handle and actually animate the object in some way. When the animation completes, the other thread calls the OnComplete method. This actually all works perfectly, but I do not understand why!
The other thread is able to call OnComplete, obtain a lock on the object and signal to the original thread that it should continue. Should the code not freeze at this point since the object is held in a lock on another thread?
So this is not a need for help in fixing my code, it is a need for clarification on why it works. Any help in understanding is appreciated!
public void tween(string type, object to, JsDictionaryObject properties) {
// Settings class that has a delegate field OnComplete.
Tween.Settings settings = new Tween.Settings();
object wait_object = new object();
settings.OnComplete = () => {
// Why are we able to obtain a lock when the wait_object already has a lock below?
lock(wait_object) {
// Let the waiting thread know it is ok to continue now.
Monitor.Pulse(wait_object);
}
};
// Send settings to other thread and start the animation.
tween(type, null, to, settings);
// Obtain a lock to ensure that the wait object is in synchronous code.
lock(wait_object) {
// Wait here if the script tells us to. Time out with total duration time + one second to ensure that we actually DO progress.
Monitor.Wait(wait_object, settings.Duration + 1000);
}
}
As documented, Monitor.Wait releases the monitor it's called with. So by the time you try to acquire the lock in OnComplete, there won't be another thread holding the lock.
When the monitor is pulsed (or the call times out) it reacquires it before returning.
From the docs:
Releases the lock on an object and blocks the current thread until it reacquires the lock.
I wrote an article about this: Wait and Pulse demystified
There's more going on than meets the eye!
Remember that:
lock(someObj)
{
int uselessDemoCode = 3;
}
Is equivalent to:
Monitor.Enter(someObj);
try
{
int uselessDemoCode = 3;
}
finally
{
Monitor.Exit(someObj);
}
Actually there are variants of this that vary from version to version.
Already, it should be clear that we could mess with this with:
lock(someObj)
{
Monitor.Exit(someObj);
//Don't have the lock here!
Monitor.Enter(someObj);
//Have the lock again!
}
You might wonder why someone would do this, and well, so would I; it's a silly way to make code less clear and less reliable. But it does come into play when you want to use Pulse and Wait, which the version with explicit Enter and Exit calls makes clearer. Personally, I prefer to use them over lock if I'm going to Pulse or Wait, for that reason; I find that lock stops making code cleaner and starts making it opaque.
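For instance, a minimal Wait/Pulse pair written with explicit Enter and Exit calls; the gate object and the ready flag are illustrative only:
object gate = new object();
bool ready = false;

// Waiting thread:
Monitor.Enter(gate);
try
{
    while (!ready)
        Monitor.Wait(gate); // releases the lock while blocked, reacquires on wake
}
finally
{
    Monitor.Exit(gate);
}

// Signalling thread:
Monitor.Enter(gate);
try
{
    ready = true;
    Monitor.Pulse(gate); // wakes one waiter; it resumes once we Exit
}
finally
{
    Monitor.Exit(gate);
}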
I tend to avoid this style, but, as Jon already said, Monitor.Wait releases the monitor it's called with, so there is no locking at that point.
But the example is slightly flawed IMHO. The problem is, generally, that if Monitor.Pulse gets called before Monitor.Wait, the waiting thread will never be signaled. Having that in mind, the author decided to "play safe" and used an overload which specified a timeout. So, putting aside the unnecessary acquiring and releasing of the lock, the code just doesn't feel right.
To explain this better, consider the following modification:
public static void tween()
{
object wait_object = new object();
Action OnComplete = () =>
{
lock (wait_object)
{
Monitor.Pulse(wait_object);
}
};
// let's say that a background thread
// finished really quickly here
OnComplete();
lock (wait_object)
{
// this will wait for a Pulse indefinitely
Monitor.Wait(wait_object);
}
}
If OnComplete gets called before the lock is acquired in the main thread, and there is no timeout, we will get a deadlock. In your case, Monitor.Wait will simply hang for a while and continue after a timeout, but you get the idea.
That is why I usually recommend a simpler approach:
public static void tween()
{
using (AutoResetEvent evt = new AutoResetEvent(false))
{
Action OnComplete = () => evt.Set();
// let's say that a background thread
// finished really quickly here
OnComplete();
// event is properly set even in this case
evt.WaitOne();
}
}
To quote MSDN:
The Monitor class does not maintain state indicating that the Pulse method has been called. Thus, if you call Pulse when no threads are waiting, the next thread that calls Wait blocks as if Pulse had never been called. If two threads are using Pulse and Wait to interact, this could result in a deadlock.
Contrast this with the behavior of the AutoResetEvent class: If you signal an AutoResetEvent by calling its Set method, and there are no threads waiting, the AutoResetEvent remains in a signaled state until a thread calls WaitOne, WaitAny, or WaitAll. The AutoResetEvent releases that thread and returns to the unsignaled state.