ManualResetEvent basically says to other threads "you can only proceed when you receive a signal to continue" and is used to pause execution of certain threads until some condition has been fulfilled. What I want to ask is: why use ManualResetEvent when we could easily achieve the same thing with a while loop? Consider the following context:
public class BackgroundService {
    ManualResetEvent mre;

    public BackgroundService() {
        mre = new ManualResetEvent(false);
    }

    public void Initialize() {
        // Initialization
        mre.Set();
    }

    public void Start() {
        mre.WaitOne();
        // The rest of execution
    }
}
is somewhat similar to
public class BackgroundService {
    bool hasInitialized;

    public BackgroundService() {
    }

    public void Initialize() {
        // Initialization
        hasInitialized = true;
    }

    public void Start() {
        while (!hasInitialized)
            Thread.Sleep(100);
        // The rest of execution
    }
}
Is there any particular context where ManualResetEvent is more suitable than a while loop?
Absolutely. There are two primary reasons: latency and efficiency.
Context-switching a thread to start it running again is relatively expensive when it's just going to go back to sleep, and the approach you've given will take an average of 50ms to respond to the hasInitialized variable being set - assuming it responds at all. (You don't have any explicit memory barriers, so it's possible that the thread won't actually see a change to the variable at all. I suspect that calling Thread.Sleep effectively adds a memory barrier, but it's not guaranteed.) With OS/CLR-level synchronization primitives, a thread can respond much faster.
Using signals such as those provided by ManualResetEvent is also more efficient. With a while loop like yours, roughly every 100 milliseconds - i.e. ten times a second - other threads have to stop running so that your thread can run and check the condition; all of those context switches are wasted whenever the condition is still false.
However, something smells fishy in your code: why would you poll for something to be initialized at all? If the initialisation is asynchronous, there should already be some notification mechanism - e.g. a callback - for when it is done, which makes the polling unnecessary.
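As a sketch of that callback idea, a TaskCompletionSource can replace the polling entirely; whoever awaits the task resumes as soon as initialization completes. This mirrors the question's class but the async shape is an assumption, not the original design:

```csharp
using System.Threading.Tasks;

public class BackgroundService
{
    // Completed exactly once when Initialize() finishes; waiters resume immediately.
    private readonly TaskCompletionSource<bool> _initialized =
        new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);

    public void Initialize()
    {
        // ... initialization work ...
        _initialized.TrySetResult(true);
    }

    public async Task StartAsync()
    {
        await _initialized.Task; // no polling, no Sleep; resumes as soon as the result is set
        // ... the rest of execution ...
    }
}
```

If Initialize() has already run, the await completes synchronously, so there is no cost in the common case.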
Related
I use ManualResetEvent to pause/continue a thread. A code example is below.
private ManualResetEvent _rstEvent = new ManualResetEvent(true);

public void DoSomeWork()
{
    while (judgementValue)
    {
        _rstEvent.WaitOne();
        ...
    }
}

public void Pause()
{
    _rstEvent.Reset();
}

public void Continue()
{
    _rstEvent.Set();
}
The problem is: what if the body of the while loop is large, i.e. each iteration has many operations to do? The thread will keep going until it reaches the next _rstEvent.WaitOne(). Is there a way to pause the thread immediately, other than the deprecated Suspend?
The problem with Suspend, Abort and the like is that you're trying to do something to another thread which could literally be doing anything at the time.
It has been repeatedly observed that this can lead to difficult-to-diagnose bugs where, e.g., none of your current threads can obtain a lock because the thread that held the lock has been suspended or aborted and thus will never release it (substitute any other resource as well).
This is why modern threading mechanisms are built around cooperative primitives where the thread itself checks (at appropriate moments) whether it's being asked to suspend or abort and can ensure that it only does so when not holding any resources.
There's no reason your loop code cannot check the event multiple times per iteration, whenever it is appropriate.
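A minimal sketch of that cooperative approach: the loop calls WaitOne() at each safe point, so a Reset() takes effect at the next checkpoint rather than only at the top of the loop. The checkpoint placement and the step counter are illustrative:

```csharp
using System.Threading;

class PausableWorker
{
    private readonly ManualResetEvent _gate = new ManualResetEvent(true); // open: not paused

    public void Pause() => _gate.Reset();
    public void Continue() => _gate.Set();

    public int StepsDone { get; private set; }

    public void DoSomeWork()
    {
        for (int i = 0; i < 3; i++) // stands in for "while (judgementValue)"
        {
            _gate.WaitOne();   // checkpoint 1: before the first expensive step
            StepsDone++;       // ... expensive operation A (hold no locks at a checkpoint) ...

            _gate.WaitOne();   // checkpoint 2: a Pause() takes effect here, mid-iteration
            StepsDone++;       // ... expensive operation B ...
        }
    }
}
```

The key design rule is that the thread only stops at points where it holds no locks or other resources, which is exactly what Suspend cannot guarantee.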
I was reading this article about volatile fields in C#.
using System;
using System.Threading;
class Test
{
    public static int result;
    public static volatile bool finished;

    static void Thread2() {
        result = 143;
        finished = true;
    }

    static void Main() {
        finished = false;
        // Run Thread2() in a new thread
        new Thread(new ThreadStart(Thread2)).Start();
        // Wait for Thread2 to signal that it has a result by setting
        // finished to true.
        for (;;) {
            if (finished) {
                Console.WriteLine("result = {0}", result);
                return;
            }
        }
    }
}
As you can see, there's a loop in the main thread that waits for the volatile flag to be set before printing result, which is assigned 143 before the flag is set. The explanation says that if the flag were not declared volatile then
it would be permissible for the store to result to be visible to the main thread after the store to finished
Did I miss something here? Even if it were not volatile, how could the program ever print 0?
Volatile prevents (among other things) re-ordering, so without volatile it could as an edge condition conceivably (on some hardware) write them in a different order, allowing the flag to be true even though result is 0 - for a tiny fraction of time. A much more likely scenario, though, is that without volatile the hot loop caches the flag in a register and never exits even though it has been changed.
In reality, this is not a good way to handle concurrency, and in particular a hot loop like that is really actively harmful. In most common cases, a lock or a wait-handle of some kind would be preferred. Or a Task would be ideal if you are up to date in your .NET versions.
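As a sketch of that wait-handle alternative, the same hand-off can be done with a TaskCompletionSource: completing the task both publishes the value with the right memory-visibility semantics and blocks the reader efficiently instead of spinning. The method name here is illustrative:

```csharp
using System.Threading;
using System.Threading.Tasks;

static class ResultHandoff
{
    // Blocks efficiently until the worker publishes the value; no spinning, no volatile.
    public static int Compute()
    {
        var tcs = new TaskCompletionSource<int>();

        new Thread(() =>
        {
            // Completing the task publishes 143 safely across threads.
            tcs.SetResult(143);
        }).Start();

        return tcs.Task.Result; // waits on an event internally rather than burning a core
    }
}
```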
I have a rather large class which contains plenty of fields (10+), a huge array (100kb) and some unmanaged resources. Let me explain by example
class ResourceIntensiveClass
{
    private object unmanagedResource; //let it be the expensive resource
    private byte[] buffer = new byte[1024 * 100]; //let it be the huge managed memory
    private Action<ResourceIntensiveClass> OnComplete;

    private void DoWork(object state)
    {
        //do long running task
        OnComplete(this); //notify caller that the task completed so it can reuse the same object for another task
    }

    public void Start(object dataRequiredForCurrentTask)
    {
        ThreadPool.QueueUserWorkItem(DoWork, dataRequiredForCurrentTask); //initiate long running work
    }
}
The problem is that the Start method never returns after the 10,000th iteration, causing a stack overflow. I could execute the OnComplete delegate on another thread, giving the Start method a chance to return, but that costs extra CPU time and resources, as you know. So what is the best option for me?
Is there a good reason for doing your calculations recursively? It seems like a simple loop would do the trick, obviating the need for incredibly deep stacks. This design seems especially problematic as you are relying on Main() to set up your recursion.
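For instance, the 10,000 runs can be driven by a plain loop that waits for each completion before starting the next. This sketch replaces the OnComplete delegate with a per-run completion callback (an assumption, not the original API) so the driver can block on an event:

```csharp
using System;
using System.Threading;

class Worker
{
    public int Completed; // how many tasks have finished

    // Stand-in for the long-running work; runs on the thread pool.
    public void Start(int taskIndex, Action onComplete)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Interlocked.Increment(ref Completed);
            onComplete();
        });
    }
}

class Driver
{
    // Drives N runs from a plain loop: constant stack depth, no recursion.
    public static void RunAll(Worker worker, int iterations)
    {
        using (var done = new AutoResetEvent(false))
        {
            for (int i = 0; i < iterations; i++)
            {
                worker.Start(i, () => done.Set()); // hypothetical completion callback
                done.WaitOne();                    // wait before starting the next run
            }
        }
    }
}
```

Because the loop, not the completion callback, starts the next run, the stack never grows past one frame regardless of the iteration count.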
Recursive methods can get out of hand quite fast. Have you looked into using Parallel LINQ?
you could do something like
(your Array).AsParallel().ForAll(item => item.CallMethod());
You could also look into the Task Parallel Library (TPL); with tasks, you can define an action and a ContinueWith task.
The Reactive Extensions (Rx), on the other hand, can handle these on-complete events in an async manner.
Where are you changing the value of taskData so that its length can ever equal currentTaskIndex? Since the tasks you are assigning to the data are never changing, they are being carried out forever...
I would guess that the problem arises from using the pre-increment operator here:
if(c.CurrentCount < 10000)
c.Start(++c.CurrentCount);
I am not sure of the semantics of pre-increment in C#, perhaps the value passed to a method call is not what you expect.
But since your Start(int) method assigns the value of the input to this.CurrentCount as its first step anyway, you should be safe replacing this with:
if(c.CurrentCount < 10000)
c.Start(c.CurrentCount + 1);
There is no point in assigning to c.CurrentCount twice.
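For reference, C#'s pre-increment is well defined: `++x` increments first and the expression yields the new value, so `Start(++c.CurrentCount)` passes the incremented count. A tiny demo (the method is illustrative):

```csharp
class IncrementDemo
{
    // Returns (valueFromPreIncrement, valueFromPostIncrement, finalCount).
    public static (int, int, int) Demo()
    {
        int count = 5;
        int pre = ++count;   // increments first: pre == 6, count == 6
        int post = count++;  // yields the old value: post == 6, count == 7
        return (pre, post, count);
    }
}
```

So the pre-increment itself is not the source of surprise here; the double assignment is just redundant.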
If using the threadpool, I assume you are protecting the counters (c.CurrentCount), otherwise concurrent increments will cause more activity, not just 10000 executions.
There's a neat tool called a ManualResetEvent that could simplify life for you.
Place a ManualResetEvent in your class and add a public OnComplete event.
When you declare your class, you can wire up the OnComplete event to some spot in your code or not wire it up and ignore it.
This would give your custom class a more conventional shape.
When your long process is complete (I'm guessing this is in a thread), simply call the Set method of the ManualResetEvent.
As for running your long method, it should be in a thread that uses the ManualResetEvent in a way similar to below:
private void DoWork(object state)
{
    ManualResetEvent mre = new ManualResetEvent(false);
    Thread thread1 = new Thread(
        () => {
            //do long running task
            mre.Set();
        });
    thread1.IsBackground = true;
    thread1.Name = "Screen Capture";
    thread1.Start();
    mre.WaitOne();
    OnComplete(this); //notify caller that the task completed so it can reuse the same object for another task
}
I need to design a thread-safe logger. My logger must have a Log() method that simply queues text to be logged. The logger must also be lock-free, so that other threads can log messages without blocking on the logger. I need to design a worker thread that waits for some synchronization event and then logs all messages from the queue using standard .NET logging (which is not thread-safe). So what I am interested in is the synchronization between the worker thread and the Log function. Below is a sketch of the class I designed. I think I must use Monitor.Wait/Pulse here, or some other means to suspend and resume the worker thread. I don't want to spend CPU cycles when there is no work for the logger.
Let me put it another way - I want to design a logger that will not block the caller threads that use it. I have a high-performance system, and that is a requirement.
class MyLogger
{
    // This is a lockfree queue - threads can directly enqueue and dequeue
    private LockFreeQueue<String> _logQueue;

    // worker thread
    Thread _workerThread;
    bool _IsRunning = true;

    // this function is used by other threads to queue log messages
    public void Log(String text)
    {
        _logQueue.Enqueue(text);
    }

    // this is the worker thread function
    private void ThreadRoutine()
    {
        while (_IsRunning)
        {
            // do something here
        }
    }
}
"lock-free" does not mean that threads won't block each other. It means that they block each other through very efficient but also very tricky mechanisms. Only needed for very high performance scenarios and even the experts get it wrong (a lot).
Best advice: forget "lock-free" and just use a "thread-safe" queue.
I would recommend the "Blocking Queue" from this page.
And it's a matter of choice to include the ThreadRoutine (the Consumer) in the class itself.
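On modern .NET, `BlockingCollection<T>` gives you that thread-safe queue plus a blocking consumer in one type. A minimal logger sketch along those lines, with the consumer folded into the class; the `Console.WriteLine` sink stands in for the real (non-thread-safe) logging call:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Logger : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Task _worker;

    public Logger()
    {
        // Single consumer: GetConsumingEnumerable blocks while the queue is empty,
        // so no CPU is spent when there is nothing to log.
        _worker = Task.Run(() =>
        {
            foreach (var msg in _queue.GetConsumingEnumerable())
                Console.WriteLine(msg); // stand-in for the real logging call
        });
    }

    public void Log(string msg) => _queue.Add(msg); // callers block only very briefly

    public void Dispose()
    {
        _queue.CompleteAdding(); // lets the worker drain remaining messages, then exit
        _worker.Wait();
    }
}
```

This is not lock-free internally, but for almost all logging workloads the brief internal synchronization is far cheaper than getting a lock-free design wrong.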
To the second part of your question, it depends on what "some synchronization event" exactly is. If you are going to use a method call, then let that start a one-shot thread. If you want to wait on a semaphore then don't use Monitor and Pulse; they are not reliable here. Use an AutoResetEvent/ManualResetEvent.
How to surface that depends on how you want to use it.
Your basic ingredients should look like this:
class Logger
{
    private AutoResetEvent _waitEvent = new AutoResetEvent(false);
    private object _locker = new object();
    private Queue<string> _queue = new Queue<string>();
    private bool _isRunning = true;

    public void Log(string msg)
    {
        lock (_locker) { _queue.Enqueue(msg); }
    }

    public void FlushQueue()
    {
        _waitEvent.Set();
    }

    private void WorkerProc(object state)
    {
        while (_isRunning)
        {
            _waitEvent.WaitOne();
            // process queue,
            // ***
            while (true)
            {
                string s = null;
                lock (_locker)
                {
                    if (_queue.Count == 0)
                        break;
                    s = _queue.Dequeue();
                }
                if (s != null)
                {
                    // process s
                }
            }
        }
    }
}
Part of the discussion seems to be what to do when processing the queue (marked ***). You can lock the queue and process all items, during which the adding of new entries will be blocked (for longer), or lock and retrieve entries one by one, locking only (very) briefly each time. I've added that last scenario.
A summary: you don't want a lock-free solution but a block-free one. Block-free doesn't exist; you will have to settle for something that blocks as little as possible. The last iteration of my sample (incomplete) shows how to lock only around the Enqueue and Dequeue calls. I think that will be fast enough.
Has your profiler shown you that you are experiencing a large overhead by using a simple lock statement? Lock-free programming is very hard to get right, and if you really need it I would suggest taking something existing from a reliable source.
It's not hard to make this lock-free if you have atomic operations. Take a singly linked list; you just need the head pointer.
Log function:
1. Locally prepare the log item (node with logging string).
2. Set the local node's next pointer to head.
3. ATOMIC: Compare head with local node's next, if equal, replace head with address of local node.
4. If the operation failed, repeat from step 2, otherwise, the item is in the "queue".
Worker:
1. Copy head locally.
2. ATOMIC: Compare head with local one, if equal, replace head with NULL.
3. If the operation failed, repeat from step 1.
4. If it succeeded, process the items; which are now local and out of the "queue".
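The steps above can be sketched with `Interlocked.CompareExchange`. Note this is a Treiber-style stack, so the detached chain is newest-first; the worker reverses it to recover arrival order. The class and method names are illustrative:

```csharp
using System.Collections.Generic;
using System.Threading;

class LockFreeLog
{
    private sealed class Node
    {
        public string Text;
        public Node Next;
    }

    private Node _head; // null means the "queue" is empty

    // Log function (steps 1-4): prepare locally, then CAS the node in as the new head.
    public void Log(string text)
    {
        var node = new Node { Text = text }; // step 1: locally prepare the item
        Node snapshot;
        do
        {
            snapshot = _head;
            node.Next = snapshot; // step 2: point at the current head
        } // step 3: publish only if the head has not moved; step 4: otherwise retry
        while (Interlocked.CompareExchange(ref _head, node, snapshot) != snapshot);
    }

    // Worker (steps 1-4): atomically detach the whole chain, then process it locally.
    public List<string> TakeAll()
    {
        Node snapshot;
        do
        {
            snapshot = _head; // step 1: copy head locally
        } // steps 2-3: swap in null only if the head is still what we copied
        while (Interlocked.CompareExchange(ref _head, null, snapshot) != snapshot);

        var items = new List<string>();
        for (Node n = snapshot; n != null; n = n.Next)
            items.Insert(0, n.Text); // step 4: items are now local; reverse to arrival order
        return items;
    }
}
```

The only shared mutation is the single CAS on `_head`, which is what makes both operations safe without a lock.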
I am working on a small project where I need to make two asynchronous calls right after another.
My code looks something like this:
AsynchronousCall1();
AsynchronousCall2();
The problem I'm having is that both calls take anywhere from one to two seconds to execute and I never know which one will finish last. What I'm looking for is a way to determine who finishes last. If Call1() finishes last, I do one thing. If Call2() finishes last, I do another thing.
This is a simple example of using a lock to ensure that only one thread can enter a piece of code. But it's a general example, which may or may not be best for your application. Add some details to your question to help us find what you're looking for.
void AsynchronousCall1()
{
// do some work
Done("1");
}
void AsynchronousCall2()
{
// do some work
Done("2");
}
readonly object _exclusiveAccess = new object();
volatile bool _alreadyDone = false;
void Done(string who)
{
lock (_exclusiveAccess)
{
if (_alreadyDone)
return;
_alreadyDone = true;
Console.WriteLine(who + " was here first");
}
}
I believe there is a method on the Thread class to check on a specific thread and determine its status. The other option would be to use a BackgroundWorker instead, as that would allow you to spell out what happens when the thread is finished by creating separate methods.
The "thread-unsafe" option would be to use a class variable: at the end of each thread, set it to that thread's value only if it doesn't already hold the other thread's value.
Then, in your main method, after the call that waits for all threads to finish, test the class variable.
That will give you your answer as to which thread finished first.
You can do this with two ManualResetEvent objects. The idea is to have the main thread initialize both to unsignaled and then call the asynchronous methods. The main thread then does a WaitAny on both objects. When AsynchronousCall1 completes, it signals one of the objects. When AsynchronousCall2 completes, it signals the other. Here's code:
ManualResetEvent Event1 = new ManualResetEvent(false);
ManualResetEvent Event2 = new ManualResetEvent(false);
void SomeMethod()
{
WaitHandle[] handles = {Event1, Event2};
AsynchronousCall1();
AsynchronousCall2();
int index = WaitHandle.WaitAny(handles);
// if index == 0, then Event1 was signaled.
// if index == 1, then Event2 was signaled.
}
void AsyncProc1()
{
    // does its thing and then
    Event1.Set();
}

void AsyncProc2()
{
    // does its thing and then
    Event2.Set();
}
There are a couple of caveats here. If both asynchronous methods finish before the call to WaitAny, it will be impossible to say which completed first. Also, if both methods complete very close to one another (i.e. call 1 completes, then call 2 completes before the main thread's wait is released), it's impossible to say which one finished first.
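If Tasks are available to you, `Task.WhenAny` expresses the same idea more compactly; with only two calls, whichever task did not finish first is the one that finishes last. The same near-simultaneous-completion caveats apply. The delays below are stand-ins for the two asynchronous calls:

```csharp
using System.Threading.Tasks;

static class FirstFinisher
{
    // Returns 1 if call1 completes first, 2 otherwise.
    // With two calls, the other one is the one that finishes last.
    public static async Task<int> WhichFinishedFirst(Task call1, Task call2)
    {
        Task first = await Task.WhenAny(call1, call2);
        return first == call1 ? 1 : 2;
    }
}
```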
You may want to check out the Blackboard design pattern: http://chat.carleton.ca/~narthorn/project/patterns/BlackboardPattern-display.html. That pattern sets up a common data store and then lets agents (who know nothing about one another -- in this case, your async calls) report their results in that common location. Your blackboard's 'supervisor' would then be aware of which call finished first and could direct your program accordingly.