Cross Process Event - Release all waiters reliably - C#

I have created a cross-process event via ManualResetEvent. When this event occurs, potentially n threads in n different processes should be unblocked and start running to fetch the new data. The problem is that ManualResetEvent.Set followed by an immediate Reset does not appear to wake all waiting threads. The docs are pretty vague there:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682396(v=vs.85).aspx
When the state of a manual-reset event object is signaled, it remains
signaled until it is explicitly reset to nonsignaled by the ResetEvent
function. Any number of waiting threads, or threads that subsequently
begin wait operations for the specified event object, can be released
while the object's state is signaled.
There is a method called PulseEvent which seems to do exactly what I need, but unfortunately it is also flawed.
A thread waiting on a synchronization object can be momentarily
removed from the wait state by a kernel-mode APC, and then returned to
the wait state after the APC is complete. If the call to PulseEvent
occurs during the time when the thread has been removed from the wait
state, the thread will not be released because PulseEvent releases
only those threads that are waiting at the moment it is called.
Therefore, PulseEvent is unreliable and should not be used by new
applications. Instead, use condition variables.
Now MS does recommend using condition variables.
Condition variables are synchronization primitives that enable threads
to wait until a particular condition occurs. Condition variables are
user-mode objects that cannot be shared across processes.
Following the docs, I seem to have run out of luck doing this reliably. Is there an easy way to accomplish the same thing without the stated limitations using one ManualResetEvent, or do I need to create a response event for each listener process so that every subscribed caller can ACK? In that case I would need a small shared-memory region to register the PIDs of the subscribed processes, but that seems to bring in its own set of problems: what happens when one process crashes or does not respond? ....
To give some context: I have new state to publish which all other processes should read from a shared memory location. It is OK to miss one update when several updates occur at once, but each process must read at least the most recent value. I could poll with a timeout, but that does not seem like a correct solution.
Currently I am down to
ChangeEvent = new EventWaitHandle(false, EventResetMode.ManualReset, counterName + "_Event");
ChangeEvent.Set();
Thread.Sleep(1); // increase odds to release all waiters
ChangeEvent.Reset();

One general-purpose option for the case where producers must wake all consumers and the number of consumers changes over time is a moving-fence approach. This option requires a shared-memory IPC region too. The method does sometimes wake consumers when no work is present, especially when many processes need scheduling under high load, but they will always wake except on hopelessly overloaded machines.
Create several manual-reset events and have the producers maintain a counter indicating the next event that will be set. All events are left set except the NextToFire event. Consumer processes wait on the NextToFire event. When the producer wishes to wake all consumers, it resets the Next+1 event and sets the current event. All consumers will eventually be scheduled and will then wait on the new NextToFire event. The effect is that only the producer calls ResetEvent, but consumers always know which event will wake them next.
All Users Init: (pseudo code is C/C++, not C#)
// Create Shared Memory and initialise NextToFire;
pSharedMemory = MapMySharedMemory();
if (First to create memory) pSharedMemory->NextToFire = 0;
HANDLE Array[4];
Array[0] = CreateEvent(NULL, 1, 0, "Event1");
Array[1] = CreateEvent(NULL, 1, 0, "Event2");
Array[2] = CreateEvent(NULL, 1, 0, "Event3");
Array[3] = CreateEvent(NULL, 1, 0, "Event4");
Producer to wake all:
long CurrentNdx = pSharedMemory->NextToFire;
long NextNdx = (CurrentNdx+1) & 3;
// Reset next event so consumers block
ResetEvent(Array[NextNdx]);
// Flag to consumers new value
long Actual = InterlockedIncrement(&pSharedMemory->NextToFire) & 3;
// Next line needed if multiple producers active.
// Not a perfect solution
if (Actual != NextNdx) ResetEvent(Array[Actual]);
// Now wake them all up
SetEvent(Array[CurrentNdx & 3]);
Consumer wait logic:
long CurrentNdx = (pSharedMemory->NextToFire) & 3;
WaitForSingleObject(Array[CurrentNdx], Timeout);
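The same moving-fence idea can be sketched in C# with ManualResetEvent. The sketch below is a single-process version using unnamed events so the logic is easy to test; for the cross-process case you would create named events, as in the pseudocode above. The class name and API are invented for illustration, not taken from the original answer.

```csharp
using System;
using System.Threading;

class MovingFence
{
    // Ring of four manual-reset events; all stay set except the one
    // consumers will wait on next ("NextToFire").
    readonly ManualResetEvent[] _ring;
    int _nextToFire; // raw counter; the active index is (_nextToFire & 3)

    public MovingFence()
    {
        _ring = new ManualResetEvent[4];
        for (int i = 0; i < 4; i++)
            _ring[i] = new ManualResetEvent(initialState: i != 0);
    }

    // Consumer: block until the producer publishes. May return "spuriously"
    // if the fence moved between reading the index and waiting -- callers
    // should re-check the shared data either way.
    public bool WaitForWake(int timeoutMs)
    {
        int current = Volatile.Read(ref _nextToFire) & 3;
        return _ring[current].WaitOne(timeoutMs);
    }

    // Producer: arm the next event so future waiters block, advance the
    // fence, then release everyone parked on the old event.
    public void WakeAll()
    {
        int current = Volatile.Read(ref _nextToFire) & 3;
        int next = (current + 1) & 3;
        _ring[next].Reset();                    // future waiters block here
        Interlocked.Increment(ref _nextToFire); // advance the fence
        _ring[current].Set();                   // release the parked waiters
    }
}
```

Only the producer ever calls Reset, so the Set-then-immediate-Reset race from the question cannot strand a waiter: a consumer that misses one event simply blocks on the next event in the ring.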

Since .NET 4.0, you can use MemoryMappedFile to share memory between processes. In this case, write a counter to the MemoryMappedFile and decrement it from the worker processes. When the counter reaches zero, the main process is allowed to reset the event. Here is sample code.
Main Process
//number of WorkerProcess
int numWorkerProcess = 5;
//Create MemoryMappedFile object and accessor. Capacity 4 = sizeof(int).
MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test_mmf", 4);
MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor();
EventWaitHandle ChangeEvent = new EventWaitHandle(false, EventResetMode.ManualReset, counterName + "_Event");
//write counter to MemoryMappedFile
accessor.Write(0, numWorkerProcess);
//.....
ChangeEvent.Set();
//spin-wait until all worker processes have decremented the counter
SpinWait.SpinUntil(() =>
{
    int numLeft = accessor.ReadInt32(0);
    return (numLeft == 0);
});
ChangeEvent.Reset();
WorkerProcess
//Open the existing MemoryMappedFile object created by the main process.
MemoryMappedFile mmf = MemoryMappedFile.OpenExisting("test_mmf");
MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor();
//This mutex guards the counter decrement.
Mutex mutex = new Mutex(false, "test_mutex");
EventWaitHandle ChangeEvent = new EventWaitHandle(false, EventResetMode.ManualReset, "start_Event");
//....
ChangeEvent.WaitOne();
//some job...
//decrement counter with mutex lock.
mutex.WaitOne();
int count = accessor.ReadInt32(0);
--count;
accessor.Write(0, count);
mutex.ReleaseMutex();
/////////////////////////////////////
If your environment is older than .NET 4.0, you can achieve the same thing using the CreateFileMapping function from the Win32 API.

You wrote: “PulseEvent which seems to do exactly what I need but unfortunately it is also flawed”. It is true that PulseEvent is flawed, but I cannot agree that the manual-reset event is flawed. It is very reliable. There are cases where you can use manual-reset events and cases where you cannot. It is not one-size-fits-all. There are lots of other tools, like auto-reset events, pipes, etc.
The best way to merely notify a thread, if you need to notify it periodically but don't need to send data across processes, is an auto-reset event. You just need a separate event for each thread, so you have as many events as there are threads.
If you need to just send data to processes, it’s better to use named pipes. Unlike auto-reset events, you don't need a separate pipe for each of the processes. Each named pipe has a server and one or more clients. When there are many clients, many instances of the same named pipe are automatically created by the operating system for each of the clients. All instances of a named pipe share the same pipe name, but each instance has its own buffers and handles, and provides a separate conduit for client/server communication. The use of instances enables multiple pipe clients to use the same named pipe simultaneously. Any process can act as both a server for one pipe and a client for another pipe, and vice versa, making peer-to-peer communication possible.
If you use a named pipe, there will be no need for the events at all in your scenario, and the data will have guaranteed delivery no matter what happens to the processes: each of the processes may experience long delays (e.g. from swapping), but the data will eventually be delivered without any special involvement on your part.
One event for all threads (processes) is only OK if the notification happens only once. In that case, you need a manual-reset event, not an auto-reset one. For example, if you need to notify that your application will very soon exit, you may signal this common manual-reset event. But, as I wrote above, in your scenario named pipes are the best choice.
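To make the pipe approach concrete, here is a minimal sketch using System.IO.Pipes. The pipe name and the message text are made up for the example, and both ends run in one process here so the round trip can be demonstrated; in the real scenario each worker process would run the client side and block on the read until the next update arrives.

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class PipeNotifyDemo
{
    public static async Task<string?> RunOnce()
    {
        const string pipeName = "state_updates"; // hypothetical name

        using var server = new NamedPipeServerStream(pipeName, PipeDirection.Out);
        using var client = new NamedPipeClientStream(".", pipeName, PipeDirection.In);

        // Publisher side: wait for the subscriber, then push the update.
        Task publisher = Task.Run(async () =>
        {
            await server.WaitForConnectionAsync();
            using var writer = new StreamWriter(server) { AutoFlush = true };
            await writer.WriteLineAsync("new-state-42");
        });

        // Subscriber side: the read blocks until data arrives -- there is
        // no Set/Reset window to miss, and delivery is queued per connection.
        await client.ConnectAsync();
        using var reader = new StreamReader(client);
        string? update = await reader.ReadLineAsync();
        await publisher;
        return update;
    }
}
```

Because the OS gives each client its own instance of the pipe, the producer would hold one NamedPipeServerStream per subscriber; a slow or crashed subscriber only affects its own connection.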

Related

C# keep event handling thread alive in CPU friendly way

I would like to run 10 threads in parallel. Each thread contains code that handles serial port communication (Using the 'SerialPort' class). Some of the features are:
Code for handling the event that is raised when the RS232 device returns data.
Code for handling Timer events that are raised when the RS232 device does not return data within a predefined time frame.
As you can see, each thread handles some asynchronous events initialized and started from the thread itself. So the thread needs to keep itself alive until all events have been raised and processed. Based on the data received from the RS232 device, the thread knows when the work is done and can end itself.
Now my question: I would like to avoid using an infinite loop to keep the thread alive, so as not to waste CPU resources on nothing. Any idea how to do this while also avoiding that the thread blocks/stops itself?
The most efficient way to keep the 10 threads idle based on a condition is to use a WaitHandle.
The ManualResetEvent class allows you to simply signal when you want to continue execution. You can signal multiple threads with the same handle.
class Work
{
    public static void WorkMethod(object stateInfo)
    {
        Console.WriteLine("Work starting.");
        ((ManualResetEvent)stateInfo).WaitOne();
        Console.WriteLine("Work ending.");
    }
}
// Example:
ManualResetEvent manualEvent = new ManualResetEvent(false);
Thread newThread = new Thread(Work.WorkMethod);
newThread.Start(manualEvent);
// This will release all threads that are waiting on this handle:
manualEvent.Set();

Getting a slice of idle processing in managed component under unmanaged host

I have a managed component written in C#, which is hosted by a legacy Win32 app as an ActiveX control. Inside my component, I need to be able to get what normally would be Application.Idle event, i.e. obtain a time slice of the idle processing time on the UI thread (it has to be the main UI thread).
However in this hosted scenario, Application.Idle doesn't get fired, because there is no managed message loop (i.e., no Application.Run).
Sadly, the host also doesn't implement IMsoComponentManager, which might be suitable for what I need. And a lengthy nested message loop (with Application.DoEvents) is not an option for many good reasons.
So far, the only solution I can think of is to use plain Win32 timers.
According to this (now perished) MSKB article, WM_TIMER has one of the lowest priorities, followed only by WM_PAINT, which should get me as close to the idle as possible.
Am I missing any other options for this scenario?
Here is a prototype code:
// Do the idle work in the async loop
while (true)
{
    token.ThrowIfCancellationRequested();
    // yield via a low-priority WM_TIMER message
    await TimerYield(DELAY, token); // e.g., DELAY = 50ms
    // check if there is pending user input in the Windows message queue
    if (Win32.GetQueueStatus(Win32.QS_KEY | Win32.QS_MOUSE) >> 16 != 0)
        continue;
    // do the next piece of the idle work on the UI thread
    // ...
}
// ...
static async Task TimerYield(int delay, CancellationToken token)
{
    // All input messages are processed before WM_TIMER and WM_PAINT messages.
    // System.Windows.Forms.Timer uses WM_TIMER.
    // This could be further improved to re-use the timer object.
    var tcs = new TaskCompletionSource<bool>();
    using (var timer = new System.Windows.Forms.Timer())
    using (token.Register(() => tcs.TrySetCanceled(), useSynchronizationContext: true))
    {
        timer.Interval = delay;
        timer.Tick += (s, e) => tcs.TrySetResult(true);
        timer.Enabled = true;
        await tcs.Task;
        timer.Enabled = false;
    }
}
I don't think Task.Delay would be suitable for this approach, as it uses Kernel timer objects, which are independent of the message loop and its priorities.
Updated, I found one more option: WH_FOREGROUNDIDLE/ForegroundIdleProc. Looks exactly like what I need.
Updated, I also found that a Win32 timer trick is used by WPF for low-priority Dispatcher operations, i.e. Dispatcher.BeginInvoke(DispatcherPriority.Background, ...).
Well, WH_FOREGROUNDIDLE/ForegroundIdleProc hook is great. It behaves in a very similar way to Application.Idle: the hook gets called when the thread's message queue is empty, and the underlying message loop's GetMessage call is about to enter the blocking wait state.
However, I overlooked one important thing. As it turns out, the host app I'm dealing with has its own timers, and its UI thread pumps WM_TIMER messages constantly and quite frequently. I could have learned that if I had looked at it with Spy++ in the first place.
For ForegroundIdleProc (and for Application.Idle, for that matter), WM_TIMER is no different from any other message. The hook gets called after each new WM_TIMER has been dispatched and the queue has become empty again. That results in ForegroundIdleProc being called much more often than I really need.
Anyway, despite the alien timer messages, the ForegroundIdleProc callback still indicates there are no more user input messages in the thread's queue (i.e., keyboard and mouse are idle). Thus, I can start my idle work upon it and implement some throttling logic using async/await to keep the UI responsive. This is how it differs from my initial timer-based approach.

C# multithreading debugging

I am trying to build a multi-threaded server that is supposed to spawn a new thread for every incoming connection, but for all my effort it spawns new threads only when it feels like it. Can anybody help me debug this code? Am I missing something obvious?
while (true)
{
    if (txtAddress.Text.Trim() == "Any" || txtAddress.Text.Trim() == "any")
        ipEndP = new IPEndPoint(IPAddress.Any, 778);
    else
        ipEndP = new IPEndPoint(IPAddress.Parse(txtAddress.Text.Trim()), 778);
    tcpL = new TcpListener(ipEndP);
    tcpL.Server.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    tcpL.Start();
    tempSock = tcpL.AcceptSocket();
    //t = new Thread(ConnectionHandler); //When new client is connected, new thread
    //is created to handle the connection
    //t.Priority = ThreadPriority.Highest;
    //t.Start(tempSock);
    ThreadPool.QueueUserWorkItem(ConnectionHandler, tempSock);
}
Check out the MSDN docs for QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
Placing a work item in the user work item queue does not guarantee that it will begin executing right away. That's actually a good thing. If you got so many connections that you needed hundreds or thousands of threads, that could easily bring your server to its knees (and would certainly be very wasteful due to excessive context switching).
Your commented out code should kick off a new thread for every connection. Is that not working? If so, what exactly is not working? Note that creating a new thread for every connection is much more expensive than using the thread pool.
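Incidentally, the loop in the question creates, configures, and starts a brand-new TcpListener on every iteration. The usual shape creates and starts the listener once and only accepts inside the loop; a sketch of that is below (the echo handler body and names are illustrative, not from the question):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class EchoServer
{
    // Illustrative handler: echo one message back, then close the socket.
    static void ConnectionHandler(object? state)
    {
        using Socket socket = (Socket)state!;
        var buffer = new byte[1024];
        int read = socket.Receive(buffer);
        socket.Send(buffer, 0, read, SocketFlags.None);
    }

    // Accept loop: the caller creates and starts the listener ONCE;
    // only AcceptSocket runs per iteration.
    public static void Run(TcpListener listener)
    {
        try
        {
            while (true)
            {
                Socket socket = listener.AcceptSocket(); // blocks for a client
                ThreadPool.QueueUserWorkItem(ConnectionHandler, socket);
            }
        }
        catch (SocketException)
        {
            // listener.Stop() was called from another thread; shut down.
        }
    }
}
```

Catching SocketException around the accept loop gives a simple way to stop the server: calling listener.Stop() makes the pending AcceptSocket throw, ending the loop.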
UPDATE
Based on your remark that the commented-out code also fails to create the threads, I would add...
You are creating WAY too many threads if that happens.
Often I see people asking why they can't create more than around 2000 threads in a process. The reason is not that there is any particular limit inherent in Windows. Rather, the programmer failed to take into account the amount of address space each thread uses.
A thread consists of some memory in kernel mode (kernel stacks and object management), some memory in user mode (the thread environment block, thread-local storage, that sort of thing), plus its stack. (Or stacks if you're on an Itanium system.)
Usually, the limiting factor is the stack size.
http://blogs.msdn.com/b/oldnewthing/archive/2005/07/29/444912.aspx

Limiting the number of threadpool threads

I am using the ThreadPool in my application. I first set the limit of the thread pool using the following:
ThreadPool.SetMaxThreads(m_iThreadPoolLimit,m_iThreadPoolLimit);
m_Events = new ManualResetEvent(false);
and then I have queued up the jobs using the following
WaitCallback objWcb = new WaitCallback(abc);
ThreadPool.QueueUserWorkItem(objWcb, m_objThreadData);
Here abc is the name of the function that I am calling.
After this I do the following so that all my threads converge at one point and the main thread takes over and continues:
m_Events.WaitOne();
My thread limit is 3. The problem I am facing is that, in spite of the thread-pool limit being set to 3, my application processes more than 3 files at the same time, whereas it is supposed to process only 3 files at a time. Please help me solve this issue.
What kind of computer are you using?
From MSDN
You cannot set the number of worker threads or the number of I/O completion threads to a number smaller than the number of processors in the computer.
If you have 4 cores, then the smallest you can have is 4.
Also note:
If the common language runtime is hosted, for example by Internet Information Services (IIS) or SQL Server, the host can limit or prevent changes to the thread pool size.
If this is a web site hosted by IIS then you cannot change the thread pool size either.
A better solution involves the use of a Semaphore, which can throttle concurrent access to a resource1. In your case the resource would simply be a block of code that processes work items.
var finished = new CountdownEvent(1); // Used to wait for the completion of all work items.
var throttle = new Semaphore(3, 3); // Used to throttle the processing of work items.
foreach (WorkItem item in workitems)
{
    finished.AddCount();
    WorkItem capture = item; // Needed to safely capture the loop variable.
    ThreadPool.QueueUserWorkItem(
        (state) =>
        {
            throttle.WaitOne();
            try
            {
                ProcessWorkItem(capture);
            }
            finally
            {
                throttle.Release();
                finished.Signal();
            }
        }, null);
}
finished.Signal();
finished.Wait();
In the code above WorkItem is a hypothetical class that encapsulates the specific parameters needed to process your tasks.
The Task Parallel Library makes this pattern a lot easier. Just use the Parallel.ForEach method and specify ParallelOptions.MaxDegreeOfParallelism to throttle the concurrency.
var options = new ParallelOptions();
options.MaxDegreeOfParallelism = 3;
Parallel.ForEach(workitems, options,
    (item) =>
    {
        ProcessWorkItem(item);
    });
1I should point out that I do not like blocking ThreadPool threads with a Semaphore or any other blocking device. It basically wastes the threads. You might want to rethink your design entirely.
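One way to honor the footnote's warning without blocking pool threads is an asynchronous throttle built on SemaphoreSlim.WaitAsync, which suspends the caller as a continuation instead of parking a thread. This is a sketch of that alternative (the class and method names are mine, not from the answer):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

static class Throttled
{
    // Run `process` over `items` with at most `maxConcurrency` in flight.
    // WaitAsync suspends the loop without occupying a pool thread.
    public static async Task ProcessAllAsync<T>(
        IEnumerable<T> items, Func<T, Task> process, int maxConcurrency)
    {
        using var throttle = new SemaphoreSlim(maxConcurrency, maxConcurrency);
        var tasks = new List<Task>();
        foreach (T item in items)
        {
            await throttle.WaitAsync();          // asynchronous, non-blocking wait
            tasks.Add(Task.Run(async () =>
            {
                try { await process(item); }
                finally { throttle.Release(); }  // let the next item start
            }));
        }
        await Task.WhenAll(tasks);
    }
}
```

Unlike the Semaphore version above, no thread ever sits blocked in WaitOne; at most maxConcurrency work items plus the dispatching loop are active at any moment.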
You should use a Semaphore object to limit concurrent threads.
You say the files are open: are they actually being actively processed, or just left open?
If you're leaving them open: been there, done that! Relying on connections and resources (it was a DB connection in my case) to close at end of scope should work, but it can take a while for the dispose / garbage collection to kick in.

How to kill a specific thread from an array of threads

I am creating an array of threads based on the number of records in a database. Each thread then polls an IP address, sleeps for a time, and then polls again. I periodically check the database for any change in the number of hosts. If there are more hosts, I start another thread. If there are fewer hosts, I need to kill the specific thread that was monitoring that host. How do I kill a specific thread?
protected static void GetThreads()
{
    Thread[] threads;
    do
    {
        dt = getIP_Poll_status();
        threads = new Thread[dt.Rows.Count];
        Console.WriteLine(dt.Rows.Count + " Threads");
        for (int i = 0; i < threads.Length; ++i)
        {
            string ip = dt.Rows[i][0].ToString();
            int sleep = Convert.ToInt32(dt.Rows[i][1].ToString());
            string status = dt.Rows[i][2].ToString();
            string host = dt.Rows[i][3].ToString();
            Hosts.Add(host);
            string port = dt.Rows[i][4].ToString();
            //Console.WriteLine("starting on " + ip + " delay " + sleep + ". current status " + status);
            threads[i] = new Thread(PollingThreadStart);
            threads[i].Start(new MyThreadParameters(ip, sleep, status, host, port));
            threads[i].Name = host;
        }
        Thread.Sleep(50000);
    }
    while (true);
}
Killing threads forcibly is a bad idea. It can leave the system in an indeterminate state.
You should set a flag (in a thread-safe way) so that the thread will terminate itself appropriately next time it checks. See my threading article for more details and sample code.
I would add that using Sleep is almost always the wrong thing to do, by the way. You should use something which allows for a graceful wake-up, such as Monitor.Wait. That way when there are changes (e.g. the polling thread should die) something can wake the thread up if it's waiting, and it can notice the change immediately.
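A minimal sketch of that graceful wake-up, using a flag guarded by a lock and Monitor.Wait instead of Sleep (the class and member names are invented for illustration):

```csharp
using System;
using System.Threading;

class StoppablePoller
{
    readonly object _gate = new object();
    bool _stopRequested;

    // Called from another thread: wakes the poller immediately instead of
    // waiting for a Sleep to elapse.
    public void Stop()
    {
        lock (_gate)
        {
            _stopRequested = true;
            Monitor.PulseAll(_gate);
        }
    }

    // Poll repeatedly; pause between polls with Monitor.Wait so that Stop()
    // can interrupt the pause at any moment.
    public void Run(Action poll, int intervalMs)
    {
        while (true)
        {
            poll();
            lock (_gate)
            {
                if (_stopRequested) return;
                Monitor.Wait(_gate, intervalMs); // wakes early on PulseAll
                if (_stopRequested) return;
            }
        }
    }
}
```

The key difference from Thread.Sleep: a stop request takes effect immediately, even in the middle of a long polling interval, because PulseAll ends the Wait.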
Given that most of your threads will spend the majority of their time doing nothing, your design might be better realised as a single thread that keeps a list of ip addresses and the time they're due to be polled next. Keep it sorted in order of next poll time.
Pseudocode:
What time does the next ip address need to be polled?
Sleep till then
Poll the address.
Update the poll time for that address to now + interval.
Resort the list
Repeat.
Whenever you have a DB update, update the list and then signal the thread so it re-evaluates how long it needs to sleep.
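The steps above can be sketched as a small single-threaded scheduler in C#. Monitor.Wait both times the sleep and lets an update (AddHost or Stop) interrupt it; the class and member names are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class PollScheduler
{
    class Entry
    {
        public string Host = "";
        public TimeSpan Interval;
        public DateTime Due;
    }

    readonly object _gate = new object();
    readonly List<Entry> _entries = new List<Entry>();
    bool _stopped;

    public void AddHost(string host, TimeSpan interval)
    {
        lock (_gate)
        {
            _entries.Add(new Entry { Host = host, Interval = interval, Due = DateTime.UtcNow });
            Monitor.PulseAll(_gate); // wake the loop to re-evaluate its sleep
        }
    }

    public void Stop()
    {
        lock (_gate) { _stopped = true; Monitor.PulseAll(_gate); }
    }

    // A single thread services every host: sleep until the earliest due
    // time, poll that host, push its due time forward, repeat.
    public void Run(Action<string> poll)
    {
        while (true)
        {
            string host;
            lock (_gate)
            {
                while (!_stopped)
                {
                    Entry? due = _entries.OrderBy(e => e.Due).FirstOrDefault();
                    TimeSpan wait = due == null
                        ? Timeout.InfiniteTimeSpan
                        : due.Due - DateTime.UtcNow;
                    if (due != null && wait <= TimeSpan.Zero) break;
                    Monitor.Wait(_gate, wait); // woken early by AddHost/Stop
                }
                if (_stopped) return;
                Entry next = _entries.OrderBy(e => e.Due).First();
                next.Due = DateTime.UtcNow + next.Interval;
                host = next.Host;
            }
            poll(host); // do the actual network work outside the lock
        }
    }
}
```

Removing a host from the database then just means removing its entry from the list under the lock and pulsing, with no thread to kill at all.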
You don't specify the language you are targeting, but in general you use the same method regardless of the language. Simply use a shared variable to signal the thread when it is time to stop running. The thread should periodically check the value, and if it is set, stop in a graceful fashion. Typically, this is the only safe method to stop a thread.
I would say:
Don't kill threads. Ask them to die, nicely (vie Events or some shared flag).
Be careful when creating exactly one thread per DB entry. This could mean that some unexpected DB activity where suddenly you have many rows translates into killing the OS with too many threads. Definitely have a limit on the number of threads.
You could send an interrupt signal to the thread you want to end. The signalled thread would need to catch the ThreadInterruptedException that is raised and exit gracefully.
There are other, possibly better, ways to achieve what you want, but they are more complicated...
What you are trying to do is a bad idea for the reasons mentioned above. However, you can still give each thread a name, such as the host name; you can then find the thread by its name and kill it.
