ZeroMQ PUB/SUB Pattern with Multi-Threaded Poller Cancellation - C#

I have two applications: a C++ server and a C# WPF UI. The C++ code takes requests (from anywhere/anyone) via a ZeroMQ [PUB/SUB] messaging service. I use my C# code to create "back tests" and execute them. These back tests can be made up of many "unit tests", each of which sends/receives thousands of messages to/from the C++ server.
Currently, individual back tests work well: each can send off N unit tests, each with thousands of requests and captures. My problem is architectural: when I dispatch another back test (following the first), the event subscription happens a second time because the polling thread was never cancelled and disposed, and this results in erroneous output. This may seem like a trivial problem (perhaps it is for some of you), but cancelling this polling Task under my current configuration is proving troublesome. Some code...
My message broker class is simple and looks like this:
public class MessageBroker : IMessageBroker<Taurus.FeedMux>, IDisposable
{
    private Task pollingTask;
    private NetMQContext context;
    private PublisherSocket pubSocket;
    private CancellationTokenSource source;
    private CancellationToken token;
    private ManualResetEvent pollerCancelled;

    public MessageBroker()
    {
        this.source = new CancellationTokenSource();
        this.token = source.Token;
        StartPolling();
        context = NetMQContext.Create();
        pubSocket = context.CreatePublisherSocket();
        pubSocket.Connect(PublisherAddress);
    }

    public void Dispatch(Taurus.FeedMux message)
    {
        pubSocket.Send(message.ToByteArray<Taurus.FeedMux>());
    }

    private void StartPolling()
    {
        pollerCancelled = new ManualResetEvent(false);
        pollingTask = Task.Run(() =>
        {
            try
            {
                using (var context = NetMQContext.Create())
                using (var subSocket = context.CreateSubscriberSocket())
                {
                    byte[] buffer = null;
                    subSocket.Options.ReceiveHighWatermark = 1000;
                    subSocket.Connect(SubscriberAddress);
                    subSocket.Subscribe(String.Empty);
                    while (true)
                    {
                        buffer = subSocket.Receive();
                        MessageRecieved.Report(buffer.ToObject<Taurus.FeedMux>());
                        if (this.token.IsCancellationRequested)
                            this.token.ThrowIfCancellationRequested();
                    }
                }
            }
            catch (OperationCanceledException)
            {
                pollerCancelled.Set();
            }
        }, this.token);
    }

    private void CancelPolling()
    {
        source.Cancel();
        pollerCancelled.WaitOne();
        pollerCancelled.Close();
    }

    public IProgress<Taurus.FeedMux> MessageRecieved { get; set; }
    public string PublisherAddress { get { return "tcp://127.X.X.X:6500"; } }
    public string SubscriberAddress { get { return "tcp://127.X.X.X:6501"; } }

    private bool disposed = false;

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                if (this.pollingTask != null)
                {
                    CancelPolling();
                    if (this.pollingTask.Status == TaskStatus.RanToCompletion ||
                        this.pollingTask.Status == TaskStatus.Faulted ||
                        this.pollingTask.Status == TaskStatus.Canceled)
                    {
                        this.pollingTask.Dispose();
                        this.pollingTask = null;
                    }
                }
                if (this.context != null)
                {
                    this.context.Dispose();
                    this.context = null;
                }
                if (this.pubSocket != null)
                {
                    this.pubSocket.Dispose();
                    this.pubSocket = null;
                }
                if (this.source != null)
                {
                    this.source.Dispose();
                    this.source = null;
                }
            }
            disposed = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~MessageBroker()
    {
        Dispose(false);
    }
}
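The ToByteArray<T>/ToObject<T> calls above are serialization extension methods not shown here; with protobuf-net (an assumption about the serializer in use) they would look roughly like this:
// Sketch of the serialization helpers the broker assumes; protobuf-net
// (ProtoBuf.Serializer) is a guess at the actual serializer.
public static class SerializationExtensions
{
    public static byte[] ToByteArray<T>(this T message)
    {
        using (var stream = new System.IO.MemoryStream())
        {
            ProtoBuf.Serializer.Serialize(stream, message);
            return stream.ToArray();
        }
    }

    public static T ToObject<T>(this byte[] bytes)
    {
        using (var stream = new System.IO.MemoryStream(bytes))
        {
            return ProtoBuf.Serializer.Deserialize<T>(stream);
        }
    }
}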
The back-testing "engine" used to execute each back test first constructs a Dictionary containing each Test (unit test) and the messages to dispatch to the C++ application for each test.
Here is the DispatchTests method:
private void DispatchTests(ConcurrentDictionary<Test, List<Taurus.FeedMux>> feedMuxCollection)
{
    broker = new MessageBroker();
    broker.MessageRecieved = new Progress<Taurus.FeedMux>(OnMessageRecieved);
    testCompleted = new ManualResetEvent(false);
    try
    {
        // Loop through the tests.
        foreach (var kvp in feedMuxCollection)
        {
            testCompleted.Reset();
            Test t = kvp.Key;
            t.Bets = new List<Taurus.Bet>();
            foreach (Taurus.FeedMux mux in kvp.Value)
            {
                token.ThrowIfCancellationRequested();
                broker.Dispatch(mux);
            }
            broker.Dispatch(new Taurus.FeedMux()
            {
                type = Taurus.FeedMux.Type.PING,
                ping = new Taurus.Ping() { event_id = t.EventID }
            });
            testCompleted.WaitOne(); // Wait until all messages are received for this test.
        }
        testCompleted.Close();
    }
    finally
    {
        broker.Dispose(); // Dispose the broker.
    }
}
The PING message at the end is to tell the C++ side that we are finished. We then force a wait, so that the next [unit] test is not dispatched before all of the returns are received from the C++ code; we do this using a ManualResetEvent.
When the C++ side receives the PING message, it sends the message straight back. We handle the received messages via OnMessageRecieved, and the PING tells us to call ManualResetEvent.Set() so that we can continue the unit testing; "next please"...
private async void OnMessageRecieved(Taurus.FeedMux mux)
{
    string errorMsg = String.Empty;
    if (mux.type == Taurus.FeedMux.Type.MSG)
    {
        // Do stuff.
    }
    else if (mux.type == Taurus.FeedMux.Type.PING)
    {
        // Do stuff.
        // We are finished receiving messages for this "unit test".
        testCompleted.Set();
    }
}
My problem is that broker.Dispose() in the finally block above is never hit. I appreciate that finally blocks executed on background threads are not guaranteed to get executed.
The crossed-out text above was due to me messing about with the code; I was stopping a parent thread before the child had completed. However, there are still problems...
Now broker.Dispose() is called correctly; in this method I attempt to cancel the poller thread and dispose of the Task correctly, to avoid any multiple subscriptions.
To cancel the thread I use the CancelPolling() method:
private void CancelPolling()
{
    source.Cancel();
    pollerCancelled.WaitOne(); // <- Blocks here waiting for cancellation.
    pollerCancelled.Close();
}
but in the StartPolling() method
while (true)
{
    buffer = subSocket.Receive();
    MessageRecieved.Report(buffer.ToObject<Taurus.FeedMux>());
    if (this.token.IsCancellationRequested)
        this.token.ThrowIfCancellationRequested();
}
ThrowIfCancellationRequested() is never called and the thread is never cancelled, thus never properly disposed. The poller thread is being blocked by the subSocket.Receive() method.
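A timed receive would let the loop observe the token between attempts; newer NetMQ builds expose TryReceiveFrameBytes for exactly this (a minimal sketch, assuming that API is available, which the NetMQContext-era version above may not have):
// Sketch: check the cancellation token at least every 100 ms instead of
// blocking indefinitely in Receive(). TryReceiveFrameBytes is the newer
// NetMQ API; older versions may differ.
while (true)
{
    byte[] buffer;
    if (subSocket.TryReceiveFrameBytes(TimeSpan.FromMilliseconds(100), out buffer))
        MessageRecieved.Report(buffer.ToObject<Taurus.FeedMux>());
    this.token.ThrowIfCancellationRequested();
}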
Even so, it is not clear to me how to achieve what I want: I need to invoke broker.Dispose()/CancelPolling() on a thread other than the one used to poll for messages, and somehow force the cancellation. Thread abort is not what I want to get into at any cost.
Essentially, I want to properly dispose of the broker before executing the next back test. How do I correctly handle this? Should I split out the polling and run it in a separate Application Domain?
I have tried disposing inside the OnMessageRecieved handler, but this is clearly executed on the same thread as the poller and is not the way to do this; without invoking additional threads, it blocks.
What is the best way to achieve what I want and is there a pattern for this sort of case that I can follow?
Thanks for your time.

This is how I eventually got around this [although I am open to a better solution!]
public class FeedMuxMessageBroker : IMessageBroker<Taurus.FeedMux>, IDisposable
{
    // Vars.
    private NetMQContext context;
    private PublisherSocket pubSocket;
    private Poller poller;
    private CancellationTokenSource source;
    private CancellationToken token;
    private ManualResetEvent pollerCancelled;

    /// <summary>
    /// Default ctor.
    /// </summary>
    public FeedMuxMessageBroker()
    {
        context = NetMQContext.Create();
        pubSocket = context.CreatePublisherSocket();
        pubSocket.Connect(PublisherAddress);
        pollerCancelled = new ManualResetEvent(false);
        source = new CancellationTokenSource();
        token = source.Token;
        StartPolling();
    }

    #region Methods.
    /// <summary>
    /// Send the mux message to listeners.
    /// </summary>
    /// <param name="message">The message to dispatch.</param>
    public void Dispatch(Taurus.FeedMux message)
    {
        pubSocket.Send(message.ToByteArray<Taurus.FeedMux>());
    }

    /// <summary>
    /// Start polling for messages.
    /// </summary>
    private void StartPolling()
    {
        Task.Run(() =>
        {
            using (var subSocket = context.CreateSubscriberSocket())
            {
                byte[] buffer = null;
                subSocket.Options.ReceiveHighWatermark = 1000;
                subSocket.Connect(SubscriberAddress);
                subSocket.Subscribe(String.Empty);
                subSocket.ReceiveReady += (s, a) =>
                {
                    buffer = subSocket.Receive();
                    if (MessageRecieved != null)
                        MessageRecieved.Report(buffer.ToObject<Taurus.FeedMux>());
                };
                // Poll.
                poller = new Poller();
                poller.AddSocket(subSocket);
                poller.PollTillCancelled();
                token.ThrowIfCancellationRequested();
            }
        }, token).ContinueWith(ant =>
        {
            pollerCancelled.Set();
        }, TaskContinuationOptions.OnlyOnCanceled);
    }

    /// <summary>
    /// Cancel polling to allow the broker to be disposed.
    /// </summary>
    private void CancelPolling()
    {
        source.Cancel();
        poller.Cancel();
        pollerCancelled.WaitOne();
        pollerCancelled.Close();
    }
    #endregion // Methods.

    #region Properties.
    /// <summary>
    /// Event that is raised when a message is received.
    /// </summary>
    public IProgress<Taurus.FeedMux> MessageRecieved { get; set; }

    /// <summary>
    /// The address to use for the publisher socket.
    /// </summary>
    public string PublisherAddress { get { return "tcp://127.0.0.1:6500"; } }

    /// <summary>
    /// The address to use for the subscriber socket.
    /// </summary>
    public string SubscriberAddress { get { return "tcp://127.0.0.1:6501"; } }
    #endregion // Properties.

    #region IDisposable Members.
    private bool disposed = false;

    /// <summary>
    /// Dispose managed resources.
    /// </summary>
    /// <param name="disposing">Is disposing.</param>
    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                CancelPolling();
                if (pubSocket != null)
                {
                    pubSocket.Disconnect(PublisherAddress);
                    pubSocket.Dispose();
                    pubSocket = null;
                }
                if (poller != null)
                {
                    poller.Dispose();
                    poller = null;
                }
                if (context != null)
                {
                    context.Terminate();
                    context.Dispose();
                    context = null;
                }
                if (source != null)
                {
                    source.Dispose();
                    source = null;
                }
            }
            // Shared cleanup logic.
            disposed = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    /// <summary>
    /// Finalizer.
    /// </summary>
    ~FeedMuxMessageBroker()
    {
        Dispose(false);
    }
    #endregion // IDisposable Members.
}
So we poll in the same way, but using the Poller class from NetMQ. In the Task continuation we set the ManualResetEvent, so we can be sure that both the Poller and the Task are cancelled. We are then safe to dispose...
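For completeness, the engine then owns one broker per back-test run and disposes it before the next run starts. A minimal usage sketch, reusing the shapes from DispatchTests above:
// Sketch: one broker per back test, disposed before the next test runs,
// so the subscriber/poller cannot be subscribed twice.
using (var broker = new FeedMuxMessageBroker())
{
    broker.MessageRecieved = new Progress<Taurus.FeedMux>(OnMessageRecieved);
    foreach (var kvp in feedMuxCollection)
    {
        testCompleted.Reset();
        foreach (Taurus.FeedMux mux in kvp.Value)
            broker.Dispatch(mux);
        // PING tells the C++ side this test's messages are all sent.
        broker.Dispatch(new Taurus.FeedMux()
        {
            type = Taurus.FeedMux.Type.PING,
            ping = new Taurus.Ping() { event_id = kvp.Key.EventID }
        });
        testCompleted.WaitOne();
    }
} // Dispose cancels the poller, so the next run cannot double-subscribe.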

A higher-level view on subject
Your focus and efforts, dedicated to creating a testing framework, signal an aim at a rigorous and professional-grade approach, which made me first raise my hat in a salute of admiration to such a brave undertaking.
While testing is an important activity for providing reasonable quantitative evidence that a System Under Test meets defined expectations, success depends on how closely the testing environment matches the real deployment's conditions.
One may agree that testing on a different basis does not prove that the real deployment will run as expected in an environment principally different from the tested one(s).
Element-wise control or just state-wise control, that's the question.
Your efforts (at least at the time the OP was posted) concentrate on a code architecture that tries to keep instances in place and to reset the internal state of a Poller instance before the next test battery starts.
In my view, testing has a few principles to follow, should you strive for professional testing:
Principle of a Test Repeatability ( tests' re-runs shall serve same results, thus avoiding a quasi-testing that provides just a result-"lottery" )
Principle of Non-Intervening Testing ( tests' re-runs shall not be subject of "external"-interference, not controlled by the test scenario )
Having said this, let me bring a few notes inspired by Harry Markowitz, the Nobel laureate awarded for his remarkable quantitative portfolio optimisation studies.
Rather move one step back to get control over elements' full life-cycle
CACI Simulations, Inc. (one of Harry Markowitz's companies) developed in the early 90s their flagship software framework COMET III - an exceptionally powerful simulation engine for large, complex design-prototyping and performance-simulation of processes operated in large-scale computing / networking / telco networks.
The greatest impression from COMET III was its capability to generate testing scenarios including configurable pre-test "warm-up" pre-load(s), which made the tested elements get into a state similar to what "fatigue" means in mechanical torture-test experiments, or what hydrogen-diffusion fragility means to nuclear power-plant metallurgists.
Yes, once you go into low-level details on how algorithms, node-buffers, memory-allocations, pipelined / load-balanced / grid-processing architecture selections, fault-resilience overheads, garbage collection policies and limited resource-sharing algorithms work and impact (under real-use work-load "pressure") end-to-end performance / latencies, this feature is simply indispensable.
This means that an individual, instance-related, simple state-wise control is not sufficient, as it does not provide the means for either test repeatability or test isolation/non-intervening behaviour. Simply put, even if you find a way to "reset" a Poller instance, this will not get you into realistic testing with guaranteed test repeatability and possible pre-test warm-up(s).
A step back and a higher layer of abstraction and test-scenario controls are needed.
How does this apply to the OP's problem?
Instead of just state-control:
Create a multi-layered architecture / control-plane(s) / separate signalling
A ZeroMQ way of supporting this goal
Create super-structures as non-trivial patterns
Use full life-cycle controls of instances used inside testing scenarios
Keep ZeroMQ-maxims: Zero-sharing, Zero-blocking, ...
Benefit from Multi-Context()
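To make the last two maxims concrete in this question's own terms: give each test battery its own, freshly constructed broker (and thus its own context and sockets), so state is never reset, only rebuilt. A minimal sketch (the testBatteries collection and its Run method are assumptions):
// Sketch: full life-cycle control - one fresh broker (context, sockets,
// poller) per test battery; nothing is shared or carried over between runs.
foreach (var battery in testBatteries) // hypothetical collection of test batteries
{
    using (var broker = new FeedMuxMessageBroker())
    {
        battery.Run(broker); // hypothetical: dispatch messages and await results
    } // disposal cancels the poller, closes sockets and terminates the context
}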

Related

Create UserControl in non-UI thread Silverlight 5 browser application

I have a Silverlight 5 browser application.
There is a class
public class ActivityControl : UserControl
{
    public void LoadSubControls()
    {
        // Creates other UserControls, does calculations and is very slow.. No refactoring..
    }
}
I need to create multiple instances of this class and call the method LoadSubControls on runtime.
public class BasicContainer : UserControl
{
    public void CreateMultipleActivityControls()
    {
        for (int i = 0; i < 1000; i++)
        {
            // I need to call this on a different thread, but it causes an invalid cross-thread exception.
            ActivityControl c = new ActivityControl();
            c.LoadSubControls();
        }
    }
}
Is there any way to create multiple UI Threads in order to avoid invalid cross thread exception?
I need multithreading for performance reasons and because the method call is very slow and the UI freezes.
Is there any way to call the method SetSynchronizationContext (which is [SecurityCritical]) in Silverlight?
There is no avoiding creating these controls on the UI thread, but you could take advantage of System.Threading.Tasks.Task from the Task Parallel Library (TPL) to allow for asynchronous operations.
I've been able to do something like this in Silverlight 5 with a structure like the following; I got the original idea looking at the source for Caliburn.Micro.
The following is a subset that applies to what you want.
public interface IPlatformProvider
{
    /// <summary>
    /// Executes the action on the UI thread asynchronously.
    /// </summary>
    /// <param name="action">The action to execute.</param>
    System.Threading.Tasks.Task OnUIThreadAsync(Action action);
}
Here is the implementation.
/// <summary>
/// A <see cref="IPlatformProvider"/> implementation for the XAML platform (Silverlight).
/// </summary>
public class XamlPlatformProvider : IPlatformProvider
{
    private Dispatcher dispatcher;

    public XamlPlatformProvider()
    {
        dispatcher = System.Windows.Deployment.Current.Dispatcher;
    }

    private void validateDispatcher()
    {
        if (dispatcher == null)
            throw new InvalidOperationException("Not initialized with dispatcher.");
    }

    /// <summary>
    /// Executes the action on the UI thread asynchronously.
    /// </summary>
    /// <param name="action">The action to execute.</param>
    public Task OnUIThreadAsync(System.Action action)
    {
        validateDispatcher();
        var taskSource = new TaskCompletionSource<object>();
        System.Action method = () =>
        {
            try
            {
                action();
                taskSource.SetResult(null);
            }
            catch (Exception ex)
            {
                taskSource.SetException(ex);
            }
        };
        dispatcher.BeginInvoke(method);
        return taskSource.Task;
    }
}
You could either go down the constructor DI route to pass in the provider or use a static locator pattern like this.
/// <summary>
/// Access the current <see cref="IPlatformProvider"/>.
/// </summary>
public static class PlatformProvider
{
    private static IPlatformProvider current = new XamlPlatformProvider();

    /// <summary>
    /// Gets or sets the current <see cref="IPlatformProvider"/>.
    /// </summary>
    public static IPlatformProvider Current
    {
        get { return current; }
        set { current = value; }
    }
}
Now you should be able to make your calls without blocking the main thread and freezing the UI
public class BasicContainer : UserControl
{
    public async Task CreateMultipleActivityControls()
    {
        var platform = PlatformProvider.Current;
        for (var i = 0; i < 1000; i++)
        {
            await platform.OnUIThreadAsync(() =>
            {
                var c = new ActivityControl();
                c.LoadSubControls();
            });
        }
    }
}
If making multiple calls to the dispatcher causes any performance issues, you could instead move the entire process into one async call.
public class BasicContainer : UserControl
{
    public async Task CreateMultipleActivityControls()
    {
        var platform = PlatformProvider.Current;
        await platform.OnUIThreadAsync(() =>
        {
            for (var i = 0; i < 1000; i++)
            {
                var c = new ActivityControl();
                c.LoadSubControls();
            }
        });
    }
}

Processing data by multiple threads simultaneously

We have an application that regularly receives multimedia messages, and should reply to them.
We currently do this with a single thread, first receiving messages, and then processing them one by one. This does the job, but is slow.
So we're now thinking of doing the same process but with multiple threads simultaneously.
Any simple way to allow parallel processing of the incoming records, yet avoid erroneously processing the same record by two threads?
Any simple way to allow parallel processing of the incoming records, yet avoid erroneously processing the same record by two threads?
Yes, it is actually not too hard; what you want to do is called the "producer-consumer model".
If your message receiver can only handle one thread at a time, but your message "processor" can work on multiple messages at once, you just need to use a BlockingCollection to store the work that needs to be processed.
public sealed class MessageProcessor : IDisposable
{
    public MessageProcessor()
        : this(-1)
    {
    }

    public MessageProcessor(int maxThreadsForProcessing)
    {
        _maxThreadsForProcessing = maxThreadsForProcessing;
        _messages = new BlockingCollection<Message>();
        _cts = new CancellationTokenSource();
        _messageProcessorThread = new Thread(ProcessMessages);
        _messageProcessorThread.IsBackground = true;
        _messageProcessorThread.Name = "Message Processor Thread";
        _messageProcessorThread.Start();
    }

    public int MaxThreadsForProcessing
    {
        get { return _maxThreadsForProcessing; }
    }

    private readonly BlockingCollection<Message> _messages;
    private readonly CancellationTokenSource _cts;
    private readonly Thread _messageProcessorThread;
    private bool _disposed = false;
    private readonly int _maxThreadsForProcessing;

    /// <summary>
    /// Add a new message to be queued up and processed in the background.
    /// </summary>
    public void ReceiveMessage(Message message)
    {
        _messages.Add(message);
    }

    /// <summary>
    /// Signals the system to stop processing messages.
    /// </summary>
    /// <param name="finishQueue">Should the queue of messages waiting to be processed be allowed to finish</param>
    public void Stop(bool finishQueue)
    {
        _messages.CompleteAdding();
        if (!finishQueue)
            _cts.Cancel();
        // Wait for the message processor thread to finish its work.
        _messageProcessorThread.Join();
    }

    /// <summary>
    /// The background thread that processes messages in the system
    /// </summary>
    private void ProcessMessages()
    {
        try
        {
            Parallel.ForEach(_messages.GetConsumingEnumerable(),
                new ParallelOptions()
                {
                    CancellationToken = _cts.Token,
                    MaxDegreeOfParallelism = MaxThreadsForProcessing
                },
                ProcessMessage);
        }
        catch (OperationCanceledException)
        {
            // Don't care that it happened, just don't want it to bubble up as an unhandled exception.
        }
    }

    private void ProcessMessage(Message message, ParallelLoopState loopState)
    {
        // Here be dragons! (or your code to process a message, your choice :-))
        // Use if(_cts.Token.IsCancellationRequested || loopState.ShouldExitCurrentIteration) to test if
        // we should quit out of the function early for a graceful shutdown.
    }

    public void Dispose()
    {
        if (!_disposed)
        {
            if (_cts != null && _messages != null && _messageProcessorThread != null)
                Stop(true); // This line will block till all queued messages have been processed; if you want it to be quicker, call Stop(false) before you dispose the object.
            if (_cts != null)
                _cts.Dispose();
            if (_messages != null)
                _messages.Dispose();
            GC.SuppressFinalize(this);
            _disposed = true;
        }
    }

    ~MessageProcessor()
    {
        // Nothing to do, just making FxCop happy.
    }
}
I highly recommend you read the free book Patterns for Parallel Programming; it goes into some detail about this. There is an entire section explaining the producer-consumer model in detail.
UPDATE: There are some performance issues with GetConsumingEnumerable() and Parallel.ForEach(); instead, use the ParallelExtensionsExtras library and its extension method GetConsumingPartitioner().
public static Partitioner<T> GetConsumingPartitioner<T>(
    this BlockingCollection<T> collection)
{
    return new BlockingCollectionPartitioner<T>(collection);
}

private class BlockingCollectionPartitioner<T> : Partitioner<T>
{
    private BlockingCollection<T> _collection;

    internal BlockingCollectionPartitioner(
        BlockingCollection<T> collection)
    {
        if (collection == null)
            throw new ArgumentNullException("collection");
        _collection = collection;
    }

    public override bool SupportsDynamicPartitions
    {
        get { return true; }
    }

    public override IList<IEnumerator<T>> GetPartitions(
        int partitionCount)
    {
        if (partitionCount < 1)
            throw new ArgumentOutOfRangeException("partitionCount");
        var dynamicPartitioner = GetDynamicPartitions();
        return Enumerable.Range(0, partitionCount).Select(_ =>
            dynamicPartitioner.GetEnumerator()).ToArray();
    }

    public override IEnumerable<T> GetDynamicPartitions()
    {
        return _collection.GetConsumingEnumerable();
    }
}
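Switching ProcessMessages over is then a one-line change (a sketch based on the classes above):
// Sketch: feed Parallel.ForEach through the consuming partitioner instead
// of GetConsumingEnumerable(), avoiding the default chunking behaviour.
Parallel.ForEach(_messages.GetConsumingPartitioner(),
    new ParallelOptions()
    {
        CancellationToken = _cts.Token,
        MaxDegreeOfParallelism = MaxThreadsForProcessing
    },
    ProcessMessage);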

In a C# 'using' block, how best to access the IDisposable in contained extension method calls?

I am writing extension methods for a class, and would like to access an IDisposable object defined in a using block which will often contain calls to the extension methods.
I do not want to simply pass the IDisposable to the method calls, which would detract from the simplicity of my API's programming model. Accomplishing what I'm after would also make the code work much more like the third-party API with which I'm integrating.
I can imagine one way to go about this: register the IDisposable in some global location, perhaps tied to the current thread ID so it can be looked up in the extension methods via a factory method call or some such thing. The object could unregister itself when the using block is exited and its Dispose() method is eventually called (to make this work I imagine I might need to use a weak reference, though).
That doesn't seem too unclean, but it is a little too roundabout for my taste. Is there some more direct way of doing this?
Here's what I'd like to do:
public static class ExtensionMethods
{
    public static void Foo(this Bar b)
    {
        // Access t to enable this extension method to do its work, whatever that may be
    }
}

public class Bar
{
}

public class Schlemazel
{
    public void DoSomething()
    {
        using (Thingamabob t = new Thingamabob())
        {
            Bar b = new Bar();
            b.Foo();
        }
    }
}
EDIT:
Following is a solution implemented using weak references and a simple thread-based registration system. It seems to work and to be stable even under a fair load, but of course on a really overloaded system it could theoretically start throwing errors due to lock contention.
I thought it might be interesting for someone to see this solution, but again, it introduces needless complexity and I am only willing to do this if necessary. Again, the goal is a clean extension of a third-party API, where I can call extension methods on objects created by the third-party API, where the extension methods depend on some context that is messy to create or get for each little extension method call.
I've left in some console output statements so that if you're curious, you can actually plop these classes into a command-line project and see it all in action.
public class Context : IDisposable
{
    private const int MAX_LOCK_TRIES = 3;
    private static TimeSpan MAX_WRITE_LOCK_TIMEOUT = TimeSpan.FromTicks(500);
    private static System.Threading.ReaderWriterLockSlim readerWriterLock = new System.Threading.ReaderWriterLockSlim();
    static IDictionary<string, WeakReference<Context>> threadContexts = new Dictionary<string, WeakReference<Context>>();

    private bool registered;
    private string threadID;

    private string ThreadID
    {
        get { return threadID; }
        set
        {
            if (threadID != null)
                throw new InvalidOperationException("Cannot associate this context with more than one thread");
            threadID = value;
        }
    }

    /// <summary>
    /// Constructs a Context suitable for use in a using() statement
    /// </summary>
    /// <returns>A Context which will automatically deregister itself when it goes out of scope, i.e. at the end of a using block</returns>
    public static Context CreateContext()
    {
        Console.WriteLine("CreateContext()");
        return new Context(true);
    }

    private Context(bool register)
    {
        if (register)
        {
            registered = true;
            try
            {
                RegisterContext(this);
            }
            catch
            {
                registered = false;
            }
        }
        else
            registered = false;
    }

    public Context()
    {
        registered = false;
    }

    public void Process(ThirdPartyObject o, params string[] arguments)
    {
        Console.WriteLine("Context.Process(o)");
        // Process o, sometimes using the third-party API which this object has access to.
        // This hides away the complexity of accessing that API, including obviating the need
        // to reconstruct and configure heavyweight objects to access it; calling code can
        // blithely call useful methods on individual objects without knowing the messy details.
    }

    public void Dispose()
    {
        if (registered)
            DeregisterContext(this);
    }

    private static void RegisterContext(Context c)
    {
        if (c == null)
            throw new ArgumentNullException();
        c.ThreadID = System.Threading.Thread.CurrentThread.ManagedThreadId.ToString();
        Console.WriteLine("RegisterContext() " + c.ThreadID);
        bool lockEntered = false;
        int tryCount = 0;
        try
        {
            while (!readerWriterLock.TryEnterWriteLock(TimeSpan.FromTicks(5000)))
                if (++tryCount > MAX_LOCK_TRIES)
                    throw new OperationCanceledException("Cannot register context (timeout)");
            lockEntered = true;
            threadContexts[c.ThreadID] = new WeakReference<Context>(c);
        }
        finally
        {
            if (lockEntered)
                readerWriterLock.ExitWriteLock();
        }
    }

    private static void DeregisterContext(Context c)
    {
        if (c == null)
            throw new ArgumentNullException();
        else if (!c.registered)
            return;
        Console.WriteLine("DeregisterContext() " + c.ThreadID);
        bool lockEntered = false;
        int tryCount = 0;
        try
        {
            while (!readerWriterLock.TryEnterWriteLock(TimeSpan.FromTicks(5000)))
                if (++tryCount > MAX_LOCK_TRIES)
                    throw new OperationCanceledException("Cannot deregister context (timeout)");
            lockEntered = true;
            if (threadContexts.ContainsKey(c.ThreadID))
            {
                Context registeredContext = null;
                if (threadContexts[c.ThreadID].TryGetTarget(out registeredContext))
                {
                    if (registeredContext == c)
                    {
                        threadContexts.Remove(c.ThreadID);
                    }
                }
                else
                    threadContexts.Remove(c.ThreadID);
            }
        }
        finally
        {
            if (lockEntered)
                readerWriterLock.ExitWriteLock();
        }
    }

    /// <summary>
    /// Gets the Context for this thread, if one has been registered
    /// </summary>
    /// <returns>The Context for this thread, which would generally be defined in a using block using Context.CreateContext()</returns>
    internal static Context GetThreadContext()
    {
        string threadID = System.Threading.Thread.CurrentThread.ManagedThreadId.ToString();
        Console.WriteLine("GetThreadContext() " + threadID);
        bool lockEntered = false;
        int tryCount = 0;
        try
        {
            while (!readerWriterLock.TryEnterReadLock(TimeSpan.FromTicks(5000)))
                if (++tryCount > MAX_LOCK_TRIES)
                    throw new OperationCanceledException("Cannot get context (timeout)");
            lockEntered = true;
            Context registeredContext = null;
            if (threadContexts.ContainsKey(threadID))
                threadContexts[threadID].TryGetTarget(out registeredContext);
            return registeredContext;
        }
        finally
        {
            if (lockEntered)
                readerWriterLock.ExitReadLock();
        }
    }
}

// Imagine this is some third-party API
public static class ThirdPartyApi
{
    // Imagine this is any call to the third-party API that returns an object from that API which we'd like to decorate with an extension method
    public static ThirdPartyObject GetThirdPartyObject()
    {
        return new ThirdPartyObject();
    }
}

// Imagine this is some class from a third-party API, to which we would like to add extension methods
public class ThirdPartyObject
{
    internal ThirdPartyObject() { }
}

public static class ExtensionMethods
{
    public static void DoSomething(this ThirdPartyObject o)
    {
        // get the object I need to access resources to do my work
        Console.WriteLine("o.DoSomething()");
        Context c = Context.GetThreadContext();
        c.Process(o);
    }
}
You could test it pretty simply, with some code like this:
ThirdPartyObject o;
using (Context.CreateContext())
{
    o = ThirdPartyApi.GetThirdPartyObject(); // or a call to my own code to get it, encapsulating calls to the third-party API
    // Call the method we've tacked on to the third party API item
    o.DoSomething();
}
try
{
    // If the registered context has been disposed/deregistered, this will throw an error;
    // there is of course no way of knowing when it will happen, but in my simple testing
    // even this first attempt always throws an error, on my relatively unburdened system.
    // This means that with this model, one should not access the using-block Context
    // outside of the using block, but that's of course true in general of using statements
    o.DoSomething();
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}
System.Threading.Thread.Sleep(1000);
try
{
    // Should almost certainly see an error now
    o.DoSomething();
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}
Pass the t variable to the extension method.
public static class ExtensionMethods
{
    public static void Foo(this Bar b, Thingamabob t)
    {
        // Access t to enable this extension method to do its work, whatever that may be
    }
}

public class Bar { }

public class Schlemazel
{
    public void DoSomething()
    {
        using (Thingamabob t = new Thingamabob())
        {
            Bar b = new Bar();
            b.Foo(t);
        }
    }
}

Efficient signaling Tasks for TPL completions on frequently recurring events

I'm working on a simulation system that, among other things, allows for the execution of tasks in discrete simulated time steps. Execution all occurs in the context of the simulation thread, but, from the perspective of an 'operator' using the system, the tasks should behave asynchronously. Thankfully the TPL, with the handy 'async/await' keywords, makes this fairly straightforward. I have a primitive method on the Simulation like this:
public Task CycleExecutedEvent()
{
    lock (_cycleExecutedBroker)
    {
        if (!IsRunning) throw new TaskCanceledException("Simulation has been stopped");
        return _cycleExecutedBroker.RegisterForCompletion(CycleExecutedEventName);
    }
}
This is basically creating a new TaskCompletionSource and then returning a Task. The purpose of this Task is to execute its continuation when the next ExecuteCycle on the simulation occurs.
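RegisterForCompletion is not shown in the question, but presumably it is along these lines (a sketch; the pending-list field and the completion hook are assumptions):
// Hypothetical sketch of the broker side: each waiter gets a fresh
// TaskCompletionSource, and each ExecuteCycle completes and clears them all.
// Locking (as in CycleExecutedEvent above) is elided for brevity.
private readonly List<TaskCompletionSource<object>> _pending =
    new List<TaskCompletionSource<object>>();

public Task RegisterForCompletion(string eventName)
{
    var tcs = new TaskCompletionSource<object>();
    _pending.Add(tcs);
    return tcs.Task;
}

private void OnCycleExecuted() // assumed to be called by the simulation each cycle
{
    var toComplete = _pending.ToArray();
    _pending.Clear();
    foreach (var tcs in toComplete)
        tcs.TrySetResult(null);
}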
I then have some extension methods like this:
public static async Task WaitForDuration(this ISimulation simulation, double duration)
{
    double startTime = simulation.CurrentSimulatedTime;
    do
    {
        await simulation.CycleExecutedEvent();
    } while ((simulation.CurrentSimulatedTime - startTime) < duration);
}

public static async Task WaitForCondition(this ISimulation simulation, Func<bool> condition)
{
    do
    {
        await simulation.CycleExecutedEvent();
    } while (!condition());
}
These are very handy, then, for building sequences from an 'operator' perspective, taking actions based on conditions and waiting for periods of simulated time. The issue I'm running into is that CycleExecuted occurs very frequently (roughly every few milliseconds if I'm running at fully accelerated speed). Because these 'wait' helper methods register a new 'await' on each cycle, this causes a large turnover in TaskCompletionSource instances.
I've profiled my code and I've found that roughly 5.5% of my total CPU time is spent within these completions, of which only a negligible percentage is spent in the 'active' code. Effectively all of the time is spent registering new completions while waiting for the triggering conditions to be valid.
My question: how can I improve performance here while still retaining the convenience of the async/await pattern for writing 'operator behaviors'? I'm thinking I need something like a lighter-weight and/or reusable TaskCompletionSource, given that the triggering event occurs so frequently.
I've been doing a bit more research and it sounds like a good option would be to create a custom implementation of the Awaitable pattern, which could tie directly into the event, eliminating the need for a bunch of TaskCompletionSource and Task instances. The reason it could be useful here is that there are a lot of different continuations awaiting the CycleExecutedEvent and they need to await it frequently. So ideally I'm looking at a way to just queue up continuation callbacks, then call back everything in the queue whenever the event occurs. I'll keep digging, but I welcome any help if folks know a clean way to do this.
For anybody browsing this question in the future, here is the custom awaiter I put together:
public sealed class CycleExecutedAwaiter : INotifyCompletion
{
    private readonly List<Action> _continuations = new List<Action>();

    public bool IsCompleted
    {
        get { return false; }
    }

    public void GetResult()
    {
    }

    public void OnCompleted(Action continuation)
    {
        _continuations.Add(continuation);
    }

    public void RunContinuations()
    {
        var continuations = _continuations.ToArray();
        _continuations.Clear();
        foreach (var continuation in continuations)
            continuation();
    }

    public CycleExecutedAwaiter GetAwaiter()
    {
        return this;
    }
}
And in the Simulator:
private readonly CycleExecutedAwaiter _cycleExecutedAwaiter = new CycleExecutedAwaiter();

public CycleExecutedAwaiter CycleExecutedEvent()
{
    if (!IsRunning) throw new TaskCanceledException("Simulation has been stopped");
    return _cycleExecutedAwaiter;
}
It's a bit funny, as the awaiter never reports itself as completed, but simply continues to invoke continuations as they are registered; still, it works well for this application. This reduces the CPU overhead from 5.5% to 2.1%. It will likely still require some tweaking, but it's a nice improvement over the original.
The await keyword doesn't work just on Tasks, it works on anything that follows the awaitable pattern. For details, see Stephen Toub's article await anything;.
The short version is that the type has to have a method GetAwaiter() that returns a type that implements INotifyCompletion and also has IsCompleted property and GetResult() method (void-returning, if the await expression shouldn't have a value). For an example, see TaskAwaiter.
If you create your own awaitable, you could return the same object every time, avoiding the overhead of allocating many TaskCompletionSources.
Here is my version of ReusableAwaiter simulating TaskCompletionSource
public sealed class ReusableAwaiter<T> : INotifyCompletion
{
    private Action _continuation = null;
    private T _result = default(T);
    private Exception _exception = null;

    public bool IsCompleted
    {
        get;
        private set;
    }

    public T GetResult()
    {
        if (_exception != null)
            throw _exception;
        return _result;
    }

    public void OnCompleted(Action continuation)
    {
        if (_continuation != null)
            throw new InvalidOperationException("This ReusableAwaiter instance has already been listened");
        _continuation = continuation;
    }

    /// <summary>
    /// Attempts to transition the completion state.
    /// </summary>
    /// <param name="result"></param>
    /// <returns></returns>
    public bool TrySetResult(T result)
    {
        if (!this.IsCompleted)
        {
            this.IsCompleted = true;
            this._result = result;
            if (_continuation != null)
                _continuation();
            return true;
        }
        return false;
    }

    /// <summary>
    /// Attempts to transition the exception state.
    /// </summary>
    /// <param name="exception"></param>
    /// <returns></returns>
    public bool TrySetException(Exception exception)
    {
        if (!this.IsCompleted)
        {
            this.IsCompleted = true;
            this._exception = exception;
            if (_continuation != null)
                _continuation();
            return true;
        }
        return false;
    }

    /// <summary>
    /// Reset the awaiter to initial status
    /// </summary>
    /// <returns></returns>
    public ReusableAwaiter<T> Reset()
    {
        this._result = default(T);
        this._continuation = null;
        this._exception = null;
        this.IsCompleted = false;
        return this;
    }

    public ReusableAwaiter<T> GetAwaiter()
    {
        return this;
    }
}
And here is the test code.
class Program
{
    static readonly ReusableAwaiter<int> _awaiter = new ReusableAwaiter<int>();

    static void Main(string[] args)
    {
        Task.Run(() => Test());
        Console.ReadLine();
        _awaiter.TrySetResult(22);
        Console.ReadLine();
        _awaiter.TrySetException(new Exception("ERR"));
        Console.ReadLine();
    }

    static async void Test()
    {
        int a = await AsyncMethod();
        Console.WriteLine(a);
        try
        {
            await AsyncMethod();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    static ReusableAwaiter<int> AsyncMethod()
    {
        return _awaiter.Reset();
    }
}
Do you really need to receive the WaitForDuration-event on a different thread? If not, you could just register a callback (or an event) with _cycleExecutedBroker and receive notification synchronously. In the callback you can test any condition you like and only if that condition turns out to be true, notify a different thread (using a task or message or whatever mechanism). I understand the condition you test for rarely evaluates to true, so you avoid most cross-thread calls that way.
I guess the gist of my answer is: Try to reduce the amount of cross-thread messaging by moving computation to the "source" thread.
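A sketch of that idea (the CycleExecuted event hook on the broker is an assumed name, not the question's API): the condition runs synchronously on the simulation thread, and a Task completes only when it finally holds.
// Sketch: evaluate the condition on the "source" thread each cycle;
// cross-thread signalling happens once, when the condition becomes true.
public Task WaitForConditionAsync(Func<bool> condition)
{
    var tcs = new TaskCompletionSource<object>();
    Action onCycle = null;
    onCycle = () =>
    {
        if (condition())
        {
            _cycleExecutedBroker.CycleExecuted -= onCycle; // hypothetical event
            tcs.TrySetResult(null); // the only cross-thread hand-off
        }
    };
    _cycleExecutedBroker.CycleExecuted += onCycle;         // hypothetical event
    return tcs.Task;
}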

Wait for pooled threads to complete

I'm sorry for a redundant question. However, I've found many solutions to my problem but none of them are very well explained. I'm hoping that it will be made clear, here.
My C# application's main thread spawns 1..n background workers using the ThreadPool. I wish for the original thread to lock until all of the workers have completed. I have researched the ManualResetEvent in particular, but I'm not clear on its use.
In pseudocode:
foreach (var o in collection)
{
    queue new worker(o);
}
while (workers not completed) { continue; }
If necessary, I will know the number of workers that are about to be queued beforehand.
Try this. The function takes in a list of Action delegates. It will add a ThreadPool worker entry for each item in the list. It will wait for every action to complete before returning.
public static void SpawnAndWait(IEnumerable<Action> actions)
{
    var list = actions.ToList();
    var handles = new ManualResetEvent[list.Count]; // use list.Count so 'actions' is not enumerated twice
    for (var i = 0; i < list.Count; i++)
    {
        handles[i] = new ManualResetEvent(false);
        var currentAction = list[i];
        var currentHandle = handles[i];
        Action wrappedAction = () => { try { currentAction(); } finally { currentHandle.Set(); } };
        ThreadPool.QueueUserWorkItem(x => wrappedAction());
    }
    WaitHandle.WaitAll(handles);
}
Here's a different approach - encapsulation; so your code could be as simple as:
Forker p = new Forker();
foreach (var obj in collection)
{
    var tmp = obj;
    p.Fork(delegate { DoSomeWork(tmp); });
}
p.Join();
Where the Forker class is given below (I got bored on the train ;-p)... again, this avoids OS objects, but wraps things up quite neatly (IMO):
using System;
using System.Threading;

/// <summary>Event arguments representing the completion of a parallel action.</summary>
public class ParallelEventArgs : EventArgs
{
    private readonly object state;
    private readonly Exception exception;

    internal ParallelEventArgs(object state, Exception exception)
    {
        this.state = state;
        this.exception = exception;
    }

    /// <summary>The opaque state object that identifies the action (null otherwise).</summary>
    public object State { get { return state; } }

    /// <summary>The exception thrown by the parallel action, or null if it completed without exception.</summary>
    public Exception Exception { get { return exception; } }
}

/// <summary>Provides a caller-friendly wrapper around parallel actions.</summary>
public sealed class Forker
{
    int running;
    private readonly object joinLock = new object(), eventLock = new object();

    /// <summary>Raised when all operations have completed.</summary>
    public event EventHandler AllComplete
    {
        add { lock (eventLock) { allComplete += value; } }
        remove { lock (eventLock) { allComplete -= value; } }
    }
    private EventHandler allComplete;

    /// <summary>Raised when each operation completes.</summary>
    public event EventHandler<ParallelEventArgs> ItemComplete
    {
        add { lock (eventLock) { itemComplete += value; } }
        remove { lock (eventLock) { itemComplete -= value; } }
    }
    private EventHandler<ParallelEventArgs> itemComplete;

    private void OnItemComplete(object state, Exception exception)
    {
        EventHandler<ParallelEventArgs> itemHandler = itemComplete; // don't need to lock
        if (itemHandler != null) itemHandler(this, new ParallelEventArgs(state, exception));
        if (Interlocked.Decrement(ref running) == 0)
        {
            EventHandler allHandler = allComplete; // don't need to lock
            if (allHandler != null) allHandler(this, EventArgs.Empty);
            lock (joinLock)
            {
                Monitor.PulseAll(joinLock);
            }
        }
    }

    /// <summary>Adds a callback to invoke when each operation completes.</summary>
    /// <returns>Current instance (for fluent API).</returns>
    public Forker OnItemComplete(EventHandler<ParallelEventArgs> handler)
    {
        if (handler == null) throw new ArgumentNullException("handler");
        ItemComplete += handler;
        return this;
    }

    /// <summary>Adds a callback to invoke when all operations are complete.</summary>
    /// <returns>Current instance (for fluent API).</returns>
    public Forker OnAllComplete(EventHandler handler)
    {
        if (handler == null) throw new ArgumentNullException("handler");
        AllComplete += handler;
        return this;
    }

    /// <summary>Waits for all operations to complete.</summary>
    public void Join()
    {
        Join(-1);
    }

    /// <summary>Waits (with timeout) for all operations to complete.</summary>
    /// <returns>Whether all operations had completed before the timeout.</returns>
    public bool Join(int millisecondsTimeout)
    {
        lock (joinLock)
        {
            if (CountRunning() == 0) return true;
            Thread.SpinWait(1); // try our luck...
            return (CountRunning() == 0) ||
                Monitor.Wait(joinLock, millisecondsTimeout);
        }
    }

    /// <summary>Indicates the number of incomplete operations.</summary>
    /// <returns>The number of incomplete operations.</returns>
    public int CountRunning()
    {
        return Interlocked.CompareExchange(ref running, 0, 0);
    }

    /// <summary>Enqueues an operation.</summary>
    /// <param name="action">The operation to perform.</param>
    /// <returns>The current instance (for fluent API).</returns>
    public Forker Fork(ThreadStart action) { return Fork(action, null); }

    /// <summary>Enqueues an operation.</summary>
    /// <param name="action">The operation to perform.</param>
    /// <param name="state">An opaque object, allowing the caller to identify operations.</param>
    /// <returns>The current instance (for fluent API).</returns>
    public Forker Fork(ThreadStart action, object state)
    {
        if (action == null) throw new ArgumentNullException("action");
        Interlocked.Increment(ref running);
        ThreadPool.QueueUserWorkItem(delegate
        {
            Exception exception = null;
            try { action(); }
            catch (Exception ex) { exception = ex; }
            OnItemComplete(state, exception);
        });
        return this;
    }
}
First, how long do the workers execute? Pool threads should generally be used for short-lived tasks - if they are going to run for a while, consider manual threads.
Re the problem; do you actually need to block the main thread? Can you use a callback instead? If so, something like:
int running = 1; // start at 1 to prevent multiple callbacks if
                 // tasks finish faster than they are started
Action endOfThread = delegate
{
    if (Interlocked.Decrement(ref running) == 0)
    {
        // ****run callback method****
    }
};
foreach (var o in collection)
{
    var tmp = o; // avoid "capture" issue
    Interlocked.Increment(ref running);
    ThreadPool.QueueUserWorkItem(delegate
    {
        DoSomeWork(tmp); // [A] should handle exceptions internally
        endOfThread();
    });
}
endOfThread(); // opposite of "start at 1"
This is a fairly lightweight (no OS primitives) way of tracking the workers.
If you need to block, you can do the same using a Monitor (again, avoiding an OS object):
object syncLock = new object();
int running = 1;
Action endOfThread = delegate {
if (Interlocked.Decrement(ref running) == 0) {
lock (syncLock) {
Monitor.Pulse(syncLock);
}
}
};
lock (syncLock) {
foreach (var o in collection) {
var tmp = o; // avoid "capture" issue
ThreadPool.QueueUserWorkItem(delegate
{
DoSomeWork(tmp); // [A] should handle exceptions internally
endOfThread();
});
}
endOfThread();
Monitor.Wait(syncLock);
}
Console.WriteLine("all done");
I have been using the new Parallel task library in CTP here:
Parallel.ForEach(collection, o =>
{
    DoSomeWork(o);
});
Here is a solution using the CountdownEvent class.
var complete = new CountdownEvent(1);
foreach (var o in collection)
{
    var capture = o;
    complete.AddCount(); // one pending signal per queued work item
    ThreadPool.QueueUserWorkItem((state) =>
    {
        try
        {
            DoSomething(capture);
        }
        finally
        {
            complete.Signal();
        }
    }, null);
}
complete.Signal();
complete.Wait();
Of course, if you have access to the CountdownEvent class then you have the whole TPL to work with. The Parallel class takes care of the waiting for you.
Parallel.ForEach(collection, o =>
{
    DoSomething(o);
});
I think you were on the right track with the ManualResetEvent. This link has a code sample that closely matches what you're trying to do. The key is to use WaitHandle.WaitAll and pass an array of wait events. Each thread needs to set one of these wait events.
// Simultaneously calculate the terms.
ThreadPool.QueueUserWorkItem(
    new WaitCallback(CalculateBase));
ThreadPool.QueueUserWorkItem(
    new WaitCallback(CalculateFirstTerm));
ThreadPool.QueueUserWorkItem(
    new WaitCallback(CalculateSecondTerm));
ThreadPool.QueueUserWorkItem(
    new WaitCallback(CalculateThirdTerm));

// Wait for all of the terms to be calculated.
WaitHandle.WaitAll(autoEvents);

// Reset the wait handle for the next calculation.
manualEvent.Reset();
Edit:
Make sure that in your worker thread code path you set the event (i.e. autoEvents[1].Set();). Once they are all signaled, WaitAll will return.
void CalculateSecondTerm(object stateInfo)
{
    double preCalc = randomGenerator.NextDouble();
    manualEvent.WaitOne();
    secondTerm = preCalc * baseNumber *
        randomGenerator.NextDouble();
    autoEvents[1].Set();
}
I've found a good solution here:
http://msdn.microsoft.com/en-us/magazine/cc163914.aspx
It may come in handy for others with the same issue.
Using .NET 4.0 Barrier class:
Barrier sync = new Barrier(1);
foreach (var o in collection)
{
    WaitCallback worker = (state) =>
    {
        // do work
        sync.SignalAndWait();
    };
    sync.AddParticipant();
    ThreadPool.QueueUserWorkItem(worker, o);
}
sync.SignalAndWait();
Try using CountdownEvent
// code before the threads start
CountdownEvent countdown = new CountdownEvent(collection.Length);
foreach (var o in collection)
{
    ThreadPool.QueueUserWorkItem(delegate
    {
        // do something with the worker
        Console.WriteLine("Thread Done!");
        countdown.Signal();
    });
}
countdown.Wait();
Console.WriteLine("Job Done!");
// resume the code here
The countdown would wait until all threads have finished execution.
There is no built-in method to wait for the completion of all threads in the thread pool.
Using the count of active threads, we can achieve it...
{
    bool working = true;
    ThreadPool.GetMaxThreads(out int maxWorkerThreads, out int maxCompletionPortThreads);
    while (working)
    {
        ThreadPool.GetAvailableThreads(out int workerThreads, out int completionPortThreads);
        //Console.WriteLine($"{workerThreads} , {maxWorkerThreads}");
        if (workerThreads == maxWorkerThreads)
        { working = false; }
    }
    // when all threads are completed, 'working' will be false
}

void xyz(object o)
{
    Console.WriteLine("");
}
