Cancellable events through the Publish-Subscribe pattern - C#

Based on the lack of search results no matter how I word this, I'm pretty sure I'm thinking about this the wrong way.
I have a program that will create a large number of objects, and each instance needs to be wired up to listen to events from all the other instances as soon as it is created. Managing this through plain events doesn't seem practical to me, which is why I'm considering the pub/sub pattern to make things easier to handle. The pub/sub would be purely in-process, so the events would not cross any boundaries, and they would not be persisted anywhere outside of memory, so there is no replay of events.
The problem comes with events that would typically use CancelEventArgs. Is there a way to publish an event that subscribers can mark as cancelled?
Here are my current thoughts on possible solutions:
Publish a ShouldCancelEventX event and wait for some amount of time for an EventXCancelled event to be published back. If none are published in the time span, publish EventX event. The biggest issue I see with this is the arbitrary time span to wait for the events.
Have the pub/sub implementation have a little more logic so that it can notify the publisher after all subscribers have received the event. This would allow the publisher of ShouldCancelEventX to know when all the subscribers have received the message. This just feels wrong as every implementation of pub/sub I've seen provides void Publish methods. So that, again, leads me to believe I'm thinking about this the wrong way.
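To make the second option concrete, here's a minimal sketch of what a purely synchronous, in-process bus could look like; all the types (InProcessBus, ICancellableMessage, ClosingDocument) are invented for illustration. Because Publish is just a loop over in-memory handlers, it only returns after every subscriber has run, so the publisher can check a Cancel flag on the message itself, much like CancelEventArgs:

using System;
using System.Collections.Generic;

public interface ICancellableMessage
{
    bool Cancel { get; set; }
}

public class ClosingDocument : ICancellableMessage
{
    public string DocumentId { get; set; }
    public bool Cancel { get; set; }
}

public class InProcessBus
{
    private readonly Dictionary<Type, List<Delegate>> _handlers = new Dictionary<Type, List<Delegate>>();

    public void Subscribe<T>(Action<T> handler)
    {
        List<Delegate> list;
        if (!_handlers.TryGetValue(typeof(T), out list))
        {
            list = new List<Delegate>();
            _handlers[typeof(T)] = list;
        }
        list.Add(handler);
    }

    // Synchronous and in-process: Publish only returns once every subscriber has
    // seen the message, so the caller can inspect it afterwards.
    public void Publish<T>(T message)
    {
        List<Delegate> list;
        if (_handlers.TryGetValue(typeof(T), out list))
        {
            foreach (Action<T> handler in list)
            {
                handler(message);
            }
        }
    }
}

class Demo
{
    static void Main()
    {
        var bus = new InProcessBus();
        bus.Subscribe<ClosingDocument>(m => { if (m.DocumentId == "unsaved") m.Cancel = true; });

        var message = new ClosingDocument { DocumentId = "unsaved" };
        bus.Publish(message);

        if (message.Cancel)
            Console.WriteLine("A subscriber cancelled the close.");
        else
            Console.WriteLine("No subscriber objected; safe to publish DocumentClosed.");
    }
}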

Related

Is Azure Service Bus message pump really event-driven?

So we've been looking into the Azure Service Bus recently and we're a bit confused as to whether we should use an infinite loop to poll the queue/subscription or whether we should use the OnMessage callback/message pump functionality. What is going to execute fewer operations and thus cost less?
Ideally we want an event-driven system so we aren't wasting operations and it's just generally a much nicer approach.
My question is, is using OnMessage which is defined as "Processes a message in an event-driven message pump" really event-driven?
If you take a look at this page (QueueClient.OnMessage): https://msdn.microsoft.com/library/azure/microsoft.servicebus.messaging.queueclient.onmessage.aspx you'll notice the remark at the bottom which states that it is basically a wrapper around an infinite loop which is calling the Receive() method. That doesn't sound very event-driven to me.
Now if you look at this page (SubscriptionClient.OnMessage):
https://msdn.microsoft.com/en-us/library/azure/dn130336.aspx, that remark is not present. So is it the same for topics/subscriptions and queues or is it actually event-driven for subscriptions but not for queues?
Why are they even saying it's event-driven when it's clearly not? The fact that the remark on the QueueClient.OnMessage page has the words "infinite loop" and "every receive operation is a billable event" is somewhat scary.
Also, I'm not really concerned about how much/little it will cost either way, I'm more interested in making it as efficient as possible.
I've not used OnMessage, but the question interested me so I did some digging.
My understanding is that the OnMessage approach just encapsulates away some of the usual concerns to do with processing messages from a queue to give you a cleaner/easier way to do it with a lot less to be concerned about. So instead of writing all the scaffolding around polling, you can focus more on a "push-like/event driven" implementation (message pump model).
And so you are correct in that it is basically still just a loop calling Receive() - so with the default timeouts, the number of polls would be the same and therefore the same cost.
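For reference, here is a rough sketch of what the message-pump style looks like with the classic WindowsAzure.ServiceBus package (Microsoft.ServiceBus.Messaging); the connection string and queue name are placeholders:

using System;
using Microsoft.ServiceBus.Messaging;

class Pump
{
    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString("<connection-string>", "myqueue");

        var options = new OnMessageOptions
        {
            AutoComplete = true,    // complete the message when the callback returns without throwing
            MaxConcurrentCalls = 1  // pump one message at a time
        };

        // Under the covers this still loops on Receive(); it just hides the polling scaffolding.
        client.OnMessage(message => Console.WriteLine("Got message: " + message.MessageId), options);

        Console.ReadLine(); // keep the process alive while the pump runs
    }
}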
I came across these references:
http://fabriccontroller.net/introducing-the-event-driven-message-programming-model-for-the-windows-azure-service-bus/
http://www.flyersoft.net/?p=971 - check the comments too, as this covers the same question as yours.
So is it the same for topics/subscriptions and queues or is it actually event-driven for subscriptions but not for queues?
I am not 100% sure, but based on my research my assumption is that it is the same, and the documentation is simply not clear.

Implementing a Publish-Subscribe Channel using NServiceBus

I am trying to implement a Publish-Subscribe Channel using NServiceBus. According to the Enterprise Integration Patterns book, a Publish-Subscribe Channel is described as:
A Publish-Subscribe Channel works like this: It has one input channel that splits into multiple output channels, one for each subscriber. When an event is published into the channel, the Publish-Subscribe Channel delivers a copy of the message to each of the output channels. Each output end of the channel has only one subscriber, which is allowed to consume a message only once. In this way, each subscriber gets the message only once, and consumed copies disappear from their channels.
Hohpe, Gregor; Woolf, Bobby (2012-03-09). Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions (Addison-Wesley Signature Series (Fowler)) (Kindle Locations 2880-2883). Pearson Education. Kindle Edition.
There is a sample containing a publisher and subscriber at: http://docs.particular.net/samples/step-by-step/. I have built the sample solution for version 5. I then ran multiple subscribers in different command line windows to see how the system behaves.
Only one subscriber receives the event that is published, even though there are multiple subscribers. Publishing multiple events causes at most one subscriber to handle the event.
I cannot find any information about how to configure NServiceBus as a Publish-Subscribe Channel as defined in the quoted text. Does anyone know how to do this? Is this not supported?
[Update 2 Feb 2016]
I had not renamed my endpoints after copying the subscribers; renaming them gave me the desired behaviour.
If you are running multiple instances of the same subscriber, then what you are describing is the intended functionality.
Scenarios
1 Publisher, 1 Logical Subscriber
Some processor publishes an event and an email handler is subscribed to that event. When the event is consumed by the email handler, the email handler will send an email. In this scenario, there is only one logical subscriber, the email handler. Therefore, only one copy of the event is sent.
1 Publisher, 2 Logical Subscribers
In the next scenario, there are two logical subscribers: the invoice handler and the email handler. When the processor publishes an event, two copies of the event are sent: one to the invoice handler and one to the email handler.
1 Publisher, 2 Instances of 1 Logical Subscriber
In this scenario, there is only one logical subscriber even though there are two services subscribed to the event. In this case, only one copy of the event is sent and only one of the email handlers will process it. If both email handlers processed the event, you would have N operations done for N instances of a subscriber; in other words, two emails would be sent instead of one. Most likely, this scenario has two email handlers because a single handler couldn't keep up with the load from the processor, or because redundancy was required.
Summary
If you simply spin up multiple instances of the same subscriber, you will still only have one subscriber handle that event. This is by design. Otherwise, the work would be duplicated for every additional process.
If you want to see two logical subscribers, create a new project within that solution, with a different name, and subscribe to the same event (either in code or with config files). Then launch the publisher and one instance of each subscriber. Once the publisher publishes an event, you'll see both subscribers process it, as in the sketch below.
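For example, the second subscriber could be little more than a new handler class in its own project. This sketch assumes the version 5 sample's handler style; OrderPlaced and OrderId are placeholders for whatever event the sample actually publishes:

using System;
using NServiceBus;

// Second logical subscriber: lives in its own project, running as its own endpoint.
public class InvoiceHandler : IHandleMessages<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        Console.WriteLine("Invoice handler received order " + message.OrderId);
    }
}

The important part is that this project runs under a different endpoint name; two endpoints sharing a name are treated as instances of one logical subscriber.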
The subscribers need to start up first so they can send the message saying they're interested in subscribing to an event. Then the publisher needs to boot up and have time to process those subscription messages. Only when all subscriptions are stored can you safely publish. If you publish before all subscriptions have actually been stored, NServiceBus will only send the message to the subscribers it already knows about; a second later all subscribers might be known, but by then you've already published your message.
When using durable persistence, like SQL Server or something similar, the subscriptions will be stored and kept, so after restarting the service all subscribers are immediately known. With in-memory storage, the subscriptions are lost every time your publisher is restarted, so you have to wait again until all subscriptions have been processed.
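As a rough illustration (exact type names depend on your NServiceBus version and which persistence package you install), the difference is just which persistence you configure on the publisher:

using NServiceBus;

class EndpointConfig
{
    static void Configure()
    {
        var busConfiguration = new BusConfiguration();

        // Durable persistence: subscriptions survive a publisher restart.
        busConfiguration.UsePersistence<NHibernatePersistence>();

        // In-memory persistence: subscriptions are lost every time the publisher restarts.
        // busConfiguration.UsePersistence<InMemoryPersistence>();
    }
}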
It can also be a problem that not every subscriber is actually sending out its subscription message, because you might have gotten the configuration wrong.
I've written a tutorial myself which might help out as well.

Should IObservable be preferred over events when exposing notifications in a library targeting .NET 4+

I have a .NET library which, as part of an Object Model will emit notifications of certain occurrences.
It would seem to me that the main pros of events are approachability for beginners (and simplicity in certain consumption contexts), with the main negative being that they are not composable, and hence consumers are immediately forced into an Observable.FromEvent* call if they want to do anything interesting without writing a code thicket.
The nature of the problem being solved is such that the event traffic won't be particularly frequent or voluminous (it's definitely not screaming RX), but there is definitely no requirement to support .NET versions prior to 4.0 [and hence I can use the built-in System.IObservable<T> interface without forcing any significant dependencies on consumers]. I'm interested in some general guidelines and some specific concrete reasons to prefer IObservables over events from an API design perspective though - regardless of where my specific case might sit on the event - IObservable spectrum.
So, the question:
Is there anything concrete I'm making dramatically more difficult or problematic for API consumers if I go with the simplest thing and expose an event instead of an IObservable?
Or, restated: Aside from the consumer having to do an Observable.FromEvent* to be able to compose events, is there really not a single reason to prefer an IObservable over an event when exposing a notification in an API?
Citations of projects that are using IObservable for not-screaming-RX stuff or coding guidelines would be ideal but are not critical.
NB as touched on in the comments with @Adam Houldsworth, I'm interested in concrete things wrt the API surface of a .NET 4+ library, not a survey of opinions as to which represents a better 'default architecture' for our times :)
NB this question has been touched on in IObserver and IObservable in C# for Observer vs Delegates, Events and IObservable vs Plain Events or Why Should I use IObservable?. The aspect of the question I'm asking has not been addressed in any of the responses due to SRP violations. Another slightly overlapping question is Advantages of .NET Rx over classic events?. Use of IObservable instead of events falls into that same category.
In the comments of this answer, OP refined his question as:
[Is it] indeed definitely the case that each and every event can always be Adapted to be an IObservable?
To that question, the answer is basically yes - but with some caveats. Also note that the reverse is not true - see the section on reverse transformation and the conclusion for reasons why observables might be preferred to classic events because of the additional meaning they can convey.
For strict translation, all we need to do is map the event - which should include the sender as well as arguments - on to an OnNext invocation. The Observable.FromEventPattern helper method does a good job of this, with the overloads returning IObservable<EventPattern<T>> providing both the sender object and the EventArgs.
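For illustration, here is a minimal sketch of that adaptation; the Downloader class and its Progress event are invented for the example:

using System;
using System.Reactive;
using System.Reactive.Linq;

public class ProgressEventArgs : EventArgs
{
    public int Percent { get; set; }
}

public class Downloader
{
    public event EventHandler<ProgressEventArgs> Progress;

    public void Report(int percent)
    {
        var handler = Progress;
        if (handler != null) handler(this, new ProgressEventArgs { Percent = percent });
    }
}

class Program
{
    static void Main()
    {
        var downloader = new Downloader();

        // Each raised event becomes an OnNext carrying both the sender and the args.
        IObservable<EventPattern<ProgressEventArgs>> progress =
            Observable.FromEventPattern<ProgressEventArgs>(
                h => downloader.Progress += h,
                h => downloader.Progress -= h);

        using (progress.Subscribe(e => Console.WriteLine("Progress: " + e.EventArgs.Percent + "%")))
        {
            downloader.Report(10);
            downloader.Report(50);
        }
    }
}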
Caveats
Recall the Rx grammar. This can be stated in EBNF form as:
Observable Stream = { OnNext }, [ OnError | OnCompleted ] - or 0 or more OnNext events optionally followed by either an OnCompleted or an OnError.
Implicit in this is the idea that from the view of an individual subscriber events do not overlap. To be clear, this means that a subscriber will not be called concurrently. Additionally, it is quite possible that other subscribers can be called not only concurrently but also at different times. Often it is subscribers themselves that control pace of event flow (create back-pressure) by handling events slower than the pace at which they arrive. In this situation typical Rx operators queue against individual subscribers rather than holding up the whole subscriber pool. In contrast, classic .NET event sources will more typically broadcast to subscribers in lock-step, waiting for an event to be fully processed by all subscribers before proceeding. This is the long-standing assumed behaviour for classic events, but it is not actually anywhere decreed.
The C# 5.0 Language Specification (this is a Word document; see section 1.6.7.4) and the .NET Framework Design Guidelines: Event Design have surprisingly little to say about event behaviour. The spec observes that:
The notion of raising an event is precisely equivalent to invoking the delegate represented by the event—thus, there are no special language constructs for raising events.
The C# Programming Guide : Events section says that:
When an event has multiple subscribers, the event handlers are invoked synchronously when an event is raised. To invoke events asynchronously, see Calling Synchronous Methods Asynchronously.
So classic events are traditionally issued serially by invoking a delegate chain on a single thread, but there is no such restriction in the guidelines - and occasionally we see parallel invocation of delegates - but even here two instances of an event will usually be raised serially even if each one is broadcast in parallel.
There is nothing anywhere I can find in the official specifications that explicitly states that event instances themselves must be raised or received serially. To put it another way, there is nothing that says multiple instances of an event can't be raised concurrently.
This is in contrast to observables where it is explicitly stated in the Rx Design Guidelines that we should:
4.2. Assume observer instances are called in a serialized fashion
Note how this statement only addresses the viewpoint of an individual subscriber instance and says nothing about events being sent concurrently across instances (which in fact is very common in Rx).
So, two takeaways:
Whilst OnNext captures the idea of an event, it is possible that a classic .NET event may violate the Rx grammar by invoking events concurrently.
It is common for pure Rx Observables to have different semantics around the delivery of events under load because back-pressure is typically handled per subscriber rather than per-event.
As long as you don't raise events concurrently in your API, and you don't care about the back-pressure semantics, then translation to an observable interface via a mechanism like Rx's Observable.FromEvent is going to be just fine.
Reverse Transformation
On the reverse transformation, note that OnError and OnCompleted have no analogue in classic .NET events, so it is not possible to make the reverse mapping without some additional machinery and agreed usage.
For example, one could translate OnError and OnCompleted to additional events - but this is definitely stepping outside of classic event territory. Also, some very awkward synchronization mechanism would be required across the different handlers; in Rx, it is quite possible for one subscriber to receive an OnCompleted whilst another is still receiving OnNext events - it's much harder to pull this off in a classic .NET events transformation.
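As a hedged sketch of what that extra machinery might look like (the adapter type and its event names are invented, and it assumes Rx's Subscribe(onNext, onError, onCompleted) extension):

using System;

// A hand-rolled adapter: OnNext/OnError/OnCompleted are surfaced as three separate
// events, which is precisely the "additional machinery and agreed usage" mentioned above.
public class ObservableEventAdapter<T> : IDisposable
{
    private readonly IDisposable _subscription;

    public event Action<T> Next;
    public event Action<Exception> Failed;
    public event Action Completed;

    public ObservableEventAdapter(IObservable<T> source)
    {
        _subscription = source.Subscribe(
            value => { var h = Next; if (h != null) h(value); },
            error => { var h = Failed; if (h != null) h(error); },
            () => { var h = Completed; if (h != null) h(); });
    }

    public void Dispose()
    {
        _subscription.Dispose();
    }
}

Note this says nothing about the cross-handler synchronization problem described above; it only shows how quickly the classic event shape runs out of vocabulary for completion and failure.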
Errors
Considering behaviour in error cases: It's important to distinguish an error in the event source from one in the handlers/subscribers. OnError is there to deal with the former, in the latter case both classic events and Rx simply (and correctly) blow up. This aspect of errors does translate well in either direction.
Conclusion
.NET classic events and Observables are not isomorphic. You can translate from events to observables reasonably easily as long as you stick to normal usage patterns. It might be the case that your API requires the additional semantics of observables not so easily modelled with .NET events and therefore it makes more sense to go Observable only - but this is a specific consideration rather than a general one, and more of a design issue than a technical one.
As general guidance, I suggest a preference for classic events if possible as these are broadly understood and well supported and can be transformed - but don't hesitate to use observables if you need the extra semantics of source error or completion represented in the elegant form of OnError and OnCompleted events.
I've been reading a lot about Reactive extensions before finally dipping my toe, and after some rough starts I found them really interesting and useful.
The Observable extensions have this lovely optional parameter where you can pass your own scheduler, effectively letting you manipulate time. In my case it helped a lot, since I was doing some time-related work (check this webservice every ten minutes, send one email per minute, etc.) and it made testing a breeze. I would plug the TestScheduler into the component under test and simulate a day of events in a snap.
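For example, something along these lines (a small sketch assuming Rx's TestScheduler from Microsoft.Reactive.Testing; the ten-minute poll is just a stand-in for real work):

using System;
using System.Reactive.Linq;
using Microsoft.Reactive.Testing;

class SchedulerDemo
{
    static void Main()
    {
        var scheduler = new TestScheduler();
        var polls = 0;

        // Production code would receive a real scheduler; the test injects TestScheduler.
        Observable.Interval(TimeSpan.FromMinutes(10), scheduler)
                  .Subscribe(_ => polls++);

        // Simulate a whole day passing instantly.
        scheduler.AdvanceBy(TimeSpan.FromDays(1).Ticks);

        Console.WriteLine(polls); // 144 ticks: 24 hours at 10-minute intervals
    }
}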
So if you have some workflows in your library where time plays a role in orchestration, I would really recommend using Observables as your output.
However, if you are just raising events in direct response to user input, I don't think it's worth the added complexity for your users. As you noted, they can wrap the event into their own Observable if needed.
You could have your cake and eat it too, although it would mean more work: offer a facade that turns your event library into an Observable-fuelled one, by creating said Observable from your events. Or do it the other way: have an optional facade that subscribes to your observable and raises a classic event when triggered.
In my opinion there is a non-trivial technical step to take when dealing with Reactive Extensions, and in this case it may come down to what your API consumers would be most comfortable using.
IObservable is to events what IEnumerable is to collections, so the only question here is whether you think IObservable will become as standard as IEnumerable is now. If you do, then yes, it's preferable; if you think it's just a passing thing that will fade away, use events instead.
IObservable is better than an event in most cases, but I personally think I'll forget to use it, as it's not very commonly used, and by the time the need arrives I'll have forgotten about it.
I know my answer is not of great help, but only time will tell whether Rx becomes the standard; I think it has a good chance.
[EDIT] To make it more concrete:
To the end user the only difference is that one is an interface and the other is not, making the interface more testable and extensible, because different sources can implement the same interface.
As Adam Houldsworth said one can be changed to the other easily to make no other difference.
To address your headline question:
No, they should not be preferred simply on the basis that they exist in .NET 4 and are available to use. Preference depends on intended use, so a blanket preference is unwarranted.
That said, I would tend towards them as an alternative model to traditional C# events.
As I have commented on throughout this question, there are many ancillary benefits to approaching the API with IObservable, not least of which is external support and the range of choice available to the end consumer.
To address your inner question:
I believe there would be little difficulty either way: whether you expose events or IObservable in your API, there is a route from one to the other in both cases. This would put a layer over your API, but in actuality this is a layer you could also release yourself.
It is my opinion that choosing one over the other isn't going to be part of the deciding reason why someone chooses to use or not use your API.
To address your re-stated question:
The reason might be found in why there is an Observable.FromEvent in the first place :-) IObservable is gaining support in many places in .NET for reactive programming and forms part of many popular libraries (Rx, Ix, ReactiveUI), and also interoperates well with LINQ and IEnumerable and further into the likes of TPL and TPL DataFlow.
A non-Rx example of the observable pattern, so not specifically IObservable, would be ObservableCollection for XAML apps.

Appropriate usage of C# event handlers

I'm currently building a C# application which will automatically authenticate a user against certain network resources when they connect to specific wireless networks.
At the moment, I'm using the Managed Wifi API to discover when a user connects / disconnects from a wireless network. I have an event handler, so that when any of these activities occurs, one of my methods is called to inspect the current state of the wireless connection.
To manage the state of the application, I have another class which is called the "conductor", which performs the operations required to change the state of the application. For instance, when the wireless card connects to the correct network, the conductor needs to change the system state from "Monitoring" to "Authenticating". If authentication succeeds, the conductor needs to change the state to "Connected". Disconnection results in the "Monitoring" state again, and an authentication error results in an "Error" state. These state changes (if the user requests) can result in TrayIcon notifications, so the user knows that they are being authenticated.
My current idea involves having the method used to inspect the current state of the wireless call the "authenticate" or "disconnect" methods within the state manager. However, I'm not sure if this is an appropriate use of the event handler -- should it instead be setting a flag or sending a message via some form of IPC to a separate thread which will begin the authentication / disconnection process?
In addition to the event handler being able to request connection / disconnection, a user can also perform it via the tray icon. As a result, I need to ensure these background operations are not blocking the tray's interactions with the user.
Only one component should be able to request a change of the system state at any time, so I would need to use a mutex to prevent concurrent state changes. However, how I should synchronise the rest of these components is a slight mystery to me.
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
EDIT: Most importantly, I want to verify that an event will be executed on a separate thread, so it cannot block the main UI. In addition, I want to verify that if I have an event handler subscribed to an event, it will handle events serially, not in parallel (so if the user connects and disconnects before the first connection event is processed, two state changes will not be occurring simultaneously).
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
That explains a few things. :)
I would read up on threads, event handling, and creation of system tray icons/interfaces.
It is important to note the following:
Events are processed on the same thread they are raised from. If you want the processing of an event not to lock up the GUI, then you will need to have the handler move the work to a different thread.
When an event is fired it passes the appropriate arguments to all the methods in its invocation list. This is pretty much the same as calling one method which in turn calls all the others (see the EventFired example). The purpose of events is not to call methods, as we can do that already; it is to call methods which may not be known when the code is compiled (the click event on a button control would not be known when the library the control is in is compiled, for example). In short, if you can call the method instead of using an event, then do so.
// Raising an event is effectively just invoking every subscribed method in turn:
void EventFired(int arg1, object arg2)
{
    SubscribedMethod1(arg1, arg2);
    SubscribedMethod2(arg1, arg2);
    SubscribedMethod3(arg1, arg2);
    SubscribedMethod4(arg1, arg2);
    SubscribedMethod5(arg1, arg2);
    SubscribedMethod6(arg1, arg2);
    SubscribedMethod7(arg1, arg2);
}
If you want to prevent a user interface from locking up, do the work on another thread. Remember though, user interface elements (forms, buttons, grids, labels, etc.) can only be accessed from their host thread. Use the Control.Invoke method to call methods on their thread.
Removing an option from the interface is not a good way to prevent race conditions (the user starting a connect/disconnect while one is already running), as the user interface will be on a different thread and could be out of sync (it takes time for separate threads to sync up). While there are many ways to resolve this problem, the easiest for someone new to threading is to use a lock around the value; that way .NET will make sure only one thread can change the setting at a time. You will still need to update the user interface so the user knows the update is occurring (a rough sketch follows below).
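To tie the last two points together, here is a rough WinForms-flavoured sketch; the names (TrayForm, _stateLock, Authenticate) are illustrative only:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public class TrayForm : Form
{
    private readonly object _stateLock = new object();
    private readonly Label statusLabel = new Label();
    private bool _busy;

    public TrayForm()
    {
        Controls.Add(statusLabel);
    }

    private void OnConnectRequested(object sender, EventArgs e)
    {
        lock (_stateLock)
        {
            if (_busy) return;   // only one state change at a time
            _busy = true;
        }

        // Do the long-running authentication off the UI thread.
        Task.Factory.StartNew(() =>
        {
            Authenticate();

            // UI elements may only be touched from their host thread.
            this.Invoke(new Action(() => statusLabel.Text = "Connected"));

            lock (_stateLock) { _busy = false; }
        });
    }

    private void Authenticate()
    {
        Thread.Sleep(2000); // placeholder for the real authentication work
    }
}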
Your general design sounds fine. You could use 2-3 threads: one for the user interface (tray icon), one for checking for new network connections, and one (which could be merged with the connection check) that checks the internet connection.
Hope this helps, let us know if you need more (or accept an answer).
As an optional alternative...
If I were you, and since you're starting anew anyway, I would seriously consider the Rx Reactive Extensions.
It gives a completely fresh look at events and event based programming and helps a lot exactly with the things you're dealing with (including synchronizing, dealing with threads, combining events, stopping, starting etc. etc.).
It might be a bit of a 'steep curve' to learn at start, but again, it might be worth it.
hope this helps,
To me it seems that you're going to overengineer the project.
You basically need to implement an event on the conductor and subscribe to it in the main application. That's it.
If only one component may make a change at any time, and you can have more than one component, then using some sync mechanism, like the mutex you mentioned, is a perfectly valid choice.
Hope this helps.
If you want to have at most one state change pending at any time, it is probably best to have the event handlers of the external events you are listening to hold a lock during their execution. This ensures an easy programming model, because you are guaranteed that the state of your app does not change underneath you. A separate thread is not needed in this particular case.
You need to make a distinction between the current state of the application and the target state. The user dictates the target state ("connected", "disconnected"). The actual state might be different. Example: the user wants to be disconnected but the actual state is authenticating. Once the authentication step is completed the state machine must examine the target state:
targetState == connected => set current state to connected
targetState == disconnected => begin to disconnect and set state to disconnecting
Separating actual and target state allows the user to change his mind any time and the state machine to steer towards the desired state.
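A minimal sketch of that idea, with illustrative enum values and method names:

public enum ConnectionState { Disconnected, Authenticating, Connected, Disconnecting }

public class Conductor
{
    public ConnectionState Current { get; private set; }
    public ConnectionState Target { get; set; }

    // Called when the authentication step finishes.
    public void OnAuthenticationCompleted()
    {
        if (Target == ConnectionState.Connected)
        {
            Current = ConnectionState.Connected;
        }
        else
        {
            // The user changed their mind mid-flight: steer back towards disconnected.
            Current = ConnectionState.Disconnecting;
            BeginDisconnect();
        }
    }

    private void BeginDisconnect() { /* kick off the disconnect work */ }
}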
It's hard to give a precise answer without seeing the whole (proposed) structure of your app. But in general, yes, it's OK to use an event handler for that sort of thing - though I'd probably move the actual implementation out to a separate method, so that you can more easily trigger it from other locations.
The comment about disabling the "Connect" button sounds right on to me, though it's quite conceivable you might need other forms of synchronization as well. If your app doesn't need to be multi-threaded, though, I'd steer away from introducing multiple threads just for the sake of it. If you do need them, look into the new Task APIs that are included as part of the Task Parallel Library. They abstract a lot of that stuff fairly well.
And the comment about not over-thinking the issue is also well-taken. If I were in your shoes, just beginning with a new language, I'd avoid trying to get the architecture just right at the start. Dive in, and develop it with the cognitive toolset you've already got. As you explore more, you'll figure out, "Oh, crap, this is a much better way to do that." And then go and do it that way. Refactoring is your friend.

Is it advisable to perform complicated calculations in an event handler?

Is it considered a bad practice when someone performs complicated calculations in an event handler?
Does a calculation-cluttered .OnResize event handler have performance penalties?
If so, how do you work around them? (Especially for the .Print event, since that's what draws on e.Graphics.)
It is not considered bad, not as such.
If your code feels cluttered, clean it up - refactor it.
Unless the event handler should be fast (say a Paint event handler), there is no problem in having it do lots of work.
If you have very intensive calculations to do and still need to have a responsive UI, you need to run the calculations on a separate thread.
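For example, a hedged sketch using the TPL to push the work off the UI thread and marshal the result back (the form, label, and calculation are placeholders):

using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label resultLabel = new Label();

    public MainForm()
    {
        Controls.Add(resultLabel);
    }

    // An event handler that stays responsive by doing the heavy work on a worker
    // thread and marshalling the result back to the UI thread.
    private void OnCalculateClicked(object sender, EventArgs e)
    {
        Task.Factory.StartNew(() => ExpensiveCalculation())
            .ContinueWith(t => resultLabel.Text = t.Result.ToString(),
                          TaskScheduler.FromCurrentSynchronizationContext());
    }

    private double ExpensiveCalculation()
    {
        return 42.0; // placeholder for the real number crunching
    }
}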
I think you mean Paint event, not Print. It's not recommended when you need smooth GUI user interaction: the risk is you can steal CPU time from GUI thread and the app will appear sluggish and unresponsive. If these calculations are really a problem for user interaction, the way out is doing calculations on a separate thread, calculating results in advance and storing them in a separate buffer.
Generally, do not keep a lot of calculations inside an event handler. When an event is raised, the callbacks are invoked one by one, and if one of the callbacks throws an exception then the remaining callbacks do not receive the event. Post the calculation to a separate thread so that other callbacks are not affected.
Events are usually used in an event-driven system, usually one driven by the user or where a user is involved. As such, it's a good idea to keep processing short and sweet.
Sometimes, events are called in order to do some processing - apply formatting, prompt the user, provide a chance for the calling code to 'customise' what's going on. This is similar to the strategy pattern (http://en.wikipedia.org/wiki/Strategy_pattern).
To take it a step further, the strategy pattern is a good way of having a contract with the user about how they can have their say about how a process is supposed to happen. For example:
Let's say you're writing a grid control, where the user can dictate formatting for each cell. With events:
User must subscribe to the FormatCell event
With something closer to a strategy pattern:
User must create a class implementing ICellFormatter, with a FormatCell method, and pass that to the grid control - via a property, constructor parameter, etc.
You can see that the event route is 'easier' for the user. But personally I prefer the second method every time: you're creating a clear-cut class whose job it is to deal with cell formatting. It also seems more obvious to me in that case that the FormatCell method could perform some degree of processing, rather than using an event to perform some task, which seems a bit of a lazy design. (Is Paint really an "event"? Not really; it's something requested of your code.)
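To make the comparison concrete, here is a rough sketch of both shapes; ICellFormatter, FormatCell, and the grid types are illustrative, not any real control's API:

using System;

// Event route: the grid exposes a FormatCell event the consumer subscribes to.
public class CellFormatEventArgs : EventArgs
{
    public int Row;
    public int Column;
    public string Text;
}

public class EventGrid
{
    public event EventHandler<CellFormatEventArgs> FormatCell;

    internal string Format(int row, int column, string text)
    {
        var args = new CellFormatEventArgs { Row = row, Column = column, Text = text };
        var handler = FormatCell;
        if (handler != null) handler(this, args);
        return args.Text;
    }
}

// Strategy route: the consumer hands the grid a dedicated formatter object.
public interface ICellFormatter
{
    string FormatCell(int row, int column, string text);
}

public class StrategyGrid
{
    public ICellFormatter CellFormatter { get; set; }

    internal string Format(int row, int column, string text)
    {
        return CellFormatter != null ? CellFormatter.FormatCell(row, column, text) : text;
    }
}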
