In a typical pub/sub pattern there are clients that subscribe to servers that publish events. In my application, the publisher continuously publishes events that come in asynchronously. My clients sometimes lag in processing those events. My question is whether there is a workaround whereby the client always takes the most recent event sent out by the publisher, as opposed to having to process all the received events sequentially.
Take a look at the slow subscribers pattern from the zeromq guide:
http://zguide.zeromq.org/page:all#toc117
You can also build your own pub/sub over NetMQ using dealer-router sockets and implement your own publishing strategy. I would suggest you use credit-based flow control if you do that.
http://hintjens.com/blog:15
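One way to get "latest event wins" is to cap the buffer between publisher and subscriber at a single slot and let new events overwrite unprocessed ones (ZeroMQ has the ZMQ_CONFLATE socket option for roughly this behaviour). Here is a minimal in-process sketch of the idea in Python; the names are made up for illustration, this is not the NetMQ API:

```python
import threading

class LatestOnlyChannel:
    """Holds at most one pending event; publishing overwrites the old one."""
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = threading.Event()
        self._latest = None

    def publish(self, item):
        with self._lock:
            self._latest = item      # overwrite any unprocessed event
            self._ready.set()

    def take_latest(self, timeout=None):
        """Block until an event is available, then hand over only the newest one."""
        if not self._ready.wait(timeout):
            return None
        with self._lock:
            item, self._latest = self._latest, None
            self._ready.clear()
            return item

ch = LatestOnlyChannel()
for i in range(100):             # fast publisher outruns the subscriber
    ch.publish(i)
print(ch.take_latest())          # slow subscriber sees only the newest event -> 99
```

The subscriber never has to drain a backlog: intermediate events are simply dropped, which is exactly the trade-off the slow-subscriber pattern describes.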
I'd like to add Azure Service Bus to one of my projects. The project does a few different tasks: sending email, processing orders, sending invoices, etc.
What I am keen to know is: do I create separate queues to process all these different tasks? I understand that a queue has one sender and one receiver. That makes sense, but then I will end up with quite a few queues for one project. Is that normal?
Based on your description:
The project does a few different tasks: sending email, processing orders, sending invoices etc.
These messages are not related to each other. I like to differentiate between commands and events. Commands are sent to a specific destination with an expectation of an outcome, knowing that the operation could fail. With events it's different. Events are broadcast, and there are no expectations of success or failure. There's also no knowledge about the consumers of events, which allows complete decoupling. Events can only be handled using Topics/Subscriptions. Commands can be handled either with Queues or with Topics/Subscriptions (a topic with a single subscription would act as a queue).
If you go with events, you don't create separate consumer input queues. You create a topic and subscriptions on that topic. Say you have a PublisherApp and a ConsumerApp. PublisherApp would create a topic and send all messages to that events topic. ConsumerApp would create the required subscriptions, where each subscription has a filter based on the type of message you'd like that subscription to receive. For your sample, that would be the following subscriptions:
SendMail
ProcessOrder
SendInvoice
In order to filter properly, your BrokeredMessages will have to carry a header (property) that indicates the intent. You could either come up with a custom header or use a standard one, like Label.
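To see how label-based filtering plays out, here is a toy in-process model of a topic with filtered subscriptions. This is plain Python with made-up names, not the Azure SDK; it only illustrates the shape of the topology:

```python
from queue import Queue

class Topic:
    """Toy model of a topic whose subscriptions filter on a label."""
    def __init__(self):
        self._subscriptions = []   # (filter_label, queue) pairs

    def subscribe(self, label):
        q = Queue()
        self._subscriptions.append((label, q))
        return q

    def publish(self, label, body):
        # Every subscription whose filter matches gets its own copy.
        for flt, q in self._subscriptions:
            if flt == label:
                q.put(body)

topic = Topic()
send_mail = topic.subscribe("SendMail")
process_order = topic.subscribe("ProcessOrder")

topic.publish("SendMail", {"to": "a@example.com"})
topic.publish("ProcessOrder", {"order_id": 42})

print(send_mail.get_nowait())   # only the SendMail subscription sees mail messages
```

The publisher only knows about the topic; which subscriptions exist, and what they filter on, is entirely the consumer's business. That is the decoupling described above.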
I wrote a blog post a while ago on topologies with ASB; have a look, as it might give you more ideas on how to set up your entities.
If topology and entity management is not something you'd like to do yourself, there are good frameworks that can abstract it for you and allow your code to work without diving into the details too much. NServiceBus and MassTransit are two good examples to look at.
Full disclosure: I'm working on Azure Service Bus transport for NServiceBus.
First of all, look at the Azure Storage queue; I just switched to it in almost the same scenario. With Storage queues there is no monthly fee: you pay only for what you use.
A queue is not limited to one receiver or sender. What I mean by that is that you can have many listeners on a queue (in case your app is scaled out), but as soon as a listener picks up a message, it is locked and invisible to the others. (The default lock timeout is around 30 seconds in Azure Storage queues and 60 seconds in Service Bus, so be aware that if you need more time to process a message you must renew the lock; otherwise you will end up processing the same message multiple times.)
You can use one queue for all your events, and depending on the message body you can run different message processors. For instance, in my project I send messages with a Type key that identifies who is going to process the message. You can also use one queue per type and then have your listeners listen on multiple queues.
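As a sketch of that single-queue approach, here is a toy dispatcher in Python that routes on a Type key carried in the message body. The handler names and message shapes are hypothetical; this is not the Azure SDK:

```python
import json
from queue import Queue

# Hypothetical handlers, keyed by the "Type" field carried in the message body.
def send_email(msg):
    return f"emailed {msg['to']}"

def process_order(msg):
    return f"processed order {msg['id']}"

HANDLERS = {"SendEmail": send_email, "ProcessOrder": process_order}

work_queue = Queue()
work_queue.put(json.dumps({"Type": "SendEmail", "to": "a@example.com"}))
work_queue.put(json.dumps({"Type": "ProcessOrder", "id": 7}))

while not work_queue.empty():
    msg = json.loads(work_queue.get())
    print(HANDLERS[msg["Type"]](msg))   # route on the Type key
```

The trade-off versus one queue per task is that adding a task means touching the dispatch table rather than provisioning a new entity.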
Look at this link for a comparison table.
Topics and subscriptions suit your scenario best.
At the subscription end, you can filter the messages based on criteria; in your case, that can be your task, e.g. SendEmail or ProcessOrder.
If you want to add more tasks in the future, you will be free from making any changes on the service bus itself, and will only have to make the required changes in the sender and receiver code.
If you use Service Bus queues or Storage queues, you would have to create more queues in the future to add new tasks, and that can become complicated at the management level of your Azure infrastructure.
There are two approaches, based on your design.
Queue and message: the message body has an indicator for the task (sending email, processing orders, sending invoices, etc.), and the code then processes the message accordingly.
Topics and subscriptions: define a topic for each task, and brokered messages are processed accordingly. This should be better than queues.
I am trying to implement a Publish-Subscribe Channel using NServiceBus. According to the Enterprise Integration Patterns book, a Publish-Subscribe Channel is described as:
A Publish-Subscribe Channel works like this: It has one input channel
that splits into multiple output channels, one for each subscriber.
When an event is published into the channel, the Publish-Subscribe
Channel delivers a copy of the message to each of the output channels.
Each output end of the channel has only one subscriber, which is
allowed to consume a message only once. In this way, each subscriber
gets the message only once, and consumed copies disappear from their
channels.
Hohpe, Gregor; Woolf, Bobby (2012-03-09). Enterprise Integration
Patterns: Designing, Building, and Deploying Messaging Solutions
(Addison-Wesley Signature Series (Fowler)) (Kindle Locations
2880-2883). Pearson Education. Kindle Edition.
There is a sample containing a publisher and subscriber at: http://docs.particular.net/samples/step-by-step/. I have built the sample solution for version 5. I then ran multiple subscribers in different command line windows to see how the system behaves.
Only one subscriber receives the event that is published, even though there are multiple subscribers. Publishing multiple events causes at most one subscriber to handle the event.
I cannot find any information about how to configure NServiceBus as a Publish-Subscribe Channel as defined in the quoted text. Does anyone know how to do this? Is this not supported?
[Update 2 Feb 2016]
I had not renamed my endpoints after copying the subscribers. Renaming them gave me the desired behaviour.
If you are running multiple instances of the same subscriber, then what you are describing is the intended functionality.
Scenarios
1 Publisher, 1 Logical Subscriber
Some processor publishes an event and an email handler is subscribed to that event. When the event is consumed by the email handler, the email handler will send an email. In this scenario, there is only one logical subscriber, the email handler. Therefore, only one copy of the event is sent.
1 Publisher, 2 Logical Subscribers
In the next scenario, there are two logical subscribers: the invoice handler and the email handler. When the processor publishes an event, two copies of the event are sent: one to the invoice handler and one to the email handler.
1 Publisher, 2 Instances of 1 Logical Subscriber
In this scenario, there is only one logical subscriber even though two services are subscribed to the event. In this case, only one copy of the event is sent, and only one of the email handlers will process it. If both email handlers processed the event, you would have N operations done for N instances of a subscriber; in other words, two emails would be sent instead of one. Most likely, this scenario has two email handlers because a single handler couldn't keep up with the load from the processor, or because redundancy was required.
Summary
If you simply spin up multiple instances of the same subscriber, you will still only have one subscriber handle that event. This is by design. Otherwise, the work would be duplicated for every additional process.
If you want to see two logical subscribers, create a new project within that solution, with a different name, and subscribe to the same event (either in code or with config files). Then launch the publisher and one instance of each subscriber. Once the publisher publishes an event, you'll see both subscribers process it.
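The delivery rule described in these scenarios (one copy per logical subscriber, with instances of the same subscriber competing for that single copy) can be sketched in a few lines of Python. This is a toy model for intuition only, not NServiceBus itself:

```python
class Bus:
    """Toy model: one copy of each event per logical subscriber;
    instances of the same subscriber compete for that single copy."""
    def __init__(self):
        self._subscribers = {}   # logical name -> list of handler instances

    def subscribe(self, name, instance):
        self._subscribers.setdefault(name, []).append(instance)

    def publish(self, event):
        delivered = []
        for name, instances in self._subscribers.items():
            # Only ONE instance per logical subscriber receives the event.
            # (We pick the first; a real transport load-balances between them.)
            delivered.append(instances[0](event))
        return delivered

bus = Bus()
bus.subscribe("EmailHandler", lambda e: f"email-1 got {e}")
bus.subscribe("EmailHandler", lambda e: f"email-2 got {e}")   # 2nd instance, same logical subscriber
bus.subscribe("InvoiceHandler", lambda e: f"invoice got {e}")

print(bus.publish("OrderPlaced"))
# -> ['email-1 got OrderPlaced', 'invoice got OrderPlaced']
```

Three processes are running, but only two copies of the event are delivered, because there are only two logical subscribers.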
The subscribers need to start up first so they can send the message saying they're interested in subscribing to an event. Then the publisher needs to boot up and have time to process those subscription messages. Only once all subscriptions are stored can you safely publish messages. If you publish before all subscriptions are actually stored, NServiceBus will only send the message to the subscribers it already knows about. A second later all subscribers might be known, but by then you've already published your message.
When using durable persistence, like SQL Server, the subscriptions are stored and kept, so after restarting the service all subscribers are immediately known. With in-memory storage, the subscriptions are lost every time your publisher is restarted, so the period you need to wait until all subscriptions are processed lasts longer.
It can also be that not every subscriber is actually sending out a subscription message, because you might have gotten the configuration wrong.
I've written a tutorial myself which might help out as well.
I'm developing a web application subscribing to NServiceBus events being published by a backend application. With one worker process this works as expected, but with multiple IIS worker processes on the same IIS server, only one process receives the events. I suppose this is due to all worker processes sharing the same input queue and therefore "stealing" events from each other. My question therefore is how to ensure that the event handlers in every worker process receive the events they have subscribed to?
While generating input queue names dynamically would solve the problem, it would soon leave a lot of unused queues around the system.
This sounds like a pretty common problem and so should have a common solution?
Any feedback would be appreciated.
/Magnus
Unfortunately NServiceBus does not support web gardens.
We will consider adding support for web gardens in the future, based on user demand.
We suggest for the mean time to consider virtualization as a scale out solution instead.
I have raised an issue, see https://github.com/Particular/NServiceBus/issues/2015, please consider adding extra comments to the issue.
Based on the lack of search results no matter how I word this, I'm pretty sure I'm thinking about this wrong.
I have a program that will be creating a large number of objects, and a number of events should be wired up so that each instance listens to all the other instances, especially as soon as an instance is created. Managing this through plain events doesn't make sense to me, so my thought is to use the pub/sub pattern to make things easier to handle. The plan is for the pub/sub to be purely in-process, so events would not cross any boundaries; events would also not be persisted anywhere outside of memory, so there is no playback of events.
The problem comes with events that are typically CancelEventArgs. Is there a way to publish an event that subscribers can mark as cancelled?
Here are my current thoughts on a possible solution:
Publish a ShouldCancelEventX event and wait some amount of time for an EventXCancelled event to be published back. If none is published within the time span, publish the EventX event. The biggest issue I see with this is the arbitrary time span to wait.
Give the pub/sub implementation a little more logic so that it can notify the publisher after all subscribers have received the event. This would let the publisher of ShouldCancelEventX know when all the subscribers have received the message. This just feels wrong, as every implementation of pub/sub I've seen provides void Publish methods. So that, again, leads me to believe I'm thinking about this the wrong way.
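For what it's worth, the second option is workable if Publish dispatches to subscribers synchronously and the event object carries a cancel flag, mirroring CancelEventArgs. A sketch in Python with hypothetical names, not any particular library:

```python
class CancelableEvent:
    """Event object with a cancel flag, in the spirit of CancelEventArgs."""
    def __init__(self, name):
        self.name = name
        self.cancelled = False

class InProcessBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, name, handler):
        self._handlers.setdefault(name, []).append(handler)

    def publish(self, event):
        """Dispatch synchronously; returns False if any subscriber cancelled."""
        for handler in self._handlers.get(event.name, []):
            handler(event)
            if event.cancelled:
                return False
        return True

bus = InProcessBus()
bus.subscribe("Closing", lambda e: setattr(e, "cancelled", True))

if bus.publish(CancelableEvent("Closing")):
    print("proceed")
else:
    print("cancelled by a subscriber")   # this branch runs
```

Because everything is in-process and synchronous, there is no arbitrary wait: Publish simply returns a result instead of void, which is the part that diverges from the usual fire-and-forget pub/sub contract.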
I'm working on an instant messaging program in C# (for learning only).
I just want to know if my approach is right or wrong.
I created a Client class which contains a NetworkStream and read/write functions.
The server creates a new thread for every client, and the thread listens for new messages.
Is there a better way?
You don't necessarily need to spawn a thread for each client. I'd investigate the Observer design pattern as it addresses the publish-subscribe problem, which is a good way to look at an instant messaging application, particularly if you want multiple listeners to one talker.
Here's a good place to start: http://www.blackwasp.co.uk/Observer.aspx. This link discusses the Observer pattern and mentions instant messaging: http://www.oodesign.com/observer-pattern.html.
You may find that a single-threaded approach is able to keep up with a lot of messages. Depending upon how you design your classes, you may find it useful to put entire conversations in their own threads. You should also think about using queues to handle incoming and outgoing messages, with queue readers in their own threads as well.
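Here is a minimal sketch of the Observer idea applied to chat, where each message from a talker fans out to every attached listener. It's illustrative Python rather than C#, with made-up class names:

```python
class Talker:
    """Subject in the Observer pattern: one talker, many listeners."""
    def __init__(self, name):
        self.name = name
        self._listeners = []

    def attach(self, listener):
        self._listeners.append(listener)

    def say(self, text):
        # Fan the message out to every attached observer.
        for listener in self._listeners:
            listener.receive(self.name, text)

class Listener:
    """Observer: collects every message it is notified about."""
    def __init__(self):
        self.inbox = []

    def receive(self, sender, text):
        self.inbox.append((sender, text))

alice = Talker("alice")
bob, carol = Listener(), Listener()
alice.attach(bob)
alice.attach(carol)
alice.say("hello")
print(bob.inbox)   # both listeners receive every message from alice
```

In an IM app the `receive` method would write to the client's socket or outgoing queue instead of a list, but the attach/notify structure stays the same.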
Sounds like a fun project.
Try WCF. Here is a nice sample.