Async WCF integration with NServiceBus - C#

I am new to NServiceBus and am trying to do something that seems harder than it should be... so I am starting to wonder if I am missing something about the NSB bigger picture.
Here is the scenario:
Expose WCF endpoint to client from which they request a long-running operation.
I'd like to map the inbound request to a NServiceBus Message.
Publish the message to the bus for processing.
Send a reply to the client acknowledging that their request has been received and that we will begin processing it.
Bus works the message through a handler.
When the work has been completed, call the client back on their "callback" endpoint (wcf) to give them the result of the long-running request that they made.
I welcome corrective criticisms, examples or links that may be of use. Thank you in advance!

There is potential for you to do this via the NSB pipeline. You can configure handlers to execute in the order you specify. In your case this would be book-ended with the notifications. Depending on the use case it may be better to forward the notifications to another endpoint that handles just those types of communications. What you need to consider are the failure scenarios. If the handler fails and the message gets retried, what will happen?
This is all predicated on the idea that you do not need to maintain state. If you do, then you will want to look into using a Saga. This will keep state around per long running transaction and give you some more features you may require, such as timeouts.
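To make the saga idea concrete, here is a minimal, in-memory sketch of the pattern it gives you: state correlated to one long-running request, updated as messages arrive, and completed when the work is done. This is NOT the NServiceBus API (there you inherit `Saga<TSagaData>` and implement `IAmStartedByMessages<T>` / `IHandleMessages<T>`, and persistence is handled for you); all names below are hypothetical and exist only to illustrate the pattern.

```csharp
using System;
using System.Collections.Generic;

// State kept per long-running request (Saga data in NServiceBus terms).
public class LongRunningRequestState
{
    public Guid RequestId { get; set; }
    public string ClientCallbackAddress { get; set; }
    public bool WorkCompleted { get; set; }
}

public class LongRunningRequestSaga
{
    // NServiceBus persists saga data for you; a dictionary stands in here.
    private readonly Dictionary<Guid, LongRunningRequestState> store = new();

    // Corresponds to being started by the inbound WCF request message.
    public void Handle(Guid requestId, string callbackAddress)
    {
        store[requestId] = new LongRunningRequestState
        {
            RequestId = requestId,
            ClientCallbackAddress = callbackAddress
        };
    }

    // Corresponds to handling the "work completed" message, correlated by id.
    // Returns the address where the WCF callback should be delivered.
    public string HandleCompleted(Guid requestId)
    {
        var state = store[requestId];
        state.WorkCompleted = true;
        store.Remove(requestId);            // MarkAsComplete() in NServiceBus
        return state.ClientCallbackAddress;
    }

    public bool IsInFlight(Guid requestId) => store.ContainsKey(requestId);
}
```

The point is the correlation: each message for the same request finds the same state, which is what lets the service call the right client back when the long-running work finishes.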

A long running process can be either synchronous or asynchronous. It can't be both.
You can use NServiceBus for asynchronous processing of the long-running task and for generating your progress information. Adam mentions sagas. You can use a saga to keep track of progress. It will also help you with dividing your process into more granular tasks, and gives you things like automatic retries that deal with transient failures for free.
However, you will have to use another mechanism to send the progress information back to the user. Periodic polling, long polling, hidden iframe, websockets, whatever - have a look at ideas exposed by SignalR. There's a nice video here that talks about sending notifications to browsers.

According to the NServiceBus website, you can expose your NSB endpoint as a WCF service:
Interoperability
You can expose your NServiceBus endpoints as WCF services with as
little as one line of code and the standard WCF configuration. All you
need to do is write an empty class that inherits from
NServiceBus.WcfService specifying the types of the request and the
response and NServiceBus does the rest as follows:
public class MyService : NServiceBus.WcfService<MyCommand, MyErrorCodes> { }
I have done some work integrating legacy MSMQ clients with NServiceBus - it works but you have to make sure the message is constructed correctly.
Messages sent to an NServiceBus endpoint must be enclosed in a <Messages/> envelope and must have a namespace. For example:
<Messages xmlns="http://tempuri.net/MyNservicebusMessage">
  <MyNservicebusMessage>...message body...</MyNservicebusMessage>
  ...etc
</Messages>
Also, if you want to use NServiceBus auditing you have to ensure the MSMQ "Response Queue" message header has a value, although I don't think the value matters.
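For a legacy MSMQ client, the envelope above can be built with `System.Xml.Linq`. A minimal sketch, assuming the message type name and namespace from the example (both illustrative):

```csharp
using System;
using System.Xml.Linq;

// Wraps a message body in the <Messages/> envelope (with namespace) that an
// NServiceBus endpoint expects on its input queue.
public static class NsbEnvelope
{
    public static string Wrap(string messageTypeName, string ns, XElement body)
    {
        XNamespace xns = ns;
        var envelope = new XElement(xns + "Messages",
            new XElement(xns + messageTypeName, body));
        return envelope.ToString();
    }
}
```

For example, `NsbEnvelope.Wrap("MyNservicebusMessage", "http://tempuri.net/MyNservicebusMessage", new XElement("OrderId", 42))` yields a `<Messages>` element in the expected namespace with the message nested inside.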


Architecture for capturing the result of a webservice and sending it to other webservices

I am developing an application in C#.
I am calling a webservice (let's call this WS1) and I want the result to be sent to 1 or more other (external) webservices (e.g. WS2 and WS3).
In case one of the receiving webservices (for example WS2) is down, I want to make sure this call is not lost and is tried again at a later time.
What is a good architecture to achieve this?
Does anyone have a link to an online document where an architecture like this is described?
Some questions you might need to ask before you dig into the architecture. I assume that WS1 and WS2 are both owned by you/your team.
How long do you want to wait for WS2 to be back up and running once it is down?
What is the response time expectation from WS1 and WS2?
Are there any other downstream services consuming WS1, and do they have an SLA / response time expectation?
How does WS1 expect to consume the response from WS2?
In short, an event-driven approach looks like the best fit here, i.e. you can have a queue between WS1 and WS2 such that WS1 posts a message into a Request Queue, WS2 picks it up when ready, and places the response into a Response Queue from where WS1 can read it.
Both AWS and Azure offer managed queue services you could use for this, for example.
This may or may not work based on how you answer the previous questions. Sometimes it is better to use a regular REST-based call with a retry strategy (for example, an exponential back-off strategy). With this you may also get faster feedback on failures. One may choose this if the answers to the above questions are:
If the wait time is short, i.e. in terms of seconds
There is an expectation of a really fast response time, in which case it is better to report a failure immediately than to wait on it
If there are downstream applications that have a synchronous dependency on WS1, so WS1 cannot endlessly wait for WS2 to process the request
There isn't a predictable response channel from WS2
On a note, if you use an event-based architecture then WS2 may not be a web service anymore :)
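The exponential back-off retry mentioned above can be sketched in a few lines. This is a minimal illustration, not a production policy: the base delay and attempt count are illustrative, and a real version would add jitter and a transport-specific notion of what counts as a transient failure.

```csharp
using System;

public static class Retry
{
    // Delay before retrying after a given (zero-based) failed attempt:
    // 200 ms, 400 ms, 800 ms, ... for the default base of 200 ms.
    public static TimeSpan BackoffDelay(int attempt, int baseMs = 200) =>
        TimeSpan.FromMilliseconds(baseMs * Math.Pow(2, attempt));

    // Calls 'operation' (e.g. an HTTP POST to WS2) until it succeeds or the
    // attempts run out. Returns true on success, false if WS2 stayed down,
    // at which point the caller should report the failure immediately.
    public static bool Execute(Func<bool> operation, int maxAttempts, Action<TimeSpan> wait)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            if (operation()) return true;
            if (attempt < maxAttempts - 1) wait(BackoffDelay(attempt));
        }
        return false;
    }
}
```

Injecting the `wait` action (instead of calling `Thread.Sleep` directly) keeps the retry logic testable and lets you cap the total time spent retrying.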
I think you might need a load balancer for your web service architecture.
You can use a Raspberry Pi or any PC, install Linux and run nginx as a load balancer, as I'm currently doing.
You can read more at
How to configure Nginx as Load Balancer
If I understand correctly you are looking to have an event driven architecture where requests can be replayed.
SNS and SQS both help decouple your architecture; one is a topic and the other is a queue. The difference is whether you want these requests pushed to you or to poll for them.
SNS use case:
You push messages to an SNS topic, which then pushes them to its subscribers, for example a REST endpoint. If delivery fails after the configured retry policy is exhausted, you can route the message to a DLQ (dead-letter queue); if it succeeds, SNS is done with the message.
SQS use case:
You push messages to an SQS queue, where they are stored for up to 14 days. You then poll the queue for messages; if processing succeeds, you remove the message from the queue. Otherwise you can use the DLQ strategy or just leave the message on the queue.
Some good reading is SNS Fanout strategies

Failure while sending batch events to azure eventhub

I'm trying to send a batch of events using the Microsoft.Azure.EventHubs call "SendAsync(IEnumerable)".
Is this a transactional operation, i.e. is there a possibility of partial success/failure? Is there any official documentation confirming that there will be no duplicates if I resend in case this API throws an exception?
A SendAsync call either succeeds or fails as a whole; partial success is not possible.
If the SendAsync call doesn't throw an exception, the service has taken responsibility for delivering the events. If it throws, the events were not sent. If the request times out before a response comes back from the service, it's unclear whether the send was successful or not.
Every time you send an event, neither the EventHubProducerClient nor the Event Hubs service has any notion of the identity of the event -- each send is just a new piece of data.
(Your docs are for v4 of the library, which doesn't have the EventHubProducerClient. This is from version 5.2.0 in case you're wondering.)
If you're worried about duplication of events, you'll need to determine what the needs of your application are -- is it better to lose data or to deal with duplicates when processing? (You can add custom metadata to events to help you decide how to process them.) Event Hubs has an at-least-once guarantee, so even if you never publish duplicates, there is a chance the service will return some events multiple times. This is common for a messaging service.
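Since the guarantee is at-least-once, a common answer on the consuming side is an idempotent processor that tracks an application-level id carried in the event's custom metadata. A minimal sketch, where the id itself and the in-memory tracking are assumptions for illustration (a real consumer would persist what it has seen):

```csharp
using System;
using System.Collections.Generic;

// Skips events whose application-assigned id has already been processed,
// making redelivery by the service harmless.
public class IdempotentProcessor
{
    private readonly HashSet<string> seen = new();
    public int ProcessedCount { get; private set; }

    // Returns true if the event was processed, false if it was a duplicate.
    public bool Process(string appId)
    {
        if (!seen.Add(appId)) return false;  // already handled: skip
        ProcessedCount++;                    // real work would go here
        return true;
    }
}
```

With this in place, resending a batch after an ambiguous timeout is safe: the worst case is a duplicate that gets skipped.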
If you have more questions about this, I'd recommend filing an issue in this GitHub repo where the new library lives.

AMQP routing messages to multiple queues in order

Little Background
My app publishes messages to an exchange, this app knows nothing about where the message needs to go, or where it's going to go.
The message needs to travel through a couple of steps (queues) in a pipeline. For simplicity we'll call them pre-process, process, and post-process.
My question is
Is there anything in AMQP (Rabbit-mq specifically) that can help me with the handling of these messages in that order? (pre-process, then process, then post-process).
Solutions I currently know of
Handling the routing logic in the services themselves, so the pre-process service would have to know that the next step is process. The services would handle publishing the messages to the next exchange or queue.
The only issue I have with this is that I don't necessarily want the pre-process service to know, or care about, where the messages have to go next. If I need to add another service in between pre-process and process, I would have to change app code or configuration in pre-process, and then also make sure that the new service knows that the next step is process.
Using some type of service bus.
I don't know much about a Service Bus, but I think this is what it's made to handle.
The only issue I have with a service bus is that all the implementations I have looked at (NServiceBus, MassTransit) look pretty heavyweight. They have their own new sets of terminology and lots of features, and if something goes wrong we now need to be experts in that specific bus technology; they seem to add a ton of unneeded complexity to the process.
Creating my own router service.
Each message would contain info in its headers about which queues it has hit. The router would then be in charge of sending the message to the correct service. Each service would always publish its messages back to the router when it has finished its job.
Are there any smells with doing something like this? The only issue I see with it is that it seems like we're taking a lot of control away from our queue system, which has pretty great routing capabilities.
Any thoughts on the matter, or some examples from the trenches would be great.
Rabbit has great routing capabilities, but it does not do service orchestration. That is more the scope of a service bus, which can rely on a message queue.
You can refine solution 1:
A message is self-sufficient and contains all the routing logic.
For example, the message can contain a header with all the routing logic associated with the processing chain, e.g. a routings property:
"routings": ["pre-process" , "process", "post-process"]
So a processing step does not need to know the next step: it pops the first entry of the routings array and sends the next message to that queue. Pretty suitable if the processing chain is linear and does not require conditional steps or history tracking.
So each service must contain the routing logic.
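The pop-the-head step described above can be sketched as a small helper. The `routings` header shape comes from the answer above; everything else here is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Routing-slip helper: the message header carries the remaining chain,
// each step pops the head and forwards the message to that queue.
public static class RoutingSlip
{
    // Returns the queue to send to next, and the slip the next message should carry.
    public static (string nextQueue, List<string> remaining) PopNext(List<string> routings)
    {
        if (routings == null || routings.Count == 0)
            return (null, new List<string>());   // chain finished
        return (routings[0], routings.Skip(1).ToList());
    }
}
```

Each service calls this on the header it received and publishes the translated message, with the shortened slip, to `nextQueue`; a `null` result means the chain is complete.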
The third solution is simpler to manage (separation of concerns between services). One service is responsible for the routing and calls the appropriate processing step through RabbitMQ. The smell may be that it needs more messages than the first solution; the cost of this drawback depends on your requirements. In fact, to improve on this you will tend towards a service bus, which is a mix of solutions 1 and 3.
I used the third solution at work. The processing steps are defined by a state machine.
I think you can play with a topic exchange (https://www.rabbitmq.com/tutorials/tutorial-five-dotnet.html).
You can encode the processing history into the routing key.
The first processor binds a private queue to a known exchange, subscribes to messages with routing key "raw" (for example), and publishes a new message with routing key "raw.proc1" on the same exchange.
The second processor subscribes to messages with routing key "raw.proc1" on the same exchange.
The third processor looks for messages with routing key "raw.proc1.proc2" on the same exchange, and so on.
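The routing-key scheme above ("raw" -> "raw.proc1" -> "raw.proc1.proc2") can be captured in two tiny helpers. The processor names are illustrative:

```csharp
using System;
using System.Linq;

// Encodes processing history in the routing key, one dotted segment per step.
public static class RoutingKeys
{
    // The key a processor publishes with after doing its work.
    public static string AfterProcessing(string incomingKey, string processorName) =>
        incomingKey + "." + processorName;

    // True if this processor already appears in the message's history,
    // which guards against accidental routing loops.
    public static bool HasVisited(string key, string processorName) =>
        key.Split('.').Contains(processorName);
}
```

Each processor binds its queue to the exact key it expects and republishes with `AfterProcessing`, so the key doubles as an audit trail of which steps the message has been through.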

Calling SignalR client from webfarm

I have the following message transport scenarios
Client -> Calls SignalR -> Calls NServiceBus -> Process Message internally -> Calls NServiceBus Gateway service with Result -> Calls SignalR Hub -> Updates the client with result.
In choosing whether to use SignalR vs. long polling, I need to know if SignalR is scalable. So in doing my homework I came across SignalR on Azure Service Bus. The setup is done in the Global.asax application start.
Ultimately I need to be able to do this, from inside an NServiceBus handler:
var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
context.Clients.Group(group).addMessage(message);
The question is whether context will be jacked up because I'm (potentially) calling it from a different machine than the one the client is connected to.
Also, what is the sharding scheme that the SignalR implementation uses to seed the topics? I know I can configure it to use N topics, but how does it actually determine which message goes to which topic, and is that relevant from an external caller's PoV?
You should be able to use GlobalHost.ConnectionManager.GetHubContext in any application where you have registered ServiceBusMessageBus as your IMessageBus via SignalR's GlobalHost.DependencyResolver. This is done for you if you call GlobalHost.DependencyResolver.UseServiceBus(...) in the application.
If you do this, a message will be published to Azure Service Bus for each call to addMessage or any other hub method on the IHubContext returned from GetHubContext. If there are subscribed clients connected to other nodes in the web farm, the other nodes will pick up the message from Service Bus and relay it to the subscribed clients.
Which topic a message goes to should not be relevant from the PoV of an external caller. You can use multiple topics to improve throughput, but for most use cases one should be sufficient.
If you choose to use more than one topic, you can think about the topic a message goes to as being essentially random. The only thing that is guaranteed is that messages from the same sender will go to the same topic. This allows SignalR to keep messages from the same sender in order.
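One simple way to get the property described above (each sender deterministically mapped to one of N topics, so its messages stay in order) is a stable hash of the sender's id. This is only a sketch of one such assignment scheme, not how SignalR's scale-out actually picks topics:

```csharp
using System;

public static class TopicSharding
{
    // Maps a sender/connection id to a topic index in [0, topicCount).
    // Uses a hand-rolled stable hash because string.GetHashCode() is
    // randomized per process on modern .NET and would break stability.
    public static int TopicFor(string connectionId, int topicCount)
    {
        int hash = 17;
        foreach (char c in connectionId)
            hash = unchecked(hash * 31 + c);
        return Math.Abs(hash % topicCount);
    }
}
```

Because the mapping depends only on the sender id, every message from that sender lands on the same topic, which is exactly what preserves per-sender ordering while still spreading load across topics.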
Caveat emptor: SignalR has not yet had an official release supporting scale out. The 1.1 version will be the first release to support scale out officially.

nServiceBus used with sockets

I am fixing a .net app written on top of nServiceBus and have a few questions:
The app is configured AsA_Publisher, and when it starts it waits for incoming
connections on a socket. Do you know why it might have been implemented like so?
Why sockets? The socket is created during the Run method of a class which implements IWantToRunAtStartup.
Once a message arrives, the message is written to a queue (Q1). The message
is then read from the queue (Q1). The format of the message is changed and then
inserted into yet another queue (Q2). The message is then read from the queue
(Q2) and sent to another application by calling a web service. The whole idea is
to change the message format and send it off to the final destination. If
nServiceBus is built on top of MSMQ, then why is the application creating more
queues and managing them?
I see absolutely nothing about Publish or Subscribe anywhere in the project. I guess it is relying on the socket to receive messages and if so then it is not really taking advantage of nServiceBus's queuing facility? Or am I lost...
If queues are needed, and if I were to build this, I would have one app writing to
the queue (Q1), another app reading from the queue (Q1), changing the format,
and inserting into another queue (Q2), and finally a third app reading from
(Q2) and sending it off to the web service. What do you think?
Thanks,
I see nothing wrong with opening a socket in Run in an IWantToRunAtStartup. It must somehow be required that the service can be reached through some custom protocol implemented on top of sockets.
Processing the incoming socket messages by immediately bus.Sending a message is also the way to go - the greatest degree of reliability is achieved by immediately doing the safest thing possible: sending a durable message.
Performing the message translation in a handler and bus.Sending the result in another message is ALSO the way to go - especially if the translation is somehow expensive and it makes sense to be able to pick up processing at this point if e.g. the web service call fails.
Making a web service call in a message handler is also the way to go - especially if the web service call is idempotent, so it doesn't break anything if the message ever gets retried.
In other words, it sounds like the service correctly bridges a socket-based interface to a web service-based interface.
It sounds weird, however, that the service employs multiple queues to achieve this. With NServiceBus it would be entirely sufficient with one single queue: the service's input queue.
