Alternative to MSMQ for queued WCF in the cloud? - c#

I'm trying to write a durable WCF service, whereby clients can handle that the server is unavailable (due to internet connection, etc) gracefully.
All evidence points to using the MSMQ binding, but I can't do that because my "server" is the Azure cloud, which does not support MSMQ.
Does anyone have a recommended alternative for accomplishing durable messaging with Azure?
EDIT: To clarify, it's important that the client (which does not run on Azure) has durable messaging to the server. This means that if the internet connection is unavailable (which may happen often, since the client is connected over 3G cellular), messages are stored locally for later delivery.
Azure queuing makes no sense here, because if the internet connection were reliable enough to deliver the message to the Azure queue, it could just as easily have delivered it to the service directly.

It turns out nothing like this exists, so I'm developing it on my own. I have some basic queueing implemented and I'll have more updates soon. Stay tuned!

I would suggest some implementation that uses Azure queues. Basically, put your "request" in a queue, read the queue, and try to make the request; if the request succeeds, delete the message from the queue, and if not, leave it there. Azure queues have a setting called the visibility timeout, which controls how long a received message stays hidden from other callers. So in the scenario above, if you set the visibility timeout to 5 minutes, your retries occur every 5 minutes. A rough sketch of this retry loop follows the links below. See these links for more information:
http://wag.codeplex.com/
http://azuretoolkit.codeplex.com/
http://msdn.microsoft.com/en-us/library/dd179363.aspx
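For illustration, here's a minimal sketch of that retry loop using the current Azure.Storage.Queues SDK (the toolkits linked above predate it); the queue name and the sendToService delegate are placeholders you'd replace with your own request logic.

    using System;
    using Azure.Storage.Queues;
    using Azure.Storage.Queues.Models;

    class QueueRetryWorker
    {
        // "pending-requests" and sendToService are placeholders for your own queue and call.
        public static void ProcessOnce(string connectionString, Func<string, bool> sendToService)
        {
            var queue = new QueueClient(connectionString, "pending-requests");
            queue.CreateIfNotExists();

            // Received messages stay hidden for 5 minutes; if we don't delete them
            // (i.e. the request failed), they reappear and get retried automatically.
            QueueMessage[] messages = queue.ReceiveMessages(
                maxMessages: 10, visibilityTimeout: TimeSpan.FromMinutes(5));

            foreach (QueueMessage message in messages)
            {
                if (sendToService(message.MessageText))
                {
                    // Success: remove the message so it is not retried.
                    queue.DeleteMessage(message.MessageId, message.PopReceipt);
                }
                // Failure: do nothing; the message becomes visible again after 5 minutes.
            }
        }
    }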

Related

How to store messages at the client side when the server goes down, using WCF?

I came across a situation in my work environment where I have a WCF service which receives messages from a client and stores them in a database. Now my problem is: suppose the server is down for 10 minutes. The messages from those 10 minutes should be stored somewhere on the client, and the client should check for the availability of the server every minute. Is there a procedure I could follow? Any help would be appreciated. Thank you.
Binding: netTcpBinding
MSMQ does exactly what your first sentence says: when you send an MSMQ message and the remote queue can't be reached, the message stays on the client and the built-in MSMQ service retries in the background. That way your message, once sent, is "safe"; it will reach its destination if at all possible. (If you have a massive message volume and messages need to be stored for a long time, storage capacity can become an issue, but that's very, very unlikely.)
Configure WCF to send/receive MSMQ messages
I'd only do this if it's necessary. It involves modifying both the service and the client, and the documentation isn't too friendly.
Here's the documentation for MsmqBinding. Steps 3 and 4 for configuring the WCF service are blank, which isn't helpful; when I switched to the .NET 4.0 documentation, those details were filled in.
I looked at several tutorials, and if I were going to pursue this I'd start with this one. I find that a lot of tutorials muddy the concepts by explaining too many things at once and including unnecessary information about other parts of the writers' projects.
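As a rough, hedged sketch of what the service-side wiring can look like in code rather than config (the contract, queue path, and service class here are made up for illustration):

    using System.Messaging;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderSubmission            // hypothetical contract
    {
        [OperationContract(IsOneWay = true)]     // queued operations must be one-way
        void SubmitOrder(string orderXml);
    }

    public class OrderSubmissionService : IOrderSubmission
    {
        public void SubmitOrder(string orderXml) { /* persist the order here */ }
    }

    class MsmqHostSketch
    {
        public static ServiceHost Start()
        {
            const string queuePath = @".\private$\orders";           // placeholder queue
            if (!MessageQueue.Exists(queuePath))
                MessageQueue.Create(queuePath, transactional: true); // NetMsmqBinding expects a transactional queue by default

            var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
            {
                Durable = true,      // messages survive restarts
                ExactlyOnce = true   // requires the transactional queue above
            };

            var host = new ServiceHost(typeof(OrderSubmissionService));
            host.AddServiceEndpoint(typeof(IOrderSubmission), binding,
                "net.msmq://localhost/private/orders");
            host.Open();
            return host;
        }
    }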
The client queues its messages locally
If you don't want to make lots of modifications to your service to support MsmqBinding, you could just implement the queuing locally. If the WCF service is down, the client puts the message in a local MSMQ queue, then at intervals reads the messages back from that queue and tries sending them to the WCF service again. (If the WCF service is still down, put the message back in the queue.)
I'd just send messages straight to the queue and have another process dequeue and send to WCF. That way the client itself just "fires and forgets" if that's okay.
That way you don't have to deal with the hassle of modifying your service, but you still get the benefit. If your message can't go to the WCF service then it goes someplace "safe" where it can even survive the client app terminating or the computer restarting.
Sending and receiving messages in a local queue is much easier to configure. Your client can check to see if the queues exist and create them if needed. This is much easier to work with and the code samples are much more complete and on-point.
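Here's a minimal sketch of that local store-and-forward idea with System.Messaging; the queue path and the sendToWcfService delegate are placeholders, and your real client would call TryForward from a timer:

    using System;
    using System.Messaging;   // reference System.Messaging.dll

    class LocalStoreAndForward
    {
        const string QueuePath = @".\private$\outbox";   // placeholder local queue

        static MessageQueue OpenQueue()
        {
            // The client can create its own queue on first run.
            var queue = MessageQueue.Exists(QueuePath)
                ? new MessageQueue(QueuePath)
                : MessageQueue.Create(QueuePath, transactional: false);
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.DefaultPropertiesToSend.Recoverable = true;  // write to disk so messages survive a reboot
            return queue;
        }

        // Called by the client instead of calling WCF directly: fire and forget.
        public static void Enqueue(string payload)
        {
            using (var queue = OpenQueue())
                queue.Send(payload);
        }

        // Called periodically (e.g. from a timer): try to drain the queue.
        public static void TryForward(Func<string, bool> sendToWcfService)
        {
            using (var queue = OpenQueue())
            {
                while (true)
                {
                    Message message;
                    try
                    {
                        message = queue.Receive(TimeSpan.Zero);  // throws if the queue is empty
                    }
                    catch (MessageQueueException)
                    {
                        return;  // nothing left to forward
                    }

                    var payload = (string)message.Body;
                    if (!sendToWcfService(payload))
                    {
                        queue.Send(payload);   // service still down: put it back and stop
                        return;
                    }
                }
            }
        }
    }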

Is it advisable to call a WCF MSMQ Queued endpoint from a WPF client?

I have messages that I want to send from multiple WPF client applications to a service that can be processed some time after being sent.
Because of expected intermittent connectivity issues between client and server and necessary down time for the service, I'm inclined to create a WCF service with a queued endpoint. This has worked well for me in the past when the client machines were actually other servers and few in number.
I'm concerned about doing this with many client machines primarily because I think it will be difficult to monitor so many outgoing queues to confirm that no traffic is being trapped on the client machines.
Has anyone tried doing this before?
If so, would you recommend it? Why or why not?
Even if you haven't done it, can you think of other pitfalls beside the operational issue of monitoring all those outbound queues?
Your question may be better worded as:
Should a system be rolled out with many nodes all using MSMQ?
If so, this is the essence of messaging, and it is exactly what such systems are designed for, irrespective of whether they are JMS, Apache ActiveMQ, WebSphere MQ, SonicMQ, or MSMQ.
Also, "traffic is being trapped on the client machines": how do you define trapped? Remember, the application may be quite happy for a message to sit locally for days before being forwarded to the remote host. Messaging systems generally have timeouts both for reaching the destination and for the destination to process the message.
I think you will be fine.

MSMQ concurrent processing design issue

I have a Windows service written in C# that reads from MSMQ and, based on the type of each message, assigns it to an agent that processes the message in a worker thread. The application starts with no agents; they are created dynamically at runtime as messages arrive in MSMQ.
If the agent's worker thread is busy doing work, the message is queued to the agent's local queue. So far so good. But if for some reason the service is stopped, the contents of the local queues are lost.
I am trying to figure out the best way to handle this scenario. Right now the local queues are System.Collections.Concurrent.ConcurrentQueue<T> instances. I could probably use a SQL CE database or some other persistent storage, but I am worried about performance. The other idea is to read from MSMQ only when agents are ready to process a message, but the problem is that I don't know in advance which messages the MSMQ will contain.
What possible approaches can I take on this issue?
Your design basically implements the following pattern: http://www.eaipatterns.com/MessageDispatcher.html
However, rather than using actual messaging you are choosing to implement the dispatcher in multithreaded code.
Rather, each processing agent should be an autonomous process with its own physical message queue. This is what will provide message durability in case of failure. It also allows you to scale simply by hosting more instances of the processing agent.
I have built a similar system that depends on Redis. The idea is that it provides fast, in-memory data access isolated from the rest of the application, and it will not shut down when my service does. Furthermore, it eventually persists my data to disk, so I get a good compromise between reliability and speed.
If you designed it so that each client read from its own message queue that would be hosted in Redis, you could keep the queue independent from the service's downtime, and each worker's load apportioned when you next start the service.
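For what it's worth, here's a bare-bones sketch of such a per-agent Redis queue, assuming StackExchange.Redis as the client library (key names are placeholders); a Redis list survives the Windows service being stopped.

    using StackExchange.Redis;

    class RedisAgentQueue
    {
        readonly IDatabase _db;
        readonly string _key;   // e.g. "agent:invoice:pending" (placeholder)

        public RedisAgentQueue(ConnectionMultiplexer redis, string agentName)
        {
            _db = redis.GetDatabase();
            _key = $"agent:{agentName}:pending";
        }

        // Enqueue on the right, dequeue on the left: a simple FIFO that lives in
        // Redis, so it is still there after the Windows service restarts.
        public void Enqueue(string message) => _db.ListRightPush(_key, message);

        public string Dequeue()
        {
            RedisValue value = _db.ListLeftPop(_key);
            return value.IsNullOrEmpty ? null : (string)value;
        }
    }

Note that a plain left-pop can still lose a message if a worker dies mid-processing; moving the item to a per-worker in-flight list (LMOVE) is the usual hardening step.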
Why don't you simply create two new MSMQ queues to receive the messages for AgentA and AgentB, and create a new agent that (transactionally) fetches each command from the main queue and dispatches the message to the proper agent queue?
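A sketch of that transactional dispatcher, assuming the main and per-agent queues are local transactional MSMQ queues and that the message label identifies the target agent (both assumptions made for illustration):

    using System;
    using System.Messaging;

    class QueueDispatcher
    {
        // All queue paths are placeholders; the queues must be transactional.
        static readonly MessageQueue Main   = new MessageQueue(@".\private$\main");
        static readonly MessageQueue AgentA = new MessageQueue(@".\private$\agent-a");
        static readonly MessageQueue AgentB = new MessageQueue(@".\private$\agent-b");

        public static void DispatchOne()
        {
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                try
                {
                    // Receive and forward inside one transaction: if anything fails,
                    // the message stays safely in the main queue.
                    Message message = Main.Receive(TimeSpan.FromSeconds(5), tx);
                    MessageQueue target = message.Label == "AgentA" ? AgentA : AgentB;
                    target.Send(message, tx);
                    tx.Commit();
                }
                catch (MessageQueueException)
                {
                    tx.Abort();   // timeout or failure: nothing is lost
                }
            }
        }
    }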

Competing Consumers from load balancing application in MassTransit/RabbitMQ

I have just finished creating an API where the requests from the API are forwarded to a back-end service via MassTransit/RabbitMQ using the Request/Response pattern. We are now looking at pushing this into production, and we want to have multiple instances of the application (both API and service) running on different servers, with a load balancer distributing the requests between them.
This leaves us in a position where we could potentially lose all of the messages if one of the servers is taken out of the pool for any reason. I am looking at creating a RabbitMQ cluster between the servers (each server has a local install) and was wondering how I would go about setting up the competing consumers in this instance.
Does RabbitMQ or MassTransit handle this so that only one consumer will receive the request, or will all consumers receive it and attempt to respond? Also, with the RabbitMQ cluster, how will MassTransit/RabbitMQ handle a node failing?
You should take a look at this document.
http://www.rabbitmq.com/distributed.html
Explains the common distributed scenarios quite nicely. For your scenario I think federation would be a better fit than clustering. If you go for clustering you should look at mirrored queues.
If all you need is performance, you are better off having a single server handle your message queuing, with the other servers connecting to it to produce/consume messages.
I don't know how MassTransit works, but if Request/Response is used you should get a single delivery of each message to a single consumer; if the message is not acked (e.g. the consumer crashes), another consumer should pick it up.
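To expand on the single-delivery point for MassTransit specifically: as long as every service instance configures the same receive endpoint (queue) name, RabbitMQ round-robins messages between them, so each request reaches only one instance. A rough sketch against a recent MassTransit API (the exact configuration calls have shifted between versions), with made-up contract and queue names:

    using System;
    using System.Threading.Tasks;
    using MassTransit;

    // Hypothetical request/response contracts; your real ones will differ.
    public record CheckOrderStatus(Guid OrderId);
    public record OrderStatusResult(Guid OrderId, string Status);

    public class CheckOrderStatusConsumer : IConsumer<CheckOrderStatus>
    {
        public async Task Consume(ConsumeContext<CheckOrderStatus> context)
        {
            // ... do the back-end work, then reply to the requesting API instance.
            await context.RespondAsync(new OrderStatusResult(context.Message.OrderId, "Shipped"));
        }
    }

    class ServiceHostSketch
    {
        public static async Task<IBusControl> StartAsync()
        {
            var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
            {
                cfg.Host("localhost", "/", h =>
                {
                    h.Username("guest");
                    h.Password("guest");
                });

                // Every service instance uses the SAME queue name, so RabbitMQ
                // round-robins requests and each message reaches only one instance.
                cfg.ReceiveEndpoint("check-order-status", e =>
                {
                    e.Consumer<CheckOrderStatusConsumer>();
                });
            });

            await bus.StartAsync();
            return bus;
        }
    }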

Is this the correct approach to poll the database?

I am creating a WCF service (CALLER) for Azure. The service (CALLER) calls async methods of a third-party service (EXTN). The third-party service calls the callback methods of another WCF service (LISTNER) hosted by me on Azure. CALLER enters the service details in the database with status = PENDING.
In the callback service (LISTNER) I am updating the status of the request as COMPLETED/FAILED in the database.
But I want the CALLER to be notified when the status is updated in the SQL Azure database.
I am thinking of creating a worker thread which will poll the database periodically to check the status update and notify the CALLER about this.
Is there any other better / efficient alternative to this approach?
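For reference, the polling worker described above is just a background loop; here is a minimal sketch with SqlClient, where the table and column names are made up:

    using System;
    using System.Data.SqlClient;
    using System.Threading;
    using System.Threading.Tasks;

    class StatusPoller
    {
        // Polls the request row until LISTNER has marked it COMPLETED or FAILED.
        public static async Task<string> WaitForStatusAsync(
            string connectionString, int requestId, CancellationToken token)
        {
            while (true)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Status FROM Requests WHERE RequestId = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", requestId);
                    await connection.OpenAsync(token);
                    var status = await command.ExecuteScalarAsync(token) as string;

                    if (status == "COMPLETED" || status == "FAILED")
                        return status;
                }

                // Wait before polling again; Task.Delay throws if the token is cancelled.
                await Task.Delay(TimeSpan.FromSeconds(30), token);
            }
        }
    }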
The features you're looking for are implemented in the AppFabric service bus.
Not really. There is another way (not sure it works on Azure): use the integrated SQL Server message queuing (Service Broker), with a trigger that queues a message on updates, and have your thread poll that queue continuously (there is a way to have the read WAIT for an entry in the queue, so you issue one read and it blocks until something arrives). But besides that...
...no, not from the database level.
I have a similar application and I handle it with a notification trigger OUTSIDE the database (i.e. notifications are sent from the business logic when values change).
Another option is to use queues and have the Caller poll for notification messages from the Listener. The Service Bus can also be used, by having the Caller subscribe to event notifications sent from the Listener. In your scenario, though, it doesn't provide much more than queues do; if you are behind a firewall, the Service Bus uses polling as well.
Queues are probably the most efficient way to send notifications; that's why they were created in the first place. The Service Bus is used to create semi-permanent connections between different services and provides a lot more features than simple message passing. That makes it a bit less flexible and requires a bit more programming. Its billing model (charge per SB connection) reflects this too. You are not expected to use a lot of SB connections.
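If you do go the Service Bus route, the Listener-publishes / Caller-subscribes flow looks roughly like this with the current Azure.Messaging.ServiceBus SDK (the AppFabric-era API was different; the topic and subscription names are placeholders):

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    class StatusNotifications
    {
        // LISTNER side: publish a completion event when it updates the row.
        public static async Task PublishAsync(string connectionString, int requestId, string status)
        {
            await using var client = new ServiceBusClient(connectionString);
            ServiceBusSender sender = client.CreateSender("request-status");   // topic (placeholder)
            await sender.SendMessageAsync(new ServiceBusMessage($"{requestId}:{status}"));
        }

        // CALLER side: wait for the next notification on its subscription.
        public static async Task<string> ReceiveOneAsync(string connectionString)
        {
            await using var client = new ServiceBusClient(connectionString);
            ServiceBusReceiver receiver = client.CreateReceiver("request-status", "caller"); // subscription (placeholder)
            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync(TimeSpan.FromMinutes(1));
            if (message == null)
                return null;                          // nothing arrived within the wait
            await receiver.CompleteMessageAsync(message);
            return message.Body.ToString();
        }
    }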
