I have a business application consisting of a master application and multiple slave applications (geographically distributed) connected to each other. All the slave applications interact through the master application, and the master application must handle all incoming requests as well as respond to previous ones.
We're dealing with huge volumes of data being transferred between the master and child sites, so I need to handle all the incoming requests and responses simultaneously and efficiently. To be precise, I want all the nodes to communicate in a fail-safe manner.
I was looking at MSMQ for this requirement. I'd like your opinions on how best this can be handled in .NET using MSMQ, or any other proprietary or open-source message queuing tool.
Thank you.
Regards
NLV
MSMQ is a reliable messaging technology and will be able to achieve what you described above. If you look into the WCF offerings, fundamentally all the messaging types will let you handle concurrent requests quite efficiently. The good thing about using WCF is that through configuration you can tweak the binding, the transport protocol, and the number of concurrent requests or threads, and keep adjusting until you find what is optimal for your situation. It also takes care of the plumbing code for you, and you don't necessarily have to write code that is tied specifically to MSMQ.
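For illustration, here is a minimal sketch of hosting a one-way service over MSMQ with WCF's NetMsmqBinding. The contract, queue name, and settings are invented for this example, and the private queue would have to be created beforehand:

```csharp
// Hypothetical sketch: a WCF service hosted over MSMQ via NetMsmqBinding.
// Assumes MSMQ is installed and a transactional private queue named
// "orders" already exists on the host.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Queued (MSMQ) bindings require one-way operations.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public void SubmitOrder(string orderXml)
    {
        Console.WriteLine("Processing: " + orderXml);
    }
}

class Program
{
    static void Main()
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            Durable = true,     // messages survive restarts (written to disk)
            ExactlyOnce = true  // transactional, fail-safe delivery
        };

        using (var host = new ServiceHost(typeof(OrderService)))
        {
            host.AddServiceEndpoint(typeof(IOrderService), binding,
                "net.msmq://localhost/private/orders");
            host.Open();
            Console.WriteLine("Listening on the orders queue...");
            Console.ReadLine();
        }
    }
}
```

The same endpoint could later be switched to a different binding purely through configuration, which is the flexibility described above.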
I'm facing an issue in a highly available Windows service I developed with a master/slave setup.
Context:
The service synchronizes data to two endpoints. One endpoint is synced to a local database, and one is external. The local database is duplicated on both machines, so both master and slave need to sync it. The external endpoint only needs to be synced once.
The master will by default sync to the external service, and the slave will take over when the master is down. When the master comes back up while the slave is still synchronizing to the external endpoint, the master will ask the slave to finish a portion of the work and report when it is done, so the master can continue with the remaining work.
All this needs to happen asynchronously; I do not want the program to stop and wait for the other instance to respond (for example, while the slave is still handling the data).
I already implemented all the logic for this.
Setup:
Two Windows services running on two different machines.
Currently the communication is done over Named Pipes.
The problem:
Named pipes aren't reliable enough for the throughput involved. The connection also often crashes, and named pipes aren't made for reconnecting, closing, and reopening many times. I also find that it just 'hangs' a lot when sending/receiving messages. Retrying sometimes works, but I don't think I should have to retry. I need reliable communication between the two instances.
Solutions:
I've been looking for an alternative to named pipes, but I can't find a solution I'm convinced would work, mostly because a lot of the technologies are aimed at communication between a service and a client over HTTP.
WCF over MSMQ is also not what I need, because I only want communication to happen when both instances are online. WCF in general is also more focused on one endpoint receiving data and sending a response. I need bidirectional communication, so both instances need to be able to send and receive messages at any time.
I think my best option is SignalR, but I'm also not convinced.
Have you looked at MassTransit over RabbitMQ?
We have been using them together very successfully both for intra-service and client/service communication for a few years now.
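As a rough sketch of what that looks like (the message type and endpoint names are invented, and the exact Host() overloads vary between MassTransit versions): each instance runs its own receive endpoint, so both sides can send and receive at any time.

```csharp
// Hedged sketch: bidirectional messaging between two service instances
// using MassTransit over RabbitMQ. Types and endpoint names are made up.
using System;
using System.Threading.Tasks;
using MassTransit;

public class SyncProgress
{
    public Guid BatchId { get; set; }
    public int ItemsDone { get; set; }
}

public class SyncProgressConsumer : IConsumer<SyncProgress>
{
    public Task Consume(ConsumeContext<SyncProgress> context)
    {
        Console.WriteLine($"Batch {context.Message.BatchId}: {context.Message.ItemsDone} items done");
        return Task.CompletedTask;
    }
}

class Program
{
    static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost"); // broker address; adjust for your setup

            // Give each instance its own queue (e.g. "sync-master" here,
            // "sync-slave" on the other machine) so both can publish and
            // consume independently at any time.
            cfg.ReceiveEndpoint("sync-master", e =>
            {
                e.Consumer<SyncProgressConsumer>();
            });
        });

        await bus.StartAsync();
        await bus.Publish(new SyncProgress { BatchId = Guid.NewGuid(), ItemsDone = 42 });
        Console.ReadLine();
        await bus.StopAsync();
    }
}
```

Because RabbitMQ sits between the two services, either side can go down and come back without the other hanging on a dead connection, which addresses the named-pipe problems described above.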
I wanted to create my own mini-project with fictitious high-volume data (options trading) to be consumed by a WPF application, to better understand the design concepts and considerations that go into a real-time system, and to learn what sort of techniques and approaches are used. Please don't mention third-party solutions like Tibco; this is for learning purposes. My intention is for the WPF application to refresh its UI every 5 seconds.
When designing my fictitious market data server, given that high-volume performance is a criterion, a few quick ideas come to mind: multicast UDP (is this too low-level / a bad direction?), a messaging architecture using a queue, e.g. MSMQ or RabbitMQ, or a remote service host the client app sends requests to, e.g. via a WCF TCP binding or a web service.
One thought I had was that the clients maintain their own local queues and subscribe to topics that the pricing server broadcasts using a messaging solution. Or maybe the server would broadcast the data to all clients equally and leave it to the clients to filter and collate the data locally? In people's experience, what are the pros and cons of each approach, and is there any other approach I missed? I guess it comes down to: should the client be pulling data, or should the server be pushing it out to them?
The other question is: what wire format would these messages take? I'm primarily used to working with rich business object classes, separated into a repository layer, a domain model (with methods for validation and workflow logic), and a simple service layer. Could I still leverage this approach and maintain my performance goals, or would I need to create a more lightweight data payload format?
I would start designing such a system from the higher layers before going down to network-level optimizations.
RabbitMQ provides different types of exchanges for routing messages. Broadcasting all messages to every client (a fanout exchange) is marginally faster on the RabbitMQ server side, since there is no routing-key matching to do, but it will only work efficiently for low message volumes and where the clients are connected via high-speed links (e.g. local gigabit Ethernet). Using a direct or topic exchange instead may significantly lower your network delays. You can read more about exchange types on the RabbitMQ website.
Your last question is about the wire format. In theory RabbitMQ allows any string (or even binary) payload, so it's a matter of trying to squeeze more information into fewer bytes. In my experience, as long as your messages do not exceed the network MTU, the gains from compression or a clever encoding scheme are marginal.
In general, think of how much time you are spending on each optimization and what the expected ROI is. IMO some optimizations are more useful than others. If I were you, I would look very carefully at the RabbitMQ configuration parameters. For example, see if you can set up the RabbitMQ server with per-process message queues.
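To make the exchange-type point concrete, here is a small publisher sketch using the official RabbitMQ.Client library. The exchange name and routing-key scheme are invented; each WPF client would bind its own queue with a pattern like "options.SPX.#" to receive only the instruments it cares about:

```csharp
// Illustrative sketch: publishing ticks to a topic exchange so clients
// subscribe only to the instruments they care about. Names are made up.
using System.Text;
using RabbitMQ.Client;

class TickPublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Topic exchange: routing keys like "options.SPX.20240621.call"
            // let the broker filter, instead of every client doing it.
            channel.ExchangeDeclare("market-data", ExchangeType.Topic);

            var payload = Encoding.UTF8.GetBytes("SPX 4400C bid=12.3 ask=12.5");
            channel.BasicPublish(
                exchange: "market-data",
                routingKey: "options.SPX.20240621.call",
                basicProperties: null,
                body: payload);
        }
    }
}
```

Note the payload here is a plain string purely for illustration; per the point above, agonizing over the encoding only pays off once messages approach the MTU.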
I have just finished creating an API where requests are forwarded to a back-end service via MassTransit/RabbitMQ using the request/response pattern. We are now looking at pushing this into production, and want to have multiple instances of the application (both API and service) running on different servers, with a load balancer distributing the requests between them.
This leaves us in a position where we could potentially lose all of the messages if one of the servers is taken out of the pool for any reason. I am looking at creating a RabbitMQ cluster between the servers (each server has a local install) and am wondering how I would go about setting up competing consumers in this scenario.
Does RabbitMQ or MassTransit handle this so that only one consumer will receive the request, or will all consumers receive it and attempt to respond? Also, with the RabbitMQ cluster, how will MassTransit/RabbitMQ handle a node failing?
You should take a look at this document: http://www.rabbitmq.com/distributed.html
It explains the common distributed scenarios quite nicely. For your scenario I think federation would be a better fit than clustering. If you go for clustering, you should look at mirrored queues.
If all you need is performance, you are better off having a single server handle your message queuing, with the other servers connecting to it to produce/consume messages.
I don't know the details of how MassTransit works, but if request/response is used you should get a single delivery of each message to a single consumer; if the message is not acked (e.g. the consumer crashes), another consumer should pick it up.
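A sketch of the competing-consumer side, with invented names: the key point is that every service instance configures the same receive endpoint (queue) name, so RabbitMQ round-robins each request to exactly one of them, and that instance responds via the requester's response address.

```csharp
// Hedged sketch: competing consumers for request/response in MassTransit.
// All instances use the SAME queue name, so each request is delivered to
// only one of them. Message and queue names are made up.
using System;
using System.Threading.Tasks;
using MassTransit;

public class CheckOrder
{
    public Guid OrderId { get; set; }
}

public class OrderStatus
{
    public Guid OrderId { get; set; }
    public string Status { get; set; }
}

public class CheckOrderConsumer : IConsumer<CheckOrder>
{
    public async Task Consume(ConsumeContext<CheckOrder> context)
    {
        // The instance that received the request sends the response back
        // to the requesting API instance's response address.
        await context.RespondAsync(new OrderStatus
        {
            OrderId = context.Message.OrderId,
            Status = "Processed"
        });
    }
}

class Program
{
    static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost");
            // Same endpoint name on every service instance => competing
            // consumers; an unacked message is redelivered if one crashes.
            cfg.ReceiveEndpoint("check-order", e =>
            {
                e.Consumer<CheckOrderConsumer>();
            });
        });

        await bus.StartAsync();
        Console.ReadLine();
        await bus.StopAsync();
    }
}
```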
We're in the process of moving our .NET platform from MSMQ to ActiveMQ. We pump 30+ million persistent messages through it a day, so throughput and capacity are critical to us. The way our MSMQ-dependent applications are configured, they write to local/private queues first. Then a local service routes those messages to their respective remote queues for processing. This ensures the initial enqueue/write is fast (yes, we can also enqueue asynchronously), and messages aren't lost if the remote servers are unavailable.
We were going to use the same paradigm for ActiveMQ, but now we've decided to move to using VMs with NAS storage for most of our application servers. This greatly reduces the write performance of each message, since it's going to NAS, and I feel I need to rethink our approach to queueing. I'd like to know what is considered best practice for using ActiveMQ with persistent, high-throughput needs. Should I consider using dedicated queue servers (that aren't VMs)? That would mean all writes from the application go directly over the network. How do I deal with high-availability requirements?
Any suggestions are appreciated.
You can deploy ActiveMQ instances in a network of brokers and the topology can include local instances as well as remote instances. I have deployed topologies containing a local instance of ActiveMQ so that messages are persisted as close to the sender as possible and then the messages are forwarded to remote ActiveMQ instances based on demand. With this style of topology, I recommend configuring the network connector(s) to disallow forwarding messages from all destinations. I.e., instead of openly allowing the forwarding of messages for all destinations, you may want to narrow the number of messages forwarded using the excludedDestinations property.
As far as high availability with ActiveMQ, the master/slave configuration is designed for exactly this. It comes in three flavors depending on your needs.
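On the client side, the failover transport is what makes a master/slave pair transparent to your .NET producers and consumers. A rough sketch using the Apache.NMS.ActiveMQ client (broker host names and the queue name are placeholders):

```csharp
// Hedged sketch: a .NET producer connecting to an ActiveMQ master/slave
// pair through the failover transport, which reconnects to whichever
// broker is currently master. Host and queue names are placeholders.
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class Producer
{
    static void Main()
    {
        var factory = new ConnectionFactory(
            "failover:(tcp://broker-a:61616,tcp://broker-b:61616)");

        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
        {
            IDestination queue = session.GetQueue("orders");
            using (IMessageProducer producer = session.CreateProducer(queue))
            {
                // Persistent delivery so messages survive a broker failover.
                producer.DeliveryMode = MsgDeliveryMode.Persistent;
                producer.Send(session.CreateTextMessage("order #1"));
            }
        }
    }
}
```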
Hope that helps.
Bruce
I'm about to write a "server" application that is responsible for talking to external hardware. The application shall handle requests from clients. The clients send a message to the server, and if the server is busy with the hardware, new messages shall be stored in a queue and processed later.
The client shall also be able to cancel a request (if it is still in the server's queue). When the server application is finished with the hardware, it shall be able to send the result back to the client that requested the job.
The server and client applications may or may not be on the same PC. All development is done in .NET (C#) 2005.
What is the best way to solve this communication problem?
MSMQ? SOAP? WCF? Remoting? Other?
Assuming you can use .NET 3.0 or greater, you probably want WCF as the communications channel. The interface is consistent, but it allows you to use an appropriate transport mechanism depending on where the client and server are in relation to each other, so you can choose SOAP, MSMQ, a binary format, or others as appropriate (and can roll your own if needed). It also covers the need for two-way communication.
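To illustrate swapping transports, here is a sketch (the contract and addresses are invented) where one WCF contract is exposed over named pipes, TCP, and HTTP just by choosing different bindings:

```csharp
// Illustrative sketch: one WCF contract exposed over three transports.
// The service code does not change; only the bindings and addresses differ.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IHardwareService
{
    [OperationContract]
    string SubmitJob(string request);
}

public class HardwareService : IHardwareService
{
    public string SubmitJob(string request)
    {
        return "queued:" + request;
    }
}

class Hosting
{
    static void Main()
    {
        var host = new ServiceHost(typeof(HardwareService));

        // Same-machine clients: named pipes.
        host.AddServiceEndpoint(typeof(IHardwareService),
            new NetNamedPipeBinding(), "net.pipe://localhost/hardware");
        // Cross-machine clients: binary over TCP.
        host.AddServiceEndpoint(typeof(IHardwareService),
            new NetTcpBinding(), "net.tcp://localhost:9000/hardware");
        // Interop / firewall-friendly clients: SOAP over HTTP.
        host.AddServiceEndpoint(typeof(IHardwareService),
            new BasicHttpBinding(), "http://localhost:8000/hardware");

        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```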
Queuing the messages at the server should probably be regarded as a separate problem, especially given the need to remove queued messages.
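As a sketch of that separate problem (all names invented), the server-side queue just needs to support removing a job that has not started yet:

```csharp
// Hedged sketch: a server-side job queue that supports cancelling a
// request while it is still waiting. Types and names are made up.
using System;
using System.Collections.Generic;

class Job
{
    public Guid Id = Guid.NewGuid();
    public string Payload;
}

class JobQueue
{
    private readonly LinkedList<Job> pending = new LinkedList<Job>();
    private readonly object gate = new object();

    public void Enqueue(Job job)
    {
        lock (gate) pending.AddLast(job);
    }

    // Returns true if the job was still queued and has been removed;
    // false if it is already running or finished.
    public bool Cancel(Guid jobId)
    {
        lock (gate)
        {
            for (LinkedListNode<Job> node = pending.First; node != null; node = node.Next)
            {
                if (node.Value.Id == jobId)
                {
                    pending.Remove(node);
                    return true;
                }
            }
            return false;
        }
    }

    // Called by the worker loop when the hardware becomes free.
    public Job DequeueOrNull()
    {
        lock (gate)
        {
            if (pending.Count == 0) return null;
            Job job = pending.First.Value;
            pending.RemoveFirst();
            return job;
        }
    }
}
```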
If the client and server processes are on the same machine, I think named pipes will give you the fastest raw byte transfer rate.
If the processes are on different machines, you'd need a sockets-based approach.
Remoting is reportedly very slow. Depending on the target OSes you're planning to deploy on, you may have options like WCF et al. However, the overhead of these protocols is something you may want to look at when deciding.
Remoting
If all development is done in .NET 2005, Remoting is the best way to go.
MSMQ would make some sense, though there are then security and deployment considerations. You could look at a service bus (such as NServiceBus or MassTransit), and there's also SQL Server Service Broker, which could help (and can also be used by a service bus as the transport).
WCF would be another thing to look at; however, that's really the across-network transport, so you'd probably still want the WCF calls to put a message on the server queue.
I don't recommend Remoting, because it's hard to maintain a separation of concerns, and before you know it you're developing a really chatty interface without realising it. Remote calls are relatively expensive, so you should try to keep the messages fairly coarse-grained. WCF would be my recommendation, not least because you can set it up to use an HTTP transport and avoid a lot of deployment and security headaches.
The .NET Framework provides several ways to communicate with objects in different application domains, each designed with a particular level of expertise and flexibility in mind. For example, the growth of the Internet has made XML Web services an attractive method of communication, because XML Web services are built on the common infrastructure of the HTTP protocol and SOAP formatting, which uses XML. These are public standards, and can be used immediately with current Web infrastructures without worrying about additional proxy or firewall issues.
Not all applications should be built using some form of XML Web service, however, if only because of the performance issues related to using SOAP serialization over an HTTP connection.
Choosing Communication Options in .NET helps you decide which form of interobject communication you want for your application.