In college we studied IBM's MQSeries, middleware that you could send messages to, which would then be persisted in a queue. MQSeries had what was called guaranteed message delivery, meaning that once a message made it to the queue, it would be persisted even if the server hosting the queue was turned off and turned back on again.
Does Microsoft have a similar technology that works with C# and Sharepoint?
Yes, it is called MSMQ (Microsoft Message Queuing).
Here is the official Microsoft FAQ for MSMQ.
If you'd like to go open source, have a look at ActiveMQ from the Apache Software Foundation.
ActiveMQ is cross-platform. Client libraries are available for C# and other languages: http://activemq.apache.org/cross-language-clients.html
Since you're talking about SharePoint, it implies your back end is SQL Server. SQL Server has its own reliable messaging technology, Service Broker. The main advantage over MSMQ is that it is completely integrated into the database engine, which means one single product to deploy and maintain, consistent backup/restore, integration with SQL Server's high-availability/disaster-recovery features (mirroring, SQL clustering), and language/API integration with the database (you can run SELECT over your queues!). Because it eliminates the need for a two-phase-commit DTC transaction between MSMQ and your database to process each message, it offers significantly higher throughput. Scalability and capacity are also significantly higher: MSMQ has a 4 GB queue limit, while Service Broker has a 2 GB per-message limit and the queue size is bounded only by the total disk capacity, i.e. the database limits. The main drawback is the lack of a client-side programming API like the WCF MSMQ channel; with Service Broker you have to write T-SQL using verbs like SEND and RECEIVE.
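For a flavor of what that looks like, here is a minimal send sketch issued from C# through ADO.NET; the service, contract, and message-type names and the connection string are invented for illustration, and the receiving side would similarly run a RECEIVE (usually inside WAITFOR) against the target queue.

```csharp
using System.Data.SqlClient;

// Minimal sketch of sending a Service Broker message from C#.
// All Service Broker object names and the connection string are illustrative.
const string sendSql = @"
    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [//Example/SenderService]
        TO SERVICE   '//Example/ReceiverService'
        ON CONTRACT  [//Example/WorkContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [//Example/WorkItem] (@payload);";

using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=true"))
using (var cmd = new SqlCommand(sendSql, conn))
{
    cmd.Parameters.AddWithValue("@payload", "<workItem id=\"42\" />");
    conn.Open();
    cmd.ExecuteNonQuery();   // the message is now queued inside the database
}
```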
I wanted to create my own mini-project with fictitious high-volume data (options trading), consumed by a WPF application, to better understand the design concepts and considerations that go into a real-time system, and to learn what sorts of techniques and approaches are used. Please, no mention of third-party solutions like Tibco - this is for learning purposes. My intention is for the WPF application to refresh its UI every 5 seconds.
When designing my fictitious market data server, given that high-volume performance is a criterion, a few quick ideas come to mind: multicast UDP (is this too low-level / a bad direction?), a messaging architecture using a queue, e.g. MSMQ or RabbitMQ, or a remote service host that the client app sends requests to, e.g. via a WCF TCP binding or a web service.
One thought I had was that the clients maintain their own local queues and subscribe to topics that the pricing server broadcasts via a messaging solution. Or maybe the server should broadcast the data to all clients equally and leave it to the clients to filter and collate the data locally? In people's experience, what are the pros and cons of each approach, and is there any other approach I've missed? I guess it comes down to: should the client be pulling data, or should the server be pushing it out to them?
The other question is: what wire format would these messages take? I'm primarily used to working with rich business object classes, separated into a repository layer, a domain model (with methods for validation and workflow logic), and a simple service layer. Could I still leverage this approach and maintain my performance goals, or would I need a more lightweight data payload format?
I would start designing such a system from the higher layers before going down to network-level optimizations.
RabbitMQ provides different types of exchanges for routing messages. Broadcasting all messages to every client (a fanout exchange) is marginally faster on the RabbitMQ server side, but it will only work efficiently for low message volumes and when the clients are connected via high-speed links (e.g. local gigabit Ethernet). Using a direct or topic exchange instead may significantly lower your network delays. You can read more about exchange types on the RabbitMQ website.
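As a rough sketch of the difference (using the RabbitMQ .NET client; the exchange name, routing keys, and payload are invented), a topic exchange lets each client bind its own queue only to the instruments it cares about instead of receiving the full firehose:

```csharp
using System.Text;
using RabbitMQ.Client;

// Sketch: route price updates through a topic exchange so each client
// receives only the symbols it subscribed to. All names are illustrative.
var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.ExchangeDeclare(exchange: "prices", type: ExchangeType.Topic);

    // Server side: publish each tick with a routing key identifying the instrument.
    var body = Encoding.UTF8.GetBytes("MSFT bid=123.45 ask=123.47");
    channel.BasicPublish(exchange: "prices", routingKey: "options.MSFT",
                         basicProperties: null, body: body);

    // Client side: a private, server-named queue bound only to the symbols of interest.
    var queueName = channel.QueueDeclare().QueueName;
    channel.QueueBind(queue: queueName, exchange: "prices", routingKey: "options.MSFT");
}
```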
Your last question is about the wire format. In theory RabbitMQ allows any string (or even binary) payload, so it's a matter of trying to squeeze more information into fewer bytes. In my experience, as long as your messages don't exceed the network MTU, the gains from compression or a clever encoding scheme are marginal.
In general, think about how much time you are spending on each optimization and what the expected ROI is. IMO some optimizations are more useful than others. If I were you, I would look very carefully at the RabbitMQ configuration parameters; for example, see whether you can set up the RabbitMQ server with per-process message queues.
I am in the initial phase of investigating a message queueing solution for C# and I'd appreciate any experience, lessons learned, war stories, etc. I also have a couple of specific questions about MSMQ's suitability for our configuration.
Briefly, we have a distributed architecture: a variety of server-side events generate work items that are retrieved by our deployed clients. Those clients connect to the server, retrieve the work, and process it, then go back to "waiting for work to do."
A couple of relevant details about these clients:
We have a few hundred client installations today, and need to be ready for growth to tens of thousands in the next 18 months.
The clients never submit tasks to the server -- they are "pull only."
Payloads to the clients are <= 1K each.
We don't need heavyweight authentication or encryption of the traffic (though that would be a nice bonus)
Our clients run on a variety of MS operating systems, >= WinXP SP1. Some are part of Active Directory forests or Windows domains; others are in ad hoc workgroups.
Mostly, the clients are idle. We want to efficiently "wait for work" then respond to the work as quickly as possible (i.e., we want clients to receive the work item ASAP after it's queued)
Occasionally, our clients disappear from the internet for a time: their machines are shut off for a day, or overnight, etc. We want work items to arrive when they're back online. Put another way, we do need message reliability.
We control all of the client and server code, but not the client environments (though our installers do install prerequisite software like .NET 3.5, if it's not there already)
So, given the above - will MSMQ work "naturally" for us? I've not found a clear answer to how (or if) MSMQ handles clients listening for messages when they aren't in the domain/active directory and when they are connecting over the internet. So far, my reading on MSMQ feels pretty "enterprise-centric" - is our non-enterprise requirement going to be a problem with MSMQ?
What other solutions have you used in the past in similar setups?
And, of course, what other questions should I be asking? ;-)
Thanks!
RabbitMQ sounds like the solution for you.
RabbitMQ
RabbitMQ .NET/WCF Library
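For what it's worth, a pull-only client that waits on a durable queue looks roughly like this with the RabbitMQ .NET client (this assumes the 6.x API; the host and queue names are invented). Messages published as persistent simply wait in the queue while a client is offline and are delivered when it reconnects:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Sketch of a client that sits idle and waits for work items.
// Host name and queue name are illustrative only.
var factory = new ConnectionFactory { HostName = "work.example.com" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // A durable queue survives broker restarts; persistent messages queued
    // while this client was offline are delivered as soon as it reconnects.
    channel.QueueDeclare(queue: "work-items", durable: true, exclusive: false,
                         autoDelete: false, arguments: null);

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, args) =>
    {
        var payload = Encoding.UTF8.GetString(args.Body.ToArray());
        // ... process the ~1K work item here ...
        channel.BasicAck(args.DeliveryTag, multiple: false);   // ack only after successful processing
    };
    channel.BasicConsume(queue: "work-items", autoAck: false, consumer: consumer);

    Console.ReadLine();   // keep the process alive while it waits for work
}
```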
If you are looking at a central server on the Internet, with queues holding messages waiting for remote clients to read them, then MSMQ is not the product for you (much as it pains me to say that).
MSMQ cannot pull messages over the Internet using HTTP.
You would have to open up port 135 and use the RPC protocol and that is not necessarily a great idea on the Internet.
Cheers
John Breakwell
We're in the process of moving our .NET platform from MSMQ to ActiveMQ. We pump 30+ million persistent messages through it a day, so throughput and capacity are critical to us. The way our MSMQ-dependent applications are configured is that they write to local/private queues first. Then we have a local service that routes those messages to their respective remote queues for processing. This ensures the initial enqueue/write is fast (yes, we can also enqueue asynchronously), and messages aren't lost if the remote servers are unavailable.
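For concreteness, the local-first write looks roughly like this with System.Messaging (the queue path and payload are just examples); the separate routing service then moves the message on to the remote queue:

```csharp
using System.Messaging;

// Sketch: enqueue to a local transactional private queue first; a local
// routing service forwards it to the remote queue later. Path is illustrative.
const string localQueuePath = @".\private$\outbound";

if (!MessageQueue.Exists(localQueuePath))
    MessageQueue.Create(localQueuePath, true);   // true = transactional queue

using (var queue = new MessageQueue(localQueuePath))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    var message = new Message("order #123")
    {
        Recoverable = true   // persist to disk so the message survives a restart
    };
    queue.Send(message, tx);
    tx.Commit();             // the write is local and fast; routing happens out of band
}
```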
We were going to use the same paradigm for ActiveMQ, but now we've decided to move to using VMs with NAS storage for most of our application servers. This greatly reduces the write performance for each message, since it's going to NAS, and I feel I need to rethink our approach to queueing. I'd like to know what is considered best practice for using ActiveMQ with persistent, high-throughput needs. Should I consider using dedicated queue servers (that aren't VMs)? That would mean all writes from the application go directly over the network. And how do I deal with high-availability requirements?
Any suggestions are appreciated.
You can deploy ActiveMQ instances in a network of brokers and the topology can include local instances as well as remote instances. I have deployed topologies containing a local instance of ActiveMQ so that messages are persisted as close to the sender as possible and then the messages are forwarded to remote ActiveMQ instances based on demand. With this style of topology, I recommend configuring the network connector(s) to disallow forwarding messages from all destinations. I.e., instead of openly allowing the forwarding of messages for all destinations, you may want to narrow the number of messages forwarded using the excludedDestinations property.
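For reference, that exclusion is configured on the networkConnector in the local broker's activemq.xml; roughly along these lines (the connector name, remote URI, and destination pattern are invented):

```xml
<!-- Sketch: forward messages on demand to the remote broker, but never
     forward anything from the purely local queues (names are illustrative). -->
<networkConnectors>
  <networkConnector name="to-central" uri="static:(tcp://central-broker:61616)">
    <excludedDestinations>
      <queue physicalName="local.>"/>
    </excludedDestinations>
  </networkConnector>
</networkConnectors>
```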
As far as high availability with ActiveMQ, the master/slave configuration is designed for exactly this. It comes in three flavors depending on your needs.
Hope that helps.
Bruce
I'm about to write a "server" application that is responsible for talking to external hardware. The application shall handle requests from clients. The clients send a message to the server, and if the server is busy with the hardware, new messages shall be stored in a queue and processed later.
The client shall also be able to cancel a request (if it is still in the server's queue). When the server application is finished with the hardware, it shall be able to send the result back to the client that requested the job.
The server and client applications may or may not be on the same PC. All development is done in .NET (C#) 2005.
What is the best way to solve this communication problem?
MSMQ? SOAP? WCF? Remoting? Other?
Assuming you can use .NET 3.0 or greater, then you probably want to use WCF as the communications channel - the interface is consistent, but it allows you to use an appropriate transport mechanism depending on where the client and server are in relation to each other, so you can choose SOAP or MSMQ or a binary format or others as appropriate (and can roll your own if needed). It also covers the need for two-way communication.
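As a minimal sketch of that idea (the contract, service, and endpoint addresses are all invented), the same service contract can be exposed over a fast binary TCP binding on the LAN or a SOAP/HTTP binding across the network:

```csharp
using System;
using System.ServiceModel;

// Sketch: one contract for submitting and cancelling hardware jobs,
// hosted over whichever transport suits the deployment. Names are illustrative.
[ServiceContract]
public interface IHardwareJobService
{
    [OperationContract]
    Guid SubmitJob(string jobPayload);

    [OperationContract]
    bool CancelJob(Guid jobId);
}

public class HardwareJobService : IHardwareJobService
{
    public Guid SubmitJob(string jobPayload)
    {
        // ... enqueue the job for the hardware worker ...
        return Guid.NewGuid();
    }

    public bool CancelJob(Guid jobId)
    {
        // ... remove the job from the queue if it is still pending ...
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(HardwareJobService));
        host.AddServiceEndpoint(typeof(IHardwareJobService), new NetTcpBinding(),
                                "net.tcp://localhost:9000/jobs");   // fast binary transport on the LAN
        host.AddServiceEndpoint(typeof(IHardwareJobService), new BasicHttpBinding(),
                                "http://localhost:9001/jobs");      // SOAP over HTTP across the network
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```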
Queuing the messages at the server should probably be regarded as a separate problem - especially given the need to remove queued messages.
If clients and server processes are on the same machine, I think named pipes will give you the fastest raw byte transfer rate.
If the processes are across different machines, you'd need to use a sockets-based approach.
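To give a feel for the named-pipe option, here is a minimal same-machine round trip (the pipe name and line-based message format are invented):

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

// Sketch: request/response over a named pipe on the same machine.
// The pipe name and the "RUN"/"DONE" protocol are illustrative only.
class PipeDemo
{
    static void Main()
    {
        var server = Task.Run(() =>
        {
            using (var pipe = new NamedPipeServerStream("hardware-jobs", PipeDirection.InOut))
            {
                pipe.WaitForConnection();
                using (var reader = new StreamReader(pipe))
                using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                {
                    string request = reader.ReadLine();    // e.g. "RUN job-42"
                    writer.WriteLine("DONE " + request);   // send the result back
                }
            }
        });

        using (var pipe = new NamedPipeClientStream(".", "hardware-jobs", PipeDirection.InOut))
        {
            pipe.Connect();
            using (var writer = new StreamWriter(pipe) { AutoFlush = true })
            using (var reader = new StreamReader(pipe))
            {
                writer.WriteLine("RUN job-42");
                Console.WriteLine(reader.ReadLine());      // prints "DONE RUN job-42"
            }
        }

        server.Wait();
    }
}
```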
Remoting is reportedly very slow. Based on the target OSes that you're planning to deploy the solution on, you could have options like WCF et al. However, the overhead of these protocols is something you may want to look at while deciding.
Remoting
If all development is done in .NET 2005, Remoting is the best way to go.
MSMQ would make some sense, though there are then security and deployment considerations. You could look at a service bus (such as NServiceBus or MassTransit), and there's also SQL Server Service Broker, which could help (and can also be used by a service bus as the transport).
WCF would be another thing to look at, however that's really the across-network transport, so you'd probably still want the WCF calls to put a message on the server queue.
I don't recommend Remoting, because it's hard to maintain a separation of concerns, and before you know it you're developing a really chatty interface without realising it. Remote calls are expensive in relative terms, so you should try to keep the messages fairly coarse-grained. WCF would be my recommendation, not least because you can set it up to use an HTTP transport and avoid a lot of deployment and security headaches.
The .NET Framework provides several ways to communicate with objects in different application domains, each designed with a particular level of expertise and flexibility in mind. For example, the growth of the Internet has made XML Web services an attractive method of communication, because XML Web services are built on the common infrastructure of the HTTP protocol and SOAP formatting, which uses XML. These are public standards, and can be used immediately with current Web infrastructures without worrying about additional proxy or firewall issues.
Not all applications should be built using some form of XML Web service, however, if only because of the performance issues related to using SOAP serialization over an HTTP connection.
Choosing Communication Options in .NET helps you decide which form of interobject communication you want for your application.
I am interested in using a free library with features similar to MSMQ to send/receive messages among 3 app domains in a WinForms application.
I only need the private queue functionality (No public queues or AD support)
Please provide links and some advantages/disadvantages. I am happy to open sub-questions if you need more room for finer details.
Note: Unfortunately some of my users do not have Windows XP Professional edition (so MSMQ is not available).
I looked at Apache ActiveMQ and RabbitMQ, but they seem a bit overkill for what I need to do.
http://activemq.apache.org/
http://www.rabbitmq.com/
It is possible to implement this using a singleton Queue protected by a named mutex, but I would rather not spend the time if somebody has already done it.
There is Rhino Queues. The author is considered to be a pretty good developer.
How about NServiceBus using the shared memory transport? The creator, Udi Dahan, is a well respected individual in the message based architecture space.
If it's all in the same application, then sharing a synchronized queue is what you want; have a look at the Queue.Synchronized method in MSDN, which provides you with a thread-safe queue.
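Roughly like this (the work items here are just placeholder strings); note that Queue.Synchronized only makes the individual Enqueue/Dequeue calls thread-safe, so check-then-dequeue sequences still need a lock if several consumers compete:

```csharp
using System.Collections;

// Sketch: a single queue shared by producer and consumer threads in one process.
// Queue.Synchronized returns a wrapper whose individual calls are thread-safe.
Queue workItems = Queue.Synchronized(new Queue());

// Producer side:
workItems.Enqueue("job-42");

// Consumer side (lock around the pair if several consumers compete):
lock (workItems.SyncRoot)
{
    if (workItems.Count > 0)
    {
        object item = workItems.Dequeue();
        // ... process the item ...
    }
}
```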
At some point, there is going to have to be some client specific code to accept messages. If the users need to accept messages on their machines, it sounds like a smart client situation. In the Windows world, there is a smart client which does messaging, and allows users to work with data in a disconnected way.
I can't imagine any one library which will allow messaging on different operating systems. Even if a singleton is used, there has to be some cross-platform way to send/receive the messages. It seems like the client end would always have to be OS specific.
It might be possible to try Mono on the non-windows side. There is a tool you can use to see if a third party library has dependencies which will not run in Mono. It was released with the Mono tools for Visual Studio. It is called the Mono Migration Analyzer (MoMA).
See also this system:
http://www.codeproject.com/Articles/193611/DotNetMQ-A-Complete-Message-Queue-System-for-NET
DotNetMQ is an open source Message Broker that has several features:
Persistent or non-persistent messaging.
Guaranteed delivery of persistent messages even in a system crash.
Automatic and manual routing of messages in a custom machine graph.
Supports multiple databases (MS SQL Server, MySQL, SQLite, and memory-based storage for now).
Supports "don't store, direct send" style messaging.
Supports Request/Reply style messaging.
Easy to use client library to communicate with the DotNetMQ Message Broker.
Built-in framework to easily construct RMI services upon message queues.
Supports delivering messages to ASP.NET Web Services.
GUI-based management and monitoring tool.
Easy to install, manage, and use.
You might want to look at Retlang http://code.google.com/p/retlang/