What open source message queuing software provides durability with strict ordering? (C#)

What we need is a RabbitMQ-like broker that actually works as a queue and doesn't reorder messages. Messages should stay at the head of the queue until the client dequeues them explicitly.
It seems like a very straightforward scenario, but for some reason I can't find any broker that supports it. The broker should run on Windows.

Apache Qpid is probably your best option. Of all of the message queues, this one has a number of interesting things going for it, including strict ordering.

If it is only one message that is the problem, why not write it to a file (and flush the file) before you process the message? After acking the message, delete the file.
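A tiny, hypothetical sketch of that idea in C# (the journal directory and naming are assumptions, not anything from the original answer):

    using System.IO;

    class MessageJournal
    {
        public static string Persist(string messageId, byte[] body)
        {
            string path = Path.Combine(@"C:\mq-journal", messageId + ".msg");
            using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write))
            {
                stream.Write(body, 0, body.Length);
                stream.Flush(true);      // flush to the physical disk, not just the OS cache
            }
            return path;                 // delete this file only after the broker has acked
        }
    }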
And if you are concerned about the message broker crashing, first step is to upgrade it to RabbitMQ 2.4.1 running on Erlang R14B02. Second step is to cluster it so that you have multiple servers acting as the MQ broker. And only then, change your app to track the messages that have been processed, either by timestamp or by saving message IDs. Then, if RabbitMQ requeues a message, you will already have it and will process it and remember it. When it comes around a second time you will ignore it.
You may need to set prefetch to 0 for this to work right.
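For illustration, a rough C# sketch of that tracking approach using the RabbitMQ .NET client (RabbitMQ.Client 6.x assumed). The queue name is made up, the in-memory HashSet stands in for the durable store of processed IDs described above, and it assumes publishers set a unique MessageId on every message:

    using System;
    using System.Collections.Generic;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class DedupConsumer
    {
        static void Main()
        {
            var processed = new HashSet<string>();   // stand-in for durable storage of processed message IDs

            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // One unacknowledged message at a time; tune per the prefetch note above.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var id = ea.BasicProperties.MessageId;
                if (id != null && processed.Contains(id))
                {
                    // Already handled (e.g. requeued after a broker or client restart): ack and ignore.
                    channel.BasicAck(ea.DeliveryTag, multiple: false);
                    return;
                }

                ProcessMessage(ea.Body.ToArray());    // your actual work goes here
                if (id != null) processed.Add(id);
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };

            channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);
            Console.ReadLine();
        }

        static void ProcessMessage(byte[] body) { /* parse and handle the message */ }
    }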
And there is another alternative too. You could consider writing your own RabbitMQ plugin to provide the exact behaviour that you need. Erlang may look complex at first sight, but it really isn't that hard to learn for an experienced programmer who has already learned a few languages. In particular, if you have anyone with functional programming experience in languages like Haskell or CAML, they will quickly pick up enough Erlang to do the job.
Because of Erlang's internal model of message-passing processes, RabbitMQ plugins can essentially do anything that they want. There is no specific limited plugin API that they need to conform to.
In other words, if RabbitMQ only does 99% of what you need, consider yourself lucky that with a small amount of work, you can leverage that 99% and achieve everything that you need. But in order to do this you have to get away from the idea that RabbitMQ is yet another package that you install with your system's package installation tools. In cases like yours RabbitMQ should be considered to be a mission critical tool, and you should install Erlang and RabbitMQ from source, and configure them to your needs without letting your OS limit you.

RabbitMQ also supports strict ordering as of release 2.7.0 and so should be an option for your scenario again.

Related

What does WaitForAck do in SignalR

I am considering using SignalR for server-to-client real time communication. However, I need to guarantee delivery, so I need some form of ACK in the process.
I have seen answers here with suggestions for how to do this, but I also see that the Microsoft documentation for SignalR includes a Message.WaitForAck bool property. This makes me hopeful that perhaps Microsoft baked something in to do this--but I can find no postings at all of folks using this, nor any posts explaining what it does.
Is it just an inert flag? That is, are we still on the hook to roll our own ACK system?
Thanks.
WaitForAck is an internal thing. SignalR is built around a MessageBus, where WaitForAck is used for operations that should block until they complete (or time out). An example of such an operation is adding a connection to a group.
If you want guaranteed delivery, you need to implement it yourself on top of SignalR.
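For reference, one way such an ACK layer might look. This is only a sketch written against ASP.NET Core SignalR; the hub name, method names and the in-memory dictionary are all made up, and a real implementation would persist the pending messages and re-send them on a timer or on reconnect:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.SignalR;

    public class ReliableHub : Hub
    {
        // messageId -> (connectionId, payload) for everything not yet acknowledged
        private static readonly ConcurrentDictionary<Guid, (string ConnectionId, string Payload)> Pending = new();

        public async Task SendReliable(string connectionId, string payload)
        {
            var id = Guid.NewGuid();
            Pending[id] = (connectionId, payload);
            await Clients.Client(connectionId).SendAsync("Receive", id, payload);
        }

        // Called by the client only after it has durably handled the message.
        public Task Ack(Guid messageId)
        {
            Pending.TryRemove(messageId, out _);
            return Task.CompletedTask;
        }
    }

A background pass would then walk Pending and re-deliver anything the client never acknowledged.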

How to run C# Task Parallel Library across multiple machines (like a render farm)?

I'm writing a calculation intensive program in C# using the TPL. Some preliminary benchmarking shows good reduction in computation time through using processors with more cores/threads.
However, there is a limit to how many threads are available on a single CPU (I think even the best Xeons money can buy currently have about 16).
I've been reading about how render farms with a 'grid' of multiple inexpensive CPUs in their own machines are a good way to increase the overall core count, but I have no idea how to go about implementing one. Is it implemented at the OS level with Microsoft server technology (and if so, how?), or do I also need to modify the C# code itself?
Any help or links to existing information would be greatly appreciated.
If you want to do this at scale (100s of nodes) then developing your own system is hard. You have to handle nodes becoming unavailable, data replication to each node, tracking job progress... It's a long list. You also need to consider the sort of communication you're going to require between your nodes. Remember that the cost of sending a message (data) from one thread to another is tiny compared to the cost of sending it to another machine across a network (even a fast one). You may have to completely rewrite your multithreaded application to run well on a distributed system, even to the point of using a completely different algorithm.
Hadoop
Microsoft had plans to commercialize Dryad as LINQ to HPC, but this project was sidelined a while back (I worked on it before I left Microsoft). I believe you can still get the final "public preview", but it's unsupported. The SQL team opted to work with the Hadoop/Hortonworks people on getting a Windows/Azure/.NET-friendly Hadoop distribution off the ground. As far as I know, the only thing they have delivered is HDInsight, a Hadoop service running in Azure.
There is now a Microsoft .NET SDK for Hadoop which will allow you to manage a cluster, submit jobs, etc. It does not seem to allow you to write code that executes on the Hadoop nodes. You can, however, use the Hadoop streaming API. This is fairly low level, but it is language agnostic, so you can pretty much use it to integrate map-reduce code written in any language with Hadoop. More details can be found in this blog post:
Hadoop for .NET Developers
If you want to do this at a smaller scale (10s of nodes) then I would look for something like MPI .NET. It looks like this project has been abandoned, but something similar is probably what you want.
You might look into something like Dryad - http://research.microsoft.com/en-us/projects/dryadlinq/default.aspx
It might, on the other hand, be a bit too much for your situation, but the ideas in Dryad could be simplified to your needs.
You might also look into making your own TaskScheduler, which could handle the distribution of work to agents running on other boxes, but you would have to implement simple socket client/server communication to push and pull the data.
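For what it's worth, a skeleton of such a scheduler might look like the following. It only runs tasks on the local thread pool; the remote dispatch is left as a placeholder, because a Task's delegate cannot simply be shipped to another machine, so a real version would send a description of the work plus its data to the agent instead:

    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    public class RemoteTaskScheduler : TaskScheduler
    {
        private readonly Queue<Task> _queue = new Queue<Task>();

        protected override void QueueTask(Task task)
        {
            lock (_queue) _queue.Enqueue(task);
            // A real remote scheduler would serialize a work description here and send it to
            // an agent over your socket protocol; this skeleton just drains the queue locally.
            ThreadPool.QueueUserWorkItem(_ => TryDequeueAndExecute());
        }

        private void TryDequeueAndExecute()
        {
            Task task;
            lock (_queue)
            {
                if (_queue.Count == 0) return;
                task = _queue.Dequeue();
            }
            TryExecuteTask(task);   // base-class helper that actually runs the task
        }

        protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
            => !taskWasPreviouslyQueued && TryExecuteTask(task);

        protected override IEnumerable<Task> GetScheduledTasks()
        {
            lock (_queue) return _queue.ToArray();
        }
    }

You would then pass an instance of it as the scheduler argument to Task.Factory.StartNew.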
Another, slightly odd, suggestion, which might be okay for investigating things, is the following.
Let the master of the calculation cut the problem into the number of available client computers.
Write the parameters that kick off the calculation for each client to a file on a share visible to all of them.
Let the clients watch for the file dedicated to them and kick off the calculation for their piece when it appears. The output is written back to a result file.
The server then sits and listens for all clients completing their jobs.
The files could be replaced with a database, low-level sockets, REST services, web services, etc., depending on your needs; a rough client-side sketch follows.
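This is only a hypothetical sketch of the client side of that shared-folder approach; the share path and file-naming convention are made up:

    using System;
    using System.IO;
    using System.Threading;

    class WorkerClient
    {
        static void Main()
        {
            string share = @"\\server\calcshare";
            string myJobFile = Path.Combine(share, Environment.MachineName + ".job");

            while (true)
            {
                if (File.Exists(myJobFile))
                {
                    string parameters = File.ReadAllText(myJobFile);
                    string result = RunCalculation(parameters);
                    File.WriteAllText(Path.ChangeExtension(myJobFile, ".result"), result);
                    File.Delete(myJobFile);                      // signal that the job has been taken
                }
                Thread.Sleep(TimeSpan.FromSeconds(5));           // simple polling; FileSystemWatcher also works
            }
        }

        static string RunCalculation(string parameters)
        {
            /* your actual computation goes here */
            return parameters;
        }
    }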

Write custom events that can be used by 3rd party applications

Is it possible to write custom events that can be handled by 3rd party applications?
We have an existing app, and we're finding that many people who use it are using SQL triggers to add functionality of their own when certain things happen in our app.
This has led to some instances where our own processes have slowed down due to shoddy 3rd-party triggers that block our app.
I was thinking we could make this easier for 3rd party devs if we could raise events that they could handle in their own services or apps instead of having to use triggers.
That way we'd lose the blocking because we can just fire the event and continue. Also their slowness/potential crashes would happen outside of our process.
A) is this a reasonable approach?
B) Is this possible? Can I scope an event beyond the scope of my app?
EDIT
I have since found other related questions to be of interest:
wcf cross application communication
Interprocess pubsub without network dependency
Listen for events in another application (This seems very close to what I'm after)
I guess I'm looking for the simplest approach but if we wanted to adopt this method across a number of other apps within our company we'd have some further challenges:
We have some older apps in VB6 and Delphi - from those I'd just like to be able to listen for their events in my (or a 3rd party's) newer C# apps or services.
For now, I'll look at:
Managed Spy and http://pubsub.codeplex.com
No, events are only usable by code that's loaded into your own process. If you don't trust these people now, you really don't want to expose yourself to shoddy code that you load into your own process and throws unhandled exceptions that terminate your app. You'll get the phone call, not them. Besides, they'll use such an event to run code that slows down your app.
In general, anything you do with a dbase will run with an entirely unpredictable amount of overhead. Not just because of triggers added by others, the dbase server could simply be bogged down by other work and naturally slow down over time as it stores more and more data. Make sure that doesn't make your app difficult or unpleasant to use: dbase operations typically must run on a worker thread or be done asynchronously with, say, BeginExecuteXxxx(). And make it obvious in your UI that progress is stalled by the dbase server, not by any code that you wrote. Saves you from having to do a lot of explaining.
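As an illustration of that last point, here is a minimal sketch of keeping database work off the UI thread. It uses the newer async/await ADO.NET calls rather than the BeginExecuteXxxx pattern mentioned above, and the connection string, table and query are placeholders:

    using System.Data.SqlClient;
    using System.Threading.Tasks;

    class OrderRepository
    {
        public static async Task SaveAsync(string connectionString, string payload)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                await connection.OpenAsync();
                using (var command = new SqlCommand("INSERT INTO Orders (Payload) VALUES (@p)", connection))
                {
                    command.Parameters.AddWithValue("@p", payload);
                    await command.ExecuteNonQueryAsync();   // the UI thread stays free while the server works
                }
            }
        }
    }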
What you're looking to do is basically to send messages to other processes. For this, you need some sort of IPC mechanism. Since it sounds like you'll have multiple listeners to each message, a mailslot is probably the best way. Unfortunately, .NET doesn't have built-in support for mailslots, so you'll have to use P/Invoke.
If you're looking for a built-in solution, then you could use named pipes, WCF, .NET Remoting, or bare TCP or UDP. With any of these, though, you'll have to loop through all of your listeners and send the message one at a time to each of them, which is not that big of a deal, but maintaining the separate connections is a little more of a hassle.
Note that with WCF and .NET Remoting, you're pretty much limiting your clients to using .NET as well. If your clients might be native or some other platform, then mailslots, named pipes, and TCP/IP are your best bet.
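As a rough idea of the named-pipe route, the publishing side might look something like this (System.IO.Pipes assumed; the pipe name and message format are made up). Each connected listener gets its own server stream, and the publisher loops over all of them per event:

    using System.IO;
    using System.IO.Pipes;

    class EventPublisher
    {
        // One server stream per connected 3rd-party listener.
        public static NamedPipeServerStream WaitForListener()
        {
            var pipe = new NamedPipeServerStream("MyAppEvents", PipeDirection.Out,
                                                 NamedPipeServerStream.MaxAllowedServerInstances);
            pipe.WaitForConnection();
            return pipe;
        }

        // Push one event notification to one listener; call this in a loop over all listeners.
        public static void Publish(NamedPipeServerStream pipe, string eventText)
        {
            var writer = new StreamWriter(pipe) { AutoFlush = true };
            writer.WriteLine(eventText);   // consider async writes so a slow listener cannot stall you
        }
    }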

Is it overkill to use a service bus if all messages are sent locally?

I have a mail-reading service that reads every email from an inbox, parses it, and inserts it into a database. The issue I'm running into is that there is no guarantee I will be parsing the emails in the order they were received (which is a business requirement). My fix would be to introduce some sort of queueing system. This way I would process the items in the order they came in. It would also give me the benefit of decoupling reading the emails from parsing/inserting them into the database.
So my question is: is it overkill to use a service bus (such as NServiceBus) if I only plan on sending messages locally? Meaning that the service that reads emails and the service that parses/inserts them into the database would reside on the same machine.
Thank you.
Yes, this is clearly overkill, especially since NServiceBus doesn't guarantee that messages are delivered in order.
You can just use a Queue<T>, assuming you know how to get the messages out in order (this appears to be where you are having trouble, not whether or not you use a queue: you have to know how to get the items into the queue in the right order to begin with).
KISS and YAGNI apply here, all day, every day.
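To make the Queue<T> suggestion concrete, a tiny sketch (the Email type and its ReceivedUtc property are invented for illustration): ordering is established before the queue by sorting on the received timestamp, and the queue merely preserves that order for the parser.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Email { public DateTime ReceivedUtc; public string Body; }

    class InboxProcessor
    {
        static void Process(IEnumerable<Email> fetched)
        {
            var queue = new Queue<Email>(fetched.OrderBy(e => e.ReceivedUtc));

            while (queue.Count > 0)
            {
                var email = queue.Dequeue();
                ParseAndInsert(email);   // parse and write to the database, strictly in received order
            }
        }

        static void ParseAndInsert(Email email) { /* ... */ }
    }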
I would just use MSMQ for your persistence issues. Once a message is in, it's guaranteed to be there, regardless of the machine losing power or some other application crashing.
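A hedged sketch of that, using the classic System.Messaging API on .NET Framework with a made-up queue path; setting Recoverable is what makes the message survive a power loss, since it is written to disk rather than held only in memory:

    using System.Messaging;

    class MailQueue
    {
        const string Path = @".\Private$\incoming-mail";

        public static void Enqueue(string rawEmail)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            using (var queue = new MessageQueue(Path))
            {
                var message = new Message(rawEmail) { Recoverable = true }; // persisted to disk
                queue.Send(message);
            }
        }
    }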
The word "would" is what I don't like. In my opinion: make your system as flexible as possible without exceeding the limits of acceptable performance for your application (limits only you can know).
In general: be prepared for the worst marketing decision you can think of.
It depends. For your application, I agree with Jason: a service bus will not help you process messages in order any more than a local data structure will. And, as Jason said, it will most likely be more difficult, considering the order of messages in a service bus is not guaranteed.
However, sending messages locally with a service bus can be very useful. It makes it very easy to send messages to other processes asynchronously. Since the consumer of the message is in a different process, you don't really have any threading concerns. Messages can be durable so you don't have to worry about something being missed, and it's very easy to add additional processing for a message after the fact by just adding a new subscriber. As an extra bonus, if the system ever becomes too big to run comfortably on one machine, it would be trivial to distribute the bus.
For your solution, it is unnecessary and might even cause issues. But there are cases where it makes sense to use a service bus locally.
This is the kind of job where ZeroMQ works well, and the side benefit to you is that you learn how to use a tool which can be used with other languages and on other platforms as well.

What is the simplest way to asynchronously communicate between C++ and C# applications

I have a C++ application that needs to communicate with a C# application (a Windows service) running on the same machine. I want the C++ application to be able to write as many messages as it wants, without knowing or caring when/if the C# app is reading them, or even whether it's running. The C# app should just be able to wake up every now and then and request the latest messages, even if the C++ app has been shut down.
What is the simplest way to achieve this? I think this kind of thing is what MSMQ is for, but I haven't found a good way to do it in C++. I'm using named pipes right now, but that's not really working out: the way I'm doing it requires a connection between the two apps, and the C++ call to WriteLine blocks until the read takes place.
Currently the best solution I can think of is just writing the messages to a file with a timestamp on each message that the C# application checks periodically against its last update timestamp. That seems a little crude, though.
What is the simplest way to achieve this sort of messaging?
I would use a named pipe.
Well, the simplest way actually is using a file to store the messages. I would suggest using an embedded database like SQLite, though: the advantage will be better performance and a nice way to query for changes (i.e. SELECT * FROM messages WHERE timestamp > last_app_start).
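A sketch of what the C# reader side might look like, assuming the Microsoft.Data.Sqlite package and a hypothetical messages(id, timestamp, body) table that the C++ side writes to:

    using Microsoft.Data.Sqlite;

    class MessageReader
    {
        public static void ReadNewMessages(string dbPath, long lastSeenTimestamp)
        {
            using var connection = new SqliteConnection($"Data Source={dbPath}");
            connection.Open();

            var command = connection.CreateCommand();
            command.CommandText =
                "SELECT id, timestamp, body FROM messages WHERE timestamp > $since ORDER BY timestamp";
            command.Parameters.AddWithValue("$since", lastSeenTimestamp);

            using var reader = command.ExecuteReader();
            while (reader.Read())
            {
                var body = reader.GetString(2);
                // process the message, then remember reader.GetInt64(1) as the new lastSeenTimestamp
            }
        }
    }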
MSMQ definitely sounds like what you want. The more basic alternative is reading and writing files in a common area, but then you need to watch for contention on the files.
VC++ help on MSMQ.
The requirement that both apps are not always running at the same time but can still message each other definitely means you need a third component to store/queue messages. Whether you use a shared database/file or write a third app that acts as a message store is up to you. Either way, you will find sharing always causes contention.
Personally I would look at 0MQ before MSMQ, but neither will solve your problem as-is. An SQLite database would be my first choice.