Peek into Message Queue using Rebus - C#

I'm trying to run a maintenance task. For this, I want to push a message on application start. When the actual handler runs, it does its work and defers the message to execute again a day later. I don't want to publish this message if it's already on the queue. Is there a way to peek into the queue? I'm using the SQL transport, so I tried to simply query the DB. However, the table is locked and cannot be read. One other thing to consider is that there are at least two machines running the same app. This is why I came up with this solution, since I want to circumvent concurrency issues.

It sounds to me like you're using the message queue as a scheduler.
If I were you, I would use a simple background timer (e.g. System.Timers.Timer) to periodically send a message to yourself, and then you can do your work in the message handler.
If your scheduling requirements are more complex, it might be beneficial to take a look at something like Quartz.NET.
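To make the timer suggestion concrete, here is a minimal sketch, assuming an IBus from your existing Rebus configuration; MaintenanceScheduler and MaintenanceMessage are invented names:

```csharp
using System;
using System.Timers;
using Rebus.Bus;

public class MaintenanceMessage { }

public class MaintenanceScheduler
{
    private readonly Timer _timer = new Timer(TimeSpan.FromDays(1).TotalMilliseconds);

    public MaintenanceScheduler(IBus bus)
    {
        // Each instance sends only to itself, so there is nothing to peek for;
        // just make the maintenance handler idempotent in case two machines
        // happen to run it around the same time.
        _timer.Elapsed += async (s, e) => await bus.SendLocal(new MaintenanceMessage());
        _timer.AutoReset = true;
        _timer.Start();
    }
}
```

If you also want the work to run once at startup, keep sending the initial message from application start as you do now; the timer then replaces the Defer chain entirely.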

Related

Hangfire with multiple servers

I am working on an application where I have multiple servers on different machines doing long operations for me. There is a Windows service running on those machines, written with Hangfire/Topshelf. Only one operation can run at a time per machine. Additionally, I want to do some status checks and cleanup jobs periodically on each server, so I can't just queue them as jobs.
Is there a way to do that in Hangfire? Also, is there a way to send a follow-up job to the same server as an earlier job?
ADD-ON: I know one possibility would be to add another Hangfire layer: make each of the services a Hangfire client with its own DB that serves itself, and then schedule recurring jobs for them, but that seems awfully complicated - especially when scaling out and adding servers.
If your task is to run some scheduled task on each server, I think the best option is to implement it yourself; Hangfire doesn't support event handling, only command handling. I think you have reached the limit of Hangfire's possibilities and need to switch to a more powerful and general tool.
For events and their handling you can use other systems, for example RabbitMQ: you specify an event producer and subscribe all your machines to that event.
I know this is a bit late, but the way we handle this sort of thing is just to write a simple console application and schedule it with Windows Task Scheduler.
You've probably resolved this by now, but:
1 - One job per server: as you have it, via the worker count. Probably the best option, since you can have multiple queues per server and the filters won't help you there.
2 - Should the cleanup run after each processing job?
If yes, you can create the cleanup job from within your processing job's execution (OK, maybe not a perfect design, but it works just fine) and assign it to a queue on the same server; just add some logic in filters to ensure a processing job is followed by a cleanup job and you're sorted.
Alternatively, you can use continuation jobs (as described on https://www.hangfire.io/). I haven't used these, but it sounds like they might do the trick.
If you just want to run the cleanup code periodically, schedule the job as recurring on each of the servers; a sketch of the per-server queue setup follows below.
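To make option 1 concrete, here is a sketch under assumptions (the method and queue names are invented, and it presumes your Topshelf service already configures Hangfire storage) in which each server hosts a machine-specific queue, so cleanup and follow-up jobs can be pinned to a particular server:

```csharp
using System;
using Hangfire;
using Hangfire.States;

public static class MaintenanceJobs
{
    public static void Cleanup() { /* status checks and cleaning here */ }
}

public class ServerSetup
{
    public static void Start()
    {
        // Assumes GlobalConfiguration.Configuration.UseSqlServerStorage(...) or
        // similar has already been called in the existing service.
        // Hangfire queue names must be lowercase letters, digits, and
        // underscores; sanitize the machine name further if needed.
        var serverQueue = Environment.MachineName.ToLowerInvariant();

        var options = new BackgroundJobServerOptions
        {
            WorkerCount = 1,                          // one job at a time per machine
            Queues = new[] { serverQueue, "default" } // plus a machine-specific queue
        };
        var server = new BackgroundJobServer(options);

        // A follow-up job pinned to this server: enqueue into its private queue.
        var client = new BackgroundJobClient();
        client.Create(() => MaintenanceJobs.Cleanup(), new EnqueuedState(serverQueue));
    }
}
```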

RabbitMQ -> Distributed work queue with confirmation that work has completed

Does RabbitMQ (called from a C# client) have the ability to distribute work on a queue and allow the publisher to receive confirmations that the work was processed successfully?
It seems like it should be possible without adding an extra queue, but unless I'm missing something, acknowledgements/confirms don't tell the original publisher that a message was dealt with successfully. So it has no way of knowing if all of its work was handled.
I'm currently using the standard RabbitMQ C# client, but I know EasyNetQ is also very mature, so suggestions for a good way to achieve this with either would be appreciated.
No, absolutely nothing in RabbitMQ will do that. The most you get out of RabbitMQ is an acknowledgment that the message was delivered to a worker, which you may interpret as "someone started work on the task". Your workers will need to find a way to communicate the results of the task back to the caller. That could well be another exchange-queue mechanism, but it is more likely that your workers will put the results of the task in Redis or a database, and, if properly written, communicate failure codes the same way.
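For the exchange-queue variant, here is a rough worker-side sketch using the standard RabbitMQ.Client package (the queue name, host, and payload are placeholders): the broker ack only confirms delivery, so the worker also publishes a result to a reply queue named by the producer, correlated by ID:

```csharp
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    // ... perform the actual work here ...

    var props = channel.CreateBasicProperties();
    props.CorrelationId = ea.BasicProperties.CorrelationId; // lets the producer match replies to requests
    channel.BasicPublish(exchange: "",
                         routingKey: ea.BasicProperties.ReplyTo, // reply queue named by the producer
                         basicProperties: props,
                         body: Encoding.UTF8.GetBytes("ok"));

    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);
```

The producer declares the reply queue itself (e.g. an exclusive, server-named queue), passes its name in ReplyTo, and matches incoming replies to outstanding work by CorrelationId.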

Azure Service Bus queue message handling

I have two consumers (different applications) connected to an Azure queue. I can either ReceiveAndDelete or PeekLock the messages and during consumption I can complete() or abandon() the message. Ref: http://msdn.microsoft.com/en-us/library/azure/hh851750.aspx.
I'm sure I want to use PeekLock and then abandon() the messages, as I want them to be received in both applications. I figured I'd set the message lifetime to 10 seconds on the queue as a deletion mechanism.
However, while the messages do seem to be deleted after 10 seconds, they keep being delivered to both applications over and over again during those 10 seconds. Should I create some custom duplicate detection, or am I using the wrong approach in general?
In my experience, when you use PeekLock you will almost always need to finalize using the Complete method. The Abandon method is what creates the duplication, as the message is never marked as done.
Have you considered using the Service Bus Topics/Subscriptions pattern, or perhaps Filters? If I understand your scenario correctly, it may be just what you need. You can send 2 messages to the queue with different topics or filters designating which app it is for.
http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
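For illustration, a minimal sketch of the topic/subscription approach using the current Azure.Messaging.ServiceBus package (the original question predates this client; the topic name, subscription name, and connection string are placeholders):

```csharp
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";

// Publisher: one send to the topic; every subscription gets its own copy,
// so neither app can "steal" the other's message.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("events");
await sender.SendMessageAsync(new ServiceBusMessage("payload"));

// Consumer in app 1: peek-lock is the default receive mode; completing the
// message only removes app 1's copy from its own subscription.
ServiceBusReceiver receiver = client.CreateReceiver("events", "app1");
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
// ... process ...
await receiver.CompleteMessageAsync(message);
```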

IBM-MQ reader in .NET XMS to ack processed messages one-by-one

I am implementing a component that reads all the messages off a specific queue as they become available, but should only remove messages from the queue asynchronously, after the message contents have been processed and persisted. We read messages off faster than we acknowledge them (e.g. we could have read 10 messages off before we are ready to ack the first). The current implementation uses the XMS API, but we can switch to MQI if XMS is inappropriate for these purposes.
We have tried two approaches to try solve this problem, but both have drawbacks that make them unacceptable. I was hoping that someone could suggest a better way.
The first implementation uses an IMessageConsumer in a dedicated thread to read all messages and publish their content as they are available. When the message has been processed, the message.Acknowledge() method is called. The Session is created with AcknowledgeMode.ClientAcknowledge. The problem with this approach is that, as per the documentation, this acknowledges (and deletes) ALL unacknowledged messages that have been received. With the example above, that would mean that all 10 read messages would be acked with the first call. So this does not really solve the problem. Because of the reading throughput we need, we cannot really modify this solution to wait for the first message's ack before reading the second, etc.
The second implementation uses an IQueueBrowser in a dedicated thread to read all messages and publish their content. This does not delete the messages off the queue as it reads. A separate dedicated thread then waits (on a BlockingQueue) for the JMS Message IDs of messages that have been processed. For each of these, it then constructs a dedicated IMessageConsumer (using a message selector on JMSMessageID) to read off the message and ack it. (This pairing of an IQueueBrowser with a dedicated IMessageConsumer is recommended by the XMS documentation's section on queue browsers.) This method does work as expected but, as one would imagine, it is too CPU-intensive on the MQ server.
Both of the methods proposed in the question appear to rely on a single instance of the app. What's wrong with using multiple app instances, transacted sessions and COMMIT? The performance reports (these are the SupportPacs with names like MP**) all show that throughput is maximized with multiple app instances, and horizontal scaling is one of the most used approaches in your scenario.
The design for this would be either multiple application instances or multiple threads within the same application. The key to making it work correctly is to keep in mind that transactions are scoped to a connection handle. The implication is that a multi-threaded app must dispatch a separate thread for each connection instance and the messages are read in the same thread.
The process flow is that, using a transacted session, the app performs a normal MQGet against the queue, processes the message contents as required and then issues an MQCommit. (I'll use the MQ native API names in my examples because this isn't language dependent.) If this is an XA transaction the app would call MQBegin to start the transaction but for single-phase commit the transaction is assumed. In both cases, MQCommit ends the transaction which removes the messages from the queue. While messages are under syncpoint, no other app instance can retrieve them; MQ just delivers the next available message. If a transaction is rolled back, the next MQGet from any thread retrieves it, assuming FIFO delivery.
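A sketch of that flow with the IBM MQ .NET classes (IBM.WMQ namespace), using single-phase commit and invented queue manager and queue names:

```csharp
using IBM.WMQ;

// Each thread gets its own MQQueueManager, since transactions are scoped
// to the connection handle.
var qMgr = new MQQueueManager("QMGR1");
var queue = qMgr.AccessQueue("WORK.QUEUE",
    MQC.MQOO_INPUT_SHARED | MQC.MQOO_FAIL_IF_QUIESCING);

var gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_SYNCPOINT     // get under transaction
            | MQC.MQGMO_WAIT
            | MQC.MQGMO_FAIL_IF_QUIESCING;
gmo.WaitInterval = 5000;              // wait up to 5 seconds for a message

var message = new MQMessage();
queue.Get(message, gmo);              // now invisible to all other consumers
try
{
    // ... process and persist the message contents ...
    qMgr.Commit();                    // removes the message from the queue
}
catch
{
    qMgr.Backout();                   // message becomes available again
    throw;
}
```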
There are some samples in:
[WMQ install home]\tools\dotnet\samples\cs\xms\simple\wmq\
...and SimpleXAConsumer.cs is one example that shows the XA version of this. The non-XA version is simpler since you don't need the external coordinator, the MQBegin verbs and so forth. If you start with one of these, make sure that they do not specify exclusive use of the queue and you can fire up several instances using the same configuration. Alternatively, take the portion of the sample that includes creation of the connection, message handling, connection close and destroy, and wrap all that in a thread spawner class.
[Insert usual advice about using the latest version of the classes here.]

Should I have a thread that sleeps then calls a method?

I'm wondering whether this would work. I have a simple C# command-line application. It sends out emails at a set time (through the Windows scheduler).
I am wondering: if the SMTP server were to fail, would this be a good idea?
In the SmtpException handler I put a thread sleep for, say, 15 minutes. When it wakes up, it just calls the method again. This time, hopefully, the SMTP server would be back up. If not, it would keep doing this until the SMTP server is back online.
Is there some downside that I am missing here? I would of course do some logging that this is happening.
This is not a bad idea; in fact, what you are effectively implementing is a simple variation of the circuit-breaker pattern.
The idea behind the pattern is that if an external resource is down, it will probably not come back up a few milliseconds later; it might need some time to recover. Typically the circuit-breaker pattern is used as a means to fail fast, so that the user gets an error sooner, or in order not to consume more resources on the failing system. When you have work that can be put in a queue and does not require instant delivery, as you do, it is perfectly reasonable to wait around for the resource to become available again.
Some things to note though: You might want to have a maximum count of retries, before failing completely, and you might want to start off with a delay less than 15 minutes.
Exponential back-off is the common choice here, I think, like the strategy TCP uses when trying to make a connection: double the timeout on each failed attempt. It prevents your program from flooding the event log with repeated failure notifications before somebody notices that something is wrong, which can take a while.
However, using the task scheduler certainly doesn't help. You really ought to reprogram it so your program isn't consuming machine resources needlessly. But using the ITaskService interface from .NET isn't that easy. Check out this project.
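A sketch combining both suggestions (exponential back-off with a maximum-attempt cap); the helper name, starting delay, and limit are all illustrative:

```csharp
using System;
using System.Net.Mail;
using System.Threading;

static class Mailer
{
    public static void SendWithRetry(SmtpClient client, MailMessage message, int maxAttempts = 5)
    {
        var delay = TimeSpan.FromMinutes(1);   // start small, not 15 minutes
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                client.Send(message);
                return;
            }
            catch (SmtpException ex) when (attempt < maxAttempts)
            {
                Console.Error.WriteLine(
                    $"Attempt {attempt} failed ({ex.StatusCode}); retrying in {delay}");
                Thread.Sleep(delay);
                delay += delay;                // double the delay each time, TCP-style
            }
        }
    }
}
```

Once maxAttempts is reached, the exception filter stops matching and the SmtpException propagates, giving you the fail-after-N-attempts behavior suggested further down.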
I would strongly recommend using a Windows Service. Long-running processes that run in the background, wait for long periods of time and need a controlled, logged, 'monitorable' lifetime: it's what Windows Services do.
Thread.Sleep would do the job, but if you want it to be interruptible from another thread or anything else going on, I would recommend Monitor.Wait (MSDN ref). Then you can run your process in a thread created and managed by the service, and if you need to stop/interrupt, you Monitor.Pulse on the same sync object and the thread will come back to life.
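A minimal sketch of that wait/pulse arrangement, with invented class and method names:

```csharp
using System;
using System.Threading;

class Worker
{
    private readonly object _sync = new object();
    private bool _stopRequested;

    public void Run()
    {
        lock (_sync)
        {
            while (!_stopRequested)
            {
                DoWork();
                // Sleep for 15 minutes, but wake immediately if Stop() pulses us.
                // Monitor.Wait releases the lock while waiting and reacquires it on wake.
                Monitor.Wait(_sync, TimeSpan.FromMinutes(15));
            }
        }
    }

    public void Stop()
    {
        lock (_sync)
        {
            _stopRequested = true;
            Monitor.Pulse(_sync);   // wakes the waiting worker thread
        }
    }

    private void DoWork() { /* send the emails here */ }
}
```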
Also ref:
Best architecture for a 30 + hour query
Infinite loops are always a worry. You should set it up so that it will fail after N attempts, and you definitely should have some way to shut it down from the user console.
Failure is not such a bad thing when the failure isn't yours. Let it fail and report why it failed.
Your choices are limited, assuming that it is just a temporary condition and that it has worked at some point. The only thing you can do is notify of the problem, get somebody to fix it, and then retry the operation later. The only other thing you need to do is safeguard the messages so that you do not lose any.
If you stick with what you've got, watch out for concurrency; perhaps use a named mutex to ensure only a single process is running at a time.
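A short sketch of that guard (the mutex name is illustrative; the Global\ prefix makes it machine-wide across sessions):

```csharp
using System;
using System.Threading;

using var mutex = new Mutex(initiallyOwned: false, name: @"Global\MyEmailSender");
if (!mutex.WaitOne(TimeSpan.Zero))
{
    // Another copy holds the mutex; bail out immediately rather than block.
    Console.WriteLine("Another instance is already running; exiting.");
    return;
}
try
{
    // ... send the emails ...
}
finally
{
    mutex.ReleaseMutex();
}
```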
I send out notifications to all our developers in a similar fashion. Only, I store the message body and subject in the database. After a message has been successfully processed, I set a success flag in the database. This way it's easy to track and report errors, and retries are a cakewalk.
