ActiveMQ NMS consumer (C#) can't receive old messages: My C# program runs in a while loop, operating on each message received.
I'm establishing an NMS consumer connection each time I need a message, then operating on the message received.
The problem: whenever I start the program, messages posted after my program's first connection attempt are downloaded/consumed fine.
However, if no new messages are flowing in and some old messages were already sitting on the queue before I establish the first connection, those messages are not getting consumed. I call connection.Start() properly, and I'm using consumer.Receive(0), where 0 is the wait time.
The NMS consumer example contains the following line:
// Consume a message
ITextMessage message = consumer.Receive() as ITextMessage;
When running this code, it may look as if the consumer is returning null (but it isn't).
The problem here is that consumer.Receive() returns an IMessage, which is not always of type ITextMessage. In my case it was returning an ActiveMQBytesMessage, which is a completely different type, and casting it with as ITextMessage returns null.
The following code would be more appropriate as an example:
// Consume a message
IMessage message = consumer.Receive();
Any time you call a consumer receive with no timeout you run the risk of not getting a message and must be prepared for that. The consumer doesn't query the broker for a message on a call to receive, so if a message hasn't been dispatched yet, or is still in flight, the receive will return a null message.
Creating a new connection on each attempt to get a message is really an anti-pattern; you should consider using a long-lived connection / consumer to better avoid this situation, although you can't completely mitigate it as long as you are doing a receive(0).
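To illustrate, here is a hedged sketch of that long-lived pattern with the Apache.NMS / Apache.NMS.ActiveMQ client (the broker URI, queue name, and 5-second wait are illustrative assumptions, not from the question), including the IMessage type check discussed above:

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Sketch: one connection/session/consumer kept open for the life of the program.
IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
using IConnection connection = factory.CreateConnection();
connection.Start();                                   // without Start() no messages are dispatched

using ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
using IMessageConsumer consumer = session.CreateConsumer(session.GetQueue("my.queue"));

while (true)
{
    // A finite wait instead of Receive(0) gives dispatch a chance to deliver
    // messages that were already sitting on the queue before we connected.
    IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
    if (message == null) continue;                    // nothing arrived in this window

    switch (message)
    {
        case ITextMessage text:
            Console.WriteLine(text.Text);
            break;
        case IBytesMessage bytes:
            Console.WriteLine($"binary message, {bytes.BodyLength} bytes");
            break;
        default:
            Console.WriteLine(message.GetType().Name);
            break;
    }
}

Keeping the connection open and using a finite Receive timeout avoids repeatedly paying the connection setup cost and gives already-queued messages a chance to be dispatched.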
I'm getting the following exception when trying to respond to a RabbitMQ exclusive queue using Rebus.
- e {"Queue 'xxxx-xxxx' does not exist"} Rebus.Exceptions.RebusApplicationException
+ InnerException {"The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=405, text=\"RESOURCE_LOCKED - cannot obtain exclusive access to locked queue 'xxxx-xxxx' in vhost '/'. It could be originally declared on another connection or the exclusive property value does not match that of the original d...\", classId=50, methodId=10, cause="} System.Exception {RabbitMQ.Client.Exceptions.OperationInterruptedException}
The client declares the queue as exclusive and is able to successfully send the message to the server. The server processes the message but throws the exception when sending the response.
I can see in the Rebus source code (Rebus.RabbitMq.RabbitMqTransport.cs) that it attempts a model.QueueDeclarePassive(queueName) which throws the above exception.
I found the following statement here:
RabbitMQ extends the exclusivity to queue.declare (including passive declare), queue.bind, queue.unbind, queue.purge, queue.delete, basic.consume, and basic.get
Modifying the Rebus source to simply return true from the CheckQueueExistence method allows the response message to be sent. So my question is, is this an issue in Rebus with the use of the passive declare on an exclusive queue, is RabbitMQ blocking the call, or is there a fundamental concept I'm missing?
The reason Rebus does the model.QueueDeclarePassive(queueName) thing is that it tries to help you by verifying the existence of the destination queue before sending to it.
This is to avoid the situation where a sender goes
using var bus = Configure.With(...)
    .(...)
    .Routing(r => r.TypeBased().Map<string>("does-not-exist"))
    .Start();

await bus.Send("I'M LOST 😱");
and the message is lost.
The problem here is that RabbitMQ still uses a routing key to match the sent message to a binding pointing towards a queue, and if no matching binding exists (even when using the DIRECT exchange type) the message is simply routed to 0 queues, and thus it is lost.
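A minimal illustration of that behavior with the raw RabbitMQ .NET client (not Rebus itself; the host name and the use of the built-in amq.direct exchange are assumptions): publishing with a routing key that no queue is bound to does not fail, the message simply goes nowhere.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// No queue is declared or bound for "does-not-exist", yet BasicPublish succeeds
// and the message is silently routed to zero queues.
channel.BasicPublish(exchange: "amq.direct",
                     routingKey: "does-not-exist",
                     basicProperties: null,
                     body: Encoding.UTF8.GetBytes("I'M LOST"));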
If you submit a PR that makes it configurable whether to check that destination queues exist, then I'd be happy to (help you get it right and) accept it.
Basically the title... I'd like to have some feedback on whether the NamedPipeServerStream object successfully received a value. This is the starting code:
using System;
using System.IO.Pipes;
using System.Text;

static void Main(string[] args)
{
    Console.WriteLine("Client running!");
    NamedPipeClientStream npc = new NamedPipeClientStream("somename");
    npc.Connect();
    // npc.WriteTimeout = 1000; does not work, says it is not supported for this stream
    byte[] message = Encoding.UTF8.GetBytes("Message");
    npc.Write(message);
    int response = npc.ReadByte();
    Console.WriteLine("response: " + response);
}
I've implemented a small echo message from the NamedPipeServerStream on every read. I imagine I could add some async timeout to check whether npc.ReadByte() returned a value within, let's say, 200 ms, similar to how TCP packets are ACKed.
Is there a better way of inspecting whether namedPipeClientStream.Write() was successful?
I'd like to have some feedback on whether the NamedPipeServerStream object successfully received a value
The only way to know for sure that the data you sent was received and successfully processed by the client at the remote endpoint, is for your own application protocol to include such acknowledgements.
As a general rule, you can assume that if your send operations are completing successfully, the connection remains viable and the remote endpoint is getting the data. If something happens to the connection, you'll eventually get an error while sending data.
However, this assumption only goes so far. Network I/O is buffered, usually at several levels. Any of your send operations almost certainly involve doing nothing more than placing the data in a local buffer for the network layer. The method call for the operation will return as soon as the data has been buffered, without regard for whether the remote endpoint has received it (and in fact, almost never will have by the time your call returns).
So if and when such a call throws an exception or otherwise reports an error, it's entirely possible that some of the previously sent data has also been lost in transit.
How best to address this possibility depends on what you're trying to do. But in general, you should not worry about it at all. It will typically not matter if a specific transmission has been received. As long as you can continue transmitting without error, the connection is fine, and asking for acknowledgement is just unnecessary overhead.
If you want to handle the case where an error occurs, invalidating the connection and forcing you to retry, and you want to make the broader operation resumable (e.g. you're streaming some data to the remote endpoint and want to ensure all of the data has been received, without having to resend data that has already been received), then you should build the ability to resume into your application protocol. For example, on reconnecting, the remote endpoint reports the number of bytes it has received so far, or the most recent message ID, or whatever it is your application protocol needs in order to know where to start sending again.
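As a concrete, deliberately minimal sketch of such an application-level acknowledgement, assuming the server echoes a single status byte per message as the questioner describes: the class and method names are illustrative, the 200 ms figure is the questioner's own suggestion, and a cancellable async read requires the pipe to be opened with PipeOptions.Asynchronous on modern .NET.

using System;
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

static class PipeClientWithAck
{
    // Returns true only if the server echoed its one-byte ack within the timeout.
    public static async Task<bool> SendWithAckAsync(string pipeName, byte[] payload, TimeSpan ackTimeout)
    {
        // PipeOptions.Asynchronous is needed so the read below can be cancelled.
        await using var npc = new NamedPipeClientStream(".", pipeName, PipeDirection.InOut, PipeOptions.Asynchronous);
        await npc.ConnectAsync();
        await npc.WriteAsync(payload);          // "success" here only means the data was handed to the OS

        var ack = new byte[1];
        using var cts = new CancellationTokenSource(ackTimeout);
        try
        {
            int read = await npc.ReadAsync(ack, cts.Token);
            return read == 1;                   // server's echo arrived in time
        }
        catch (OperationCanceledException)
        {
            return false;                       // no ack within ackTimeout (e.g. 200 ms)
        }
    }
}

It could then be called along the lines of await PipeClientWithAck.SendWithAckAsync("somename", Encoding.UTF8.GetBytes("Message"), TimeSpan.FromMilliseconds(200)). Note that even a true return only tells you the server echoed an ack; it says nothing about what the server did with the data afterwards.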
See also this very closely-related question (arguably even an actual duplicate…though it doesn't mention named pipes specifically, pretty much all network I/O involves similar issues):
Does TcpClient write method guarantees the data are delivered to server?
There's a good answer there, as well as links to even more useful Q&A in that answer.
I'd like to write a parallel execution module based on Solace, and I use a request-reply scheme for this.
I have:
Multiple message consumers, which publish messages into the same queue.
Multiple message producers, which read the queue and create reply messages.
Message execution time is between 10 seconds and 10 minutes.
Queue access type is non-exclusive (i.e. it round-robins between all consumers).
Each producer and consumer is asynchronous, i.e. the Solace API blocks execution only during connection.
What I'd like to have: while a producer is working on a message, it should not receive any other messages. This is extremely important, because some tasks block an executor for several minutes, while other executors can be free after a couple of seconds.
The scheme below could work, but it contains blocking code, which I'd like to avoid.
while (true)
{
    var inputMessage = flow.ReceiveMsg(/*timeout 1s*/ 1_000); // <--- blocking code, I'd like to avoid it
    flow.Ack(inputMessage.ADMessageId);
    var reply = await ProcessMessageAsync(inputMessage); // execute plus handle exceptions
    session.SendReply(inputMessage, reply);
}
Messages are only pushed to the consuming applications.
That being said, your desired behavior can be obtained by setting "max-delivered-unacked-msgs-per-flow" on your queue to 1.
This means that each consumer bound to the queue is only allowed to have one outstanding unacknowledged message.
The next message will only be sent to the consumer after it has acknowledged the current message.
Details about this feature can be found here.
Do note that your code snippet does not appear to be valid.
IFlow.ReceiveMsg is only used in transacted sessions, which make use of ITransactedSession.Commit to acknowledge messages.
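To sketch what an event-driven (non-blocking) consumer could look like, here is a rough outline against the Solace .NET API rather than working code: the CreateFlow overload, MessageAckMode.ClientAck, and IFlow.Start() are how I recall the SolaceSystems.Solclient.Messaging API and should be verified against the docs, and the queue name and ProcessMessageAsync are placeholders carried over from the question.

// Sketch: an event-driven flow with client acks instead of the blocking ReceiveMsg loop.
// Combined with max-delivered-unacked-msgs-per-flow = 1 on the queue, the broker will
// not push another message to this consumer until the current one is acknowledged.
var flowProperties = new FlowProperties
{
    AckMode = MessageAckMode.ClientAck          // ack explicitly, only after processing
};
var queue = ContextFactory.Instance.CreateQueue("task-queue");   // hypothetical queue name

IFlow flow = null;
flow = session.CreateFlow(flowProperties, queue, null,
    async (sender, args) =>
    {
        var inputMessage = args.Message;
        var reply = await ProcessMessageAsync(inputMessage);    // may take 10 s .. 10 min
        session.SendReply(inputMessage, reply);
        flow.Ack(inputMessage.ADMessageId);                     // only now may the broker deliver the next message
    },
    (sender, args) => { /* flow lifecycle events (up, down, bind failure, ...) */ });

flow.Start();

Because the flow is in client-ack mode and the queue allows only one unacknowledged message per flow, deferring the Ack until after the reply has been sent is what keeps a busy executor from being handed further work.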
I understand that RabbitMQ with ack, by default, will re-queue the message if it detects that the consumer/worker has died.
What about the situation where the consumer/worker is still alive but its process has stalled out for too long and didn't ack?
I would like to set an explicit time that says that if a message has been dispatched to a consumer but that consumer has held the message without ack for too long that the message gets re-queued.
I recognize that this might result in messages getting processed in duplicate but sometimes the consequence of that is not as bad as delayed message delivery.
It can also happen with errant exception handling: if something gets swallowed, the task terminates, and the message is never ack'd and never re-queued.
A timeout for a RabbitMQ consumer can be explicitly set on the consumer side. I think this is clear, but just to mention it - there must not be any automatic ACKs in this case. The solution would be for the consumer to be multithreaded, with one thread doing message processing and ACKing the message only after it has been processed, and the other thread being a timeout thread that would do one of the following:
terminate the connection to the broker once the timeout has expired, so that as a consequence the message is requeued;
ACK the received message and re-publish it (explicitly); or
NACK the received message - but based on the documentation (instructing the broker to either discard them or requeue them), it seems that some config should be set instructing the broker what it should do with NACKed messages.
Now all this implies that at least some part of the process isn't stuck. If the whole process is stuck, presumably the broker heartbeat towards the consumer stops, and that is how the broker knows that the consumer died (honestly, I didn't test this situation, so I'm assuming). But if that is not the case (or simply to be extra safe), you could add some kind of watchdog process that pings the consumer(s) and kills them if there's no reply, which again would lead to the messages not being ACKed and being requeued.
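A hedged sketch of that idea with the RabbitMQ .NET client (6.x-style API; the host, queue name, ProcessMessage, and the 5-minute limit are all illustrative assumptions): manual acks plus a processing timeout, where an overrun closes the connection so the broker requeues the unacked message.

using System;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();
channel.BasicQos(0, 1, false);                      // at most one unacked message per consumer

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var work = Task.Run(() => ProcessMessage(ea.Body.ToArray()));
    if (work.Wait(TimeSpan.FromMinutes(5)))
    {
        channel.BasicAck(ea.DeliveryTag, multiple: false);   // processed in time
    }
    else
    {
        // Timed out: dropping the connection makes the broker requeue every unacked message.
        connection.Close();
    }
};

channel.BasicConsume(queue: "work-queue", autoAck: false, consumer: consumer);
Console.ReadLine();                                 // keep the consumer alive

static void ProcessMessage(byte[] body)
{
    // hypothetical handler for the actual work
}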
I am using WMQ to access an IBM WebSphere MQ on a mainframe - using c#.
We are considering spreading out our service on several machines, and we then need to make sure that two services on two different machines cannot read/get the same MQ message at the same time.
My code for getting messages is this:
var connectionProperties = new Hashtable();
const string transport = MQC.TRANSPORT_MQSERIES_CLIENT;
connectionProperties.Add(MQC.TRANSPORT_PROPERTY, transport);
connectionProperties.Add(MQC.HOST_NAME_PROPERTY, mqServerIP);
connectionProperties.Add(MQC.PORT_PROPERTY, mqServerPort);
connectionProperties.Add(MQC.CHANNEL_PROPERTY, mqChannelName);
_mqManager = new MQQueueManager(mqManagerName, connectionProperties);
var queue = _mqManager.AccessQueue(_queueName, MQC.MQOO_INPUT_SHARED + MQC.MQOO_FAIL_IF_QUIESCING);
var queueMessage = new MQMessage {Format = MQC.MQFMT_STRING};
var queueGetMessageOptions = new MQGetMessageOptions {Options = MQC.MQGMO_WAIT, WaitInterval = 2000};
queue.Get(queueMessage, queueGetMessageOptions);
queue.Close();
_mqManager.Commit();
return queueMessage.ReadString(queueMessage.MessageLength);
Is WebSphere MQ transactional by default, or is there something I need to change in my configuration to enable this?
Or - do I need to ask our mainframe guys to do some of their magic?
Thx
Unless you actively BROWSE the message (i.e. read it but leave it there with no locks), only one getter will ever be able to 'get' the message. Even without transactionality, MQ will still only deliver the message once... but once delivered, it's gone.
MQ is not transactional 'by default' - you need to get with GMO_SYNCPOINT (MQ transactions) and commit at the connection (MQQueueManager) level if you want transactionality (integrating with .NET transactions is another option).
If you use syncpoint then one getter will get the message and the other will ignore it, but if you subsequently have an issue and roll back, then it is made available to any getter (as you would want). It is this scenario where you might see a message twice, but that's because you aborted the transaction and hence asked for it to be put back to how it was before the get.
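Building on the question's own snippet, a hedged sketch of the get under syncpoint; the constant and method names are from the IBM MQ classes for .NET as I recall them (e.g. MQC.MQGMO_SYNC_POINT and MQQueueManager.Backout()), so verify them against your client version.

// Sketch: get inside a unit of work, so the message stays locked to this getter
// until Commit(); a failure before then (or an explicit Backout()) hands it back.
var queue = _mqManager.AccessQueue(_queueName, MQC.MQOO_INPUT_SHARED + MQC.MQOO_FAIL_IF_QUIESCING);
var queueMessage = new MQMessage { Format = MQC.MQFMT_STRING };
var gmo = new MQGetMessageOptions
{
    Options = MQC.MQGMO_WAIT | MQC.MQGMO_SYNC_POINT,   // wait, and get under syncpoint
    WaitInterval = 2000
};

try
{
    queue.Get(queueMessage, gmo);
    var text = queueMessage.ReadString(queueMessage.MessageLength);
    // ... process the message here ...
    _mqManager.Commit();          // the message is permanently removed only now
    return text;
}
catch (MQException)
{
    _mqManager.Backout();         // put the message back for another getter
    throw;
}
finally
{
    queue.Close();
}

The point is simply that the message is only permanently removed at Commit(); anything that fails before then, or an explicit Backout(), returns it to the queue for another getter.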
I wish I'd found this sooner because the accepted answer is incomplete. MQ provides once and only once delivery of messages as described in the other answer and IBM's documentation. If you have many clients listening on the same queue, MQ will deliver only one copy of the message. This is uncontested.
That said, MQ, or any other async messaging for that matter, must deal with session handling and ambiguous outcomes. The effect of these factors is that any async messaging application should be designed to gracefully handle duplicate messages.
Consider an application putting a message onto a queue. If the PUT call receives a 2009 Connection Broken response, it is unclear whether the connection failed before or after the channel agent received and acted on the API call. The application, having no way to tell the difference, must put the message again to assure it is received. Doing the PUT under syncpoint can result in a 2009 on the COMMIT (or equivalent return code in messaging transports other than MQ) and the app doesn't know if the COMMIT was successful or if the PUT will eventually be rolled back. To be safe it must PUT the message again.
Now consider the partner application receiving the messages. A GET issued outside of syncpoint that reaches the channel agent will permanently remove the message from the queue, even if the channel agent cannot then deliver it. So use of transacted sessions ensures that the message is not lost. But suppose that the message has been received and processed and the COMMIT returns a 2009 Connection Broken. The app has no way to know whether the message was removed during the COMMIT or will be rolled back and delivered again. At the very least the app can avoid losing messages by using transacted sessions to retrieve them, but can not guarantee to never receive a dupe.
This is of course endemic to all async messaging, not just MQ, which is why the JMS specification directly addresses it. The situation is addressed in all versions, but in the JMS 1.1 spec look at section 4.4.13, Duplicate Production of Messages, which states:
If a failure occurs between the time a client commits its work on a Session and the commit method returns, the client cannot determine if the transaction was committed or rolled back. The same ambiguity exists when a failure occurs between the non-transactional send of a PERSISTENT message and the return from the sending method.
It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages.
A message that is redelivered due to session recovery is not considered a duplicate message.
If it is critical that the application receive one and only one copy of the message, use 2-Phase transactions. The transaction manager and XA protocol will provide very strong (but still not absolute) assurance that only one copy of the message will be processed by the application.
The behavior of the messaging transport in delivering one and only one copy of a given message is a measure of the reliability of the transport. By contrast, the behavior of an application which relies on receipt of one and only one copy of the message is a measure of the reliability of the application.
Any duplicate messages received from an IBM MQ transport are almost certainly going to be due to the application's failure to use XA to account for the ambiguous outcomes inherent in async messaging and not a defect in MQ. Please keep this in mind when the Production version of the application chokes on its first duplicate message.
On a related note, if Disaster Recovery is involved, the app must also gracefully recover from lost messages, or else find a way to violate the laws of relativity.