I am using the NetMQ (ZeroMQ) package in C# to receive messages as a subscriber. I am able to receive messages, but I get stuck in the while loop. I want to break out of the loop once the publisher has stopped sending data.
Here is my subscriber code:
using (var subscriber = new SubscriberSocket())
{
    subscriber.Connect("tcp://127.0.0.1:4000");
    subscriber.Subscribe("A");
    while (true)
    {
        var msg = subscriber.ReceiveFrameString(); // blocks forever if nothing arrives
        Console.WriteLine(msg);
    }
}
Q : "How to check ZMQ publisher is alive or not in c# ?"
A :There are at least two ways to do so :
a ) Modify the code on both the PUB-side and the SUB-side, so that the Publisher sends not only the PUB/SUB-channel messages but also, independently of those, PUSH/PULL keep-alive messages that prove to the SUB-side it is still alive, received autonomously as confirmations from a PULL-AccessPoint inside the SUB-side loop. Not receiving such a soft keep-alive message for some time lets the SUB-side loop safely decide to break. The same principle can be served by a reversed PUSH/PULL-channel, where the SUB-side from time to time asks the PUB-side, listening on the PULL-side, via an asynchronously sent soft-request message, to inject a soft keep-alive message into the PUB-channel ( remember the TOPIC-filter is plain ASCII filtering from the left of the message-payload, so the PUSH-delivered message could as easily carry the exact text to be looped back via PUB/SUB to the sender, matching the locally known TOPIC-filter maintained by the very same SUB-side entity ).
b ) In cases where you cannot modify the PUB-side code, we can still set up a time-based counter which, once it expires without a single message having been received, lets the SUB-side loop break autonomously, as requested above. This can be done either with a loop of a known multiple of precisely timed aSUB.poll( ... ) calls, which also allows a few priority-ordered, interleaved control-loops to operate without uncontrolled mutual blocking, or with a straight, non-blocking aSUB.recv( zmq.NOBLOCK ) aligned inside the loop with some busy-loop-avoiding, CPU-relieving sleep()-s ( a sketch of this timeout approach is shown below ).
Q.E.D.
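A minimal sketch of option b ) using NetMQ's own timeout-capable receive; the 5-second silence window is an arbitrary choice, not a value from the question:
using (var subscriber = new SubscriberSocket())
{
    subscriber.Connect("tcp://127.0.0.1:4000");
    subscriber.Subscribe("A");

    var silence = TimeSpan.FromSeconds(5);   // assumed "publisher is gone" window
    while (true)
    {
        // TryReceiveFrameString returns false if nothing arrived within the timeout
        if (subscriber.TryReceiveFrameString(silence, out string msg))
        {
            Console.WriteLine(msg);
        }
        else
        {
            break;   // no message for 5 seconds: assume the publisher stopped
        }
    }
}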
I use Apache NMS (in c#) to receive messages from ActiveMQ.
I want to be able to acknowledge every message I received, or roll back a message in case I had an error.
I solved the first part by using the CreateSession(AcknowledgementMode.IndividualAcknowledge), and then for every received message I use message.Acknowledge().
The problem is that in this mode there is no rollback option. If the message is not acknowledged, I can never receive it again for another attempt. It can only be sent to another consumer, but there is no other consumer, so it just stays stuck in the queue.
So I tried to use AcknowledgementMode.Transactional instead, but that has another problem: I can only call session.Commit() or session.Rollback(), and there is no way to know which specific message I commit or roll back.
What is the correct way to do this?
Stay with INDIVIDUAL_ACKNOWLEDGE and then try session.Recover() and session.Close(). Both of those should signal to the broker that the messages are not going to be acknowledged.
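A minimal sketch of that idea, assuming the Apache.NMS API, an already-created connection and queueName, and a hypothetical Process() handler:
using (ISession session = connection.CreateSession(AcknowledgementMode.IndividualAcknowledge))
using (IMessageConsumer consumer = session.CreateConsumer(session.GetQueue(queueName)))
{
    IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
    if (message != null)
    {
        try
        {
            Process(message);        // hypothetical business logic
            message.Acknowledge();   // acknowledge only this message
        }
        catch (Exception)
        {
            session.Recover();       // un-acked messages become eligible for redelivery
        }
    }
}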
My solution to this was to throw an exception if (for any reason (exception from db savechanges event for example)) I did not want to acknowledge the message with message.Acknowledge().
When you throw an exception inside your extended method of the IMessageConsumer Listener, the message will be sent to your consumer again about 5 times (it will then be moved to the default DLQ queue for investigation).
However, you can change this using the RedeliveryPolicy on the connection object.
Example of RedeliveryPolicy:
var redeliveryPolicy = new RedeliveryPolicy
{
    InitialRedeliveryDelay = 5000,   // every 5 seconds
    MaximumRedeliveries = 10,        // the message will be redelivered 10 times
    UseCollisionAvoidance = true,    // randomize the 5-second delay
    CollisionAvoidancePercent = 50,  // used along with the option above
    UseExponentialBackOff = false
};
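The policy has to be attached before messages are consumed. A hedged sketch, assuming the Apache.NMS.ActiveMQ ConnectionFactory exposes a RedeliveryPolicy property (the broker URI is hypothetical):
var factory = new Apache.NMS.ActiveMQ.ConnectionFactory("tcp://localhost:61616")
{
    RedeliveryPolicy = redeliveryPolicy   // assumed property; may also be set on the Connection itself
};
using (IConnection connection = factory.CreateConnection())
{
    connection.Start();
    // create sessions and consumers as usual
}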
If the message fails again (after the 10 redeliveries), it will be moved to a default DLQ queue (this queue is created automatically).
You can use this queue to investigate, with another consumer, the messages that were never acknowledged.
I think that the question says it all really.
For a bit of background, I have a subscriber that I am trying to write some tests for. In order to do that, I spin up a publisher from within the test specifying tcp://localhost:[port] as the address. When a message is sent, the subscriber doesn't receive it. Here is some sample code to demonstrate:
string address = "tcp://localhost:1026";
// string address = "inproc://localhost1026";
var pubSocket = new PublisherSocket();
pubSocket.Bind(address);
var subSocket = new SubscriberSocket();
subSocket.Connect(address);
subSocket.SubscribeToAnyTopic();
pubSocket.SendFrame("Hello world!", false);
Console.WriteLine(subSocket.ReceiveFrameString()); // <-- with the tcp transport this waits here forever
subSocket.Dispose();
pubSocket.Dispose();
If I change the protocol to inproc:// then all is well. I don't want to do this in my tests, however, because I also want to test a monitor socket and this doesn't raise events for inproc:// connections (as far as I can see).
Note that I am using NetMQ from C# code (running under .NET Framework 4.6.2).
Can ZeroMQ (NetMQ) TCP transport be used between publisher and subscriber in the same process ?
Absolutely, there is no restriction preventing one from doing this, yet . . .
Your observation is related to the latency of the hidden processing, that takes place inside the main engine ... inside the ZeroMQ Context() instance.
Things do not happen in zero-time.
( Well, one may opt to read up on this right after Pieter HINTJENS' "Code Connected, Volume 1" on the ZeroMQ Zen-of-Zero. Both make sense, a lot of sense, not only here ).
So, while the inproc:// transport-class has (almost) zero resources, being a pure, private, "in-process" memory-mapped abstraction ( sure, except perhaps a few sub-[ns]-"devices" like a semaphore/lock ), so that the ZeroMQ infrastructure gets up and running in an "immediate" fashion, the tcp:// transport does not have this comfort. It first has to set up all the transport-class-specific contracts with the O/S and device-driver(s), instantiate the transport-class-specific ISO/OSI-{ L0, L1, L2, L3+ }-processing policies, instantiate the respective data-pumping code into the Context()'s RTO-state, and allocate and map memory buffer(s) for serving these purposes. That is quite a lot of work to be done before the PUB-side gets into the RTO-state, where it ( under newer versions, ~ API 4.+ ) also has the duty to receive and process the subscription service-telemetry, as it bears the concentrated responsibility for the per-SUB-client TOPIC-filter-list processing.
This is why it resulted in a hanging .recv( ..., ZMQ_BLOCK ), buried inside the NetMQ-wrapped abstraction of subSocket.ReceiveFrameString().
To test it, just make the modified test:
// --------------------------------------------------- // DEMO SKETCH
string rF = null;
int count = 0;                                     // count how many loops it took
while (true)
{
    pubSocket.SendFrame("Hello world!", false);    // keep sending ...
    count++;
    if (subSocket.TryReceiveFrameString(out rF))   // non-blocking mode ~ .recv( ZMQ_NOBLOCK );
    {                                              // one may also use Poll() to just sniff
        break;                                     // BREAK, AS .recv() GOT A MESSAGE
    }                                              // else LOOP NEXT, AS .recv() GOT NOTHING YET
}
// ----------------------------------------------------------------------
Console.WriteLine(rF + " ( after " + count + " send-loops )");
One may put more effort into experimenting here to see the key roles of the .connect()-related overheads, and also the need not to miss the SUB-side-signalled telemetry that sets the subscription(s): it has to be received and re-processed on the PUB-side before any message ever gets dispatched towards the intended, otherwise just "forever"-waiting, SUB.
A "fat"-enough .sleep( someGuestimateTIME ) proposed already by #HesamFaridmehr, after the SUB-side has both .connect()-ed plus it's .setsockopt( ZMQ_SUBSCRIBE ) ( that has to get first delivered and also processed in a due fashion on the PUB-side ( to configure the TOPIC-filter list-processor processor properly ) ) all that well-enough before the first PUB.send() will mask the root-cause by making it "indirectly" blocked and the code-execution flow stops, instead of making the solution smart-enough - using a non-blocking form of Poll() for example - for a professional distributed-system design best-practices, where one can indeed but obey the assembly hackers beloved first line present macro #ASSUME NOTHING;.
Because the PUB/SUB pattern is like radio: the publisher won't wait until a subscriber connects, it simply skips sending when there is no subscriber. You can test that just by adding Thread.Sleep(1000); after the subSocket.SubscribeToAnyTopic(); line, and you will see that you receive the message.
With inproc, the time needed for binding is less than with tcp, which is why you are receiving the message there.
Also, with inproc the publisher must be up before the subscriber connects.
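A sketch of that test, based on the question's own code (the 1000 ms value is the answer's guess, not a tuned number; System.Threading is assumed to be imported):
var pubSocket = new PublisherSocket();
pubSocket.Bind("tcp://localhost:1026");
var subSocket = new SubscriberSocket();
subSocket.Connect("tcp://localhost:1026");
subSocket.SubscribeToAnyTopic();
Thread.Sleep(1000);                                  // give the TCP connect + subscription time to reach the PUB side
pubSocket.SendFrame("Hello world!", false);
Console.WriteLine(subSocket.ReceiveFrameString());   // now arrives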
I've been asked to write a method that will allow a caller to send a command string to a hardware device via the serial port. After sending the command the method must wait for a response from the device, which it then returns to the caller.
To complicate things the hardware device periodically sends unsolicited packets of data to the PC (data that the app must store for reporting). So when I send a serial command, I may receive one or more data packets before receiving the command response.
Other considerations: there may be multiple clients sending serial commands potentially at the same time as this method will form the basis of a WCF service. Also, the method needs to be synchronous (for reasons I won't go into here), so that rules out using a callback to return the response to the client.
Regarding the "multiple clients", I was planning to use a BlockingCollection<> to queue the incoming commands, with a background thread that executes the tasks one at a time, thus avoiding serial port contention.
However I'm not sure how to deal with the incoming serial data. My initial thoughts were to have another background thread that continually reads the serial port, storing data analysis packets, but also looking for command responses. When one is received the thread would somehow return the response data to the method that originally sent the serial command (which has been waiting ever since doing so - remember I have a stipulation that the method is synchronous).
It's this last bit I'm unsure of - how can I get my method to wait until the background thread has received the command's response? And how can I pass the response from the background thread to my waiting method, so it can return it to the caller? I'm new to threading so am I going about this the wrong way?
Thanks in advance
Andy
First of all: When you use the SerialPort class that comes with the framework, the data received event is asynchronous already. When you send something, data is coming in asynchronously.
What I'd try is: queue all requests that need to wait for an answer. In the overall receive handler, check whether the incoming data is the answer for one of the requests. If so, store the reply along with the request information (create some kind of state class for that). All other incoming data is handled normally.
So, how to make the requests wait for an answer? The call that is to send the command and return the reply would create the state object, queue it and also monitor the object to see whether an answer was received. If an answer was received, the call returns the result.
A possible outline could be:
string SendAndWait(string command)
{
    StateObject state = new StateObject(command);
    state.ReplyReceived = new ManualResetEvent(false);
    try
    {
        // Queue the command; the SerialPortHandler sets state.Reply and
        // signals state.ReplyReceived once the matching reply has arrived.
        SerialPortHandler.Instance.SendRequest(command, state);
        state.ReplyReceived.WaitOne();
    }
    finally
    {
        state.ReplyReceived.Close();
    }
    return state.Reply;
}
What's SerialPortHandler? I'd make this a singleton class which contains an Instance property to access the singleton instance. This class does all the serial port stuff. It should also contain an event that is raised when "out of band" information comes in (data that is not a reply to a command).
It also contains the SendRequest method which sends the command to the serial device, stores the state object in an internal list, waits for the command's reply to come in and updates the state object with the reply.
The state object contains a wait handle called ReplyReceived which is set by the SerialPortHandler after it has changed the state object's Reply property. That way you don't need a loop and Thread.Sleep. Also, instead of calling WaitOne() you could call WaitOne(timeout) with timeout being a number of milliseconds to wait for the reply to come in. This way you could implement some kind of timeout-feature.
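The state object itself is not spelled out above; a minimal sketch of what it might hold (all names are assumptions) could be:
class StateObject
{
    public StateObject(string command) { Command = command; }

    public string Command { get; }                          // the command this state belongs to
    public string Reply { get; set; }                       // filled in by SerialPortHandler
    public ManualResetEvent ReplyReceived { get; set; }     // signalled once Reply has been set
}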
This is how it could look in SerialPortHandler:
void HandlePossibleCommandReply(string reply)
{
StateObject state = FindStateObjectForReply(reply);
if (state != null)
{
state.Reply = reply;
state.ReplyReceived.Set();
m_internalStateList.Remove(state);
}
}
Please note: This is what I'd try to start with. I'm sure this can be optimized a lot, but as you can see there's not much "multithreading" involved here; only the SendAndWait method should be called in a way that lets multiple clients issue commands while another client is still waiting for its response.
EDIT
Another note: You're saying that the method should form the basis for a WCF service. This makes things easier because, if you configure the service correctly, an instance of the service class will be created for every call to the service, so the SendAndWait method would "live" in its own instance of the service and doesn't even need to be re-entrant at all. In that case, you just need to make sure that the SerialPortHandler is always active (=> is created and running independently of the actual WCF service), no matter whether there is currently an instance of your service class at all.
EDIT 2
I changed my sample code to not loop and sleep as suggested in the comments.
If you really want to block until the background thread has received your command response, you could look into having the background thread lock an object when you enqueue your command and return that to you. Next, you wait for the lock and continue:
// in main code:
var locker = mySerialManager.Enqueue(command);
lock (locker)
{
// this will only be executed, when mySerialManager unlocks the lock
}
// in SerialManager
public object Enqueue(object command)
{
var locker = new Object();
Monitor.Enter(locker);
// NOTE: Monitor.Exit() gets called when command result
// arrives on serial port
EnqueueCommand(command, locker);
return locker;
}
A couple things. You need to be able to tie up serial responses to the commands that requested them. I assume that there's some index or sequence number that goes out with the command and comes back in the response?
Given that, you should be OK. You need some sort of 'serialAPU' class to represent the request and response. I don't know what these are, maybe just strings, I don't know. The class should have an autoResetEvent as well. Anyway, in your 'DoSerialProtocol()' function, create a serialAPU, load it up with request data, queue it off to the serial thread and wait on the autoResetEvent. When the thread gets the serialAPU, it can store an index/sequence number in the serialAPU, store the serialAPU in a vector and send off the request.
When data comes in, do your protocol stuff and, if the data is a valid response, get the index/sequence from the data and look up the matching serialAPU in the vector. Remove the matching serialAPU from the vector, load it up with the response data and signal the autoResetEvent. The thread that called 'DoSerialProtocol()' originally will then run on and can handle the response data.
There are lots of 'wiggles' of course. Timeouts are one. I would be tempted to have a state enum in the serialAPU, protected by a CriticalSection or atomic compare-and-swap, initialized to 'Esubmitted'. If the originating thread times out its wait on the autoResetEvent, it tries to set the state enum in its serialAPU to 'EtimedOut'. If it succeeds, fine, it returns an error to the caller. Similarly, in the serial thread, if it finds a serialAPU whose state is EtimedOut, it just removes it from the container. If it finds the serialAPU that matches the response data, it tries to change the state to 'EdataRx' and, if it succeeds, fires the autoResetEvent.
Another is the annoying OOB data. If that comes in, create a serialAPU, load in the OOB data, set the state to 'EOOBdata' and call some 'OOBevent' with it.
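A hedged sketch of what such a serialAPU might look like; all names are assumptions, System.Threading is assumed imported, and the atomic compare-and-swap is done with Interlocked.CompareExchange:
enum ApuState { Submitted, TimedOut, DataRx, OobData }

class SerialApu
{
    public int Sequence;          // ties a response to its request
    public string Request;
    public string Response;
    public readonly AutoResetEvent Done = new AutoResetEvent(false);

    private int state = (int)ApuState.Submitted;

    // Atomic state transition; returns true only if 'from' was still the current state.
    public bool TryTransition(ApuState from, ApuState to) =>
        Interlocked.CompareExchange(ref state, (int)to, (int)from) == (int)from;
}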
I would advise you to look at the BackgroundWorker class.
There is an event in this class (RunWorkerCompleted) which is fired when the worker has finished its job.
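For illustration, a minimal sketch of that suggestion; SendCommandAndWait and the command string are hypothetical:
var worker = new System.ComponentModel.BackgroundWorker();
worker.DoWork += (s, e) => e.Result = SendCommandAndWait((string)e.Argument);   // hypothetical helper doing the serial I/O
worker.RunWorkerCompleted += (s, e) => Console.WriteLine("Reply: " + e.Result); // fired when the work is done
worker.RunWorkerAsync("STATUS?");                                               // hypothetical command string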
I have a group of "Packets", which are custom classes that are converted to byte[] and then sent to the client. When a client joins, they are updated with the previous "Catch Up Packets" that were sent before the user joined. Think of it as a chat room where you are updated with the previous conversations.
My issue is on the client end: we do not receive all the information, and sometimes nothing at all.
Below is pseudo C# code for what I see; the code looks like this.
lock (CatchUpQueue.SyncRoot)
{
    foreach (Packet packet in CatchUpQueue)
    {
        // data is the packet converted to byte[]
        // Oddly, if I put Console.WriteLine("I am Sending Packets"); here it works fine
        // for up to 2 client sockets, otherwise it fails again.
        clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    }
}
Is this some sort of throttling issue, or an issue with sending too many times in a row, i.e. if there are 4 packets in the queue then BeginSend is called 4 times?
I have searched for a similar topic and cannot find one. Thank you for your help.
Edit: I would also like to point out that sending between clients continues normally for any sends after the client connects. But for some reason not all of the packets inside this loop are sent.
I would suspect that you are flooding the TCP port with packets, and probably overflowing its send buffer, at which point it will probably return errors rather than sending the data.
The idea of Async I/O is not to allow you to send an infinite amount of data packets simultaneously, but to allow your foreground thread to continue processing while a linear sequence of one or more I/O operations occurs in the background.
As the TCP stream is a serial stream, try respecting that and send each packet in turn. That is, after BeginSend, use the Async callback to detect when the Send has completed before you send again. You are effectively doing this by adding a Sleep, but this is not a very good solution (you will either be sending packets more slowly than possible, or you may not sleep for long enough and packets will be lost again)
Or, if you don't need the I/O to run in the background, use your simple foreach loop, but use a synchronous rather than Async send.
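A hedged sketch of that synchronous alternative, reusing the question's own loop; packet.ToBytes() is an assumed serialization helper:
lock (CatchUpQueue.SyncRoot)
{
    foreach (Packet packet in CatchUpQueue)
    {
        byte[] data = packet.ToBytes();      // assumed helper producing the byte[] form
        int sent = 0;
        while (sent < data.Length)           // Send may write fewer bytes than requested
        {
            sent += clientSocket.Send(data, sent, data.Length - sent, SocketFlags.None);
        }
    }
}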
Okay,
apparently a fix, which still has me confused, is to Thread.Sleep for a number of milliseconds tied to the number of packets I am sending.
So...
for (int i = 0; i < PacketQueue.Count; i++)
{
    Packet packet = PacketQueue[i];
    // data is the packet converted to byte[]
    clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    Thread.Sleep(PacketQueue.Count);   // sleep 1 ms per queued packet
}
I assume that for some reason the loop stops some of the calls from happening... Well I will continue to work with this and try to find the real answer.
My original question from a while ago is MSMQ Slow Queue Reading, however I have advanced from that and now think I know the problem a bit more clearer.
My code (well actually part of an open source library I am using) looks like this:
queue.Receive(TimeSpan.FromSeconds(10), MessageQueueTransactionType.Automatic);
Which is using the Messaging.MessageQueue.Receive function and queue is a MessageQueue. The problem is as follows.
The above line of code will be called with the specified timeout (10 seconds). The Receive(...) function is a blocking function, and is supposed to block until a message arrives in the queue at which time it will return. If no message is received before the timeout is hit, it will return at the timeout. If a message is in the queue when the function is called, it will return that message immediately.
However, what is happening is the Receive(...) function is being called, seeing that there is no message in the queue, and hence waiting for a new message to come in. When a new message comes in (before the timeout), it isn't detecting this new message and continues waiting. The timeout is eventually hit, at which point the code continues and calls Receive(...) again, where it picks up the message and processes it.
Now, this problem only occurs after a number of days/weeks. I can make it work normally again by deleting & recreating the queue. It happens on different computers, and different queues. So it seems like something is building up, until some point when it breaks the triggering/notification ability that the Receive(...) function uses.
I've checked a lot of different things, and everything seems normal and no different from a queue that is working correctly. There is plenty of disk space (13 GB free) and RAM (about 350 MB free out of 1 GB from what I can tell). I have checked registry entries, which all appear the same as for other queues, and the performance monitor doesn't show anything out of the ordinary. I have also run the TMQ tool and can't see anything noticeably wrong from that.
I am using Windows XP on all the machines and they all have service pack 3 installed. I am not sending a large amount of messages to the queues, at most it would be 1 every 2 seconds but generally a lot less frequent than that. The messages are only small too and nowhere near the 4MB limit.
The only thing I have just noticed is that the p0000001.mq and r0000067.mq files in C:\WINDOWS\system32\msmq\storage are both 4,096 KB, however they are that size on other computers as well, which are not currently experiencing the problem. The problem does not affect every queue on a computer at once: I can recreate one problem queue on a computer while the other queues there still experience the problem.
I am not very experienced with MSMQ so if you post possible things to check can you please explain how to check them or where I can find more details on what you are talking about.
Currently the situation is:
ComputerA - 4 queues normal
ComputerB - 2 queues experiencing problem, 1 queue normal
ComputerC - 2 queues experiencing problem
ComputerD - 1 queue normal
ComputerE - 2 queues normal
So I have a large number of computers/queues to compare and test against.
Any particular reason you aren't using an event handler to listen to the queues? The System.Messaging library allows you to attach a handler to a queue instead of, if I understand what you are doing correctly, looping Receive every 10 seconds. Try something like this:
class MSMQListener
{
public void StartListening(string queuePath)
{
MessageQueue msQueue = new MessageQueue(queuePath);
msQueue.ReceiveCompleted += QueueMessageReceived;
msQueue.BeginReceive();
}
private void QueueMessageReceived(object source, ReceiveCompletedEventArgs args)
{
MessageQueue msQueue = (MessageQueue)source;
//once a message is received, stop receiving
Message msMessage = null;
msMessage = msQueue.EndReceive(args.AsyncResult);
//do something with the message
//begin receiving again
msQueue.BeginReceive();
}
}
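Usage could look like this (the queue path is hypothetical):
var listener = new MSMQListener();
listener.StartListening(@".\private$\myQueue");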
We are also using NServiceBus and had a similar problem inside our network.
Basically, MSMQ is using UDP with two-phase commits. After a message is received, it has to be acknowledged. Until it is acknowledged, it cannot be received on the client side as the receive transaction hasn't been finalized.
This was caused by different things in different times for us:
once, this was due to the Distributed Transaction Coordinator being unable to communicate between machines because of a firewall misconfiguration
another time, we were using cloned virtual machines without sysprep, which made the internal MSMQ ids non-unique, so a message would be received on one machine and acknowledged on another. Eventually MSMQ figures things out, but it takes quite a while.
Try this overloaded function:
public Message Receive( TimeSpan timeout, Cursor cursor )
To get a cursor for a MessageQueue, call the CreateCursor method for that queue.
A Cursor is used with such methods as Peek(TimeSpan, Cursor, PeekAction) and Receive(TimeSpan, Cursor) when you need to read messages that are not at the front of the queue. This includes reading messages synchronously or asynchronously. Cursors do not need to be used to read only the first message in a queue.
When reading messages within a transaction, Message Queuing does not roll back cursor movement if the transaction is aborted. For example, suppose there is a queue with two messages, A1 and A2. If you remove message A1 while in a transaction, Message Queuing moves the cursor to message A2. However, if the transaction is aborted for any reason, message A1 is inserted back into the queue but the cursor remains pointing at message A2.
To close the cursor, call Close.
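A hedged sketch of using the cursor-based overload; the queue path is hypothetical and the System.Messaging types are assumed to be imported:
using (var queue = new System.Messaging.MessageQueue(@".\private$\myQueue"))
using (Cursor cursor = queue.CreateCursor())
{
    try
    {
        Message m = queue.Receive(TimeSpan.FromSeconds(10), cursor);
        // process m ...
    }
    catch (MessageQueueException e) when (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
    {
        // no message arrived within the timeout
    }
}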
If you want something completely synchronous and without events, you can try this method:
public object Receive(string path, int millisecondsTimeout)
{
    var mq = new System.Messaging.MessageQueue(path);
    var asyncResult = mq.BeginReceive();
    var handles = new System.Threading.WaitHandle[] { asyncResult.AsyncWaitHandle };
    var index = System.Threading.WaitHandle.WaitAny(handles, millisecondsTimeout);
    if (index == System.Threading.WaitHandle.WaitTimeout) // 258: nothing arrived in time
    {
        mq.Close();
        return null;
    }
    var result = mq.EndReceive(asyncResult);
    mq.Close();
    return result;
}
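A call site might then look like this (the queue path is hypothetical; null means the 5-second timeout expired):
var message = Receive(@".\private$\myQueue", 5000);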