The consumer code below (not too far removed from the worker sample) throws System.IO.EndOfStreamException ("SharedQueue closed") after a couple of iterations when a single message is nacked.
public void Consume ()
{
    using (var connection = connectionFactory.CreateConnection ()) {
        using (var channel = connection.CreateModel ()) {
            channel.QueueDeclare (queueName, true, false, false, null);
            // 0 = "Don't send me a new message until I've finished",
            // 1 = "Send me one message at a time"
            channel.BasicQos (0, 1, false);
            var consumer = new QueueingBasicConsumer (channel);
            channel.BasicConsume (queueName, false, consumer);
            Console.WriteLine (" [*] Waiting for messages. " +
                               "To exit press CTRL+C");
            while (true) {
                BasicDeliverEventArgs ea;
                try {
                    // block until a message can be dequeued
                    ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue ();
                    var body = ea.Body;
                    Console.WriteLine (" [x] Received, executing");
                    T thing = messageSerializer.Deserialize<T> (body);
                    try {
                        executor.DynamicInvoke (thing);
                    } catch {
                        channel.BasicNack (ea.DeliveryTag, false, true);
                    }
                    channel.BasicAck (ea.DeliveryTag, false);
                } catch (OperationCanceledException) {
                    logger.Error ("Bugger");
                }
            }
        }
    }
}
I've read a few google results, but AFAIK this typically happens when the stream has been closed, e.g. due to manually acking a message when auto-ack is enabled?
Thanks in advance.
D'oh! I guess I'll leave this question in place in the rare event anybody else is as stupid as I am. Note in the code above that when executor.DynamicInvoke throws, the BasicNack will occur and execution will then continue on to the Ack operation, so the same delivery tag is nacked and then acked. This is the (obvious) cause of the issue. I need spanking.
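For anyone who finds this later, a minimal fix is to make the ack and nack paths mutually exclusive, e.g. (a sketch based on the loop above):

    try {
        executor.DynamicInvoke (thing);
        channel.BasicAck (ea.DeliveryTag, false);   // ack only on success
    } catch {
        // requeue on failure, and do NOT fall through to the ack:
        // acking an already-nacked delivery tag is a protocol error that
        // closes the channel, which is what caused the EndOfStreamException
        channel.BasicNack (ea.DeliveryTag, false, true);
    }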
This Exception is thrown when the channel is closed.
It's happening to me too, but only when I am debugging my code, and only after a couple of messages.
So my guess is that the channel is timing out after a while (perhaps a missed heartbeat while the process sits on a breakpoint).
Check your RabbitMQ configuration.
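If it is a heartbeat timeout, it can be tuned on the connection factory. A minimal sketch (note: RequestedHeartbeat is an integer number of seconds on older RabbitMQ.Client versions and a TimeSpan on newer ones):

    var factory = new ConnectionFactory();
    factory.Uri = "amqp://guest:guest@localhost:5672/";
    // Older clients: seconds as an integer. Newer clients:
    // factory.RequestedHeartbeat = TimeSpan.FromSeconds(60);
    factory.RequestedHeartbeat = 60;

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        // consume as usual; the client and broker now exchange heartbeats
    }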
I have a TCP/IP server that is supposed to allow a connection to remain open as messages are sent across it. However, it seems that some clients open a new connection for each message, which causes the CPU usage to max out. I tried to fix this by adding a time-out but still seem to have the problem occasionally. I suspect that my solution was not the best choice, but I'm not sure what would be.
Below is my basic code with logging, error handling and processing removed.
private void StartListening()
{
    try
    {
        _tcpListener = new TcpListener( IPAddress.Any, _settings.Port );
        _tcpListener.Start();
        while (DeviceState == State.Running)
        {
            var incomingConnection = _tcpListener.AcceptTcpClient();
            var processThread = new Thread( ReceiveMessage );
            processThread.Start( incomingConnection );
        }
    }
    catch (Exception e)
    {
        // Unfortunately, a SocketException is expected when stopping AcceptTcpClient
        if (DeviceState == State.Running) { throw; }
    }
    finally { _tcpListener?.Stop(); }
}
I believe the actual issue is that multiple process threads are being created, but are not being closed. Below is the code for ReceiveMessage.
private void ReceiveMessage( object IncomingConnection )
{
    var buffer = new byte[_settings.BufferSize];
    int bytesReceived = 0;
    var messageData = String.Empty;
    bool isConnected = true;
    using (TcpClient connection = (TcpClient)IncomingConnection)
    using (NetworkStream netStream = connection.GetStream())
    {
        netStream.ReadTimeout = 1000;
        try
        {
            while (DeviceState == State.Running && isConnected)
            {
                // An IOException will be thrown and captured if no message comes in each second. This is the
                // only way to send a signal to close the connection when shutting down. The exception is caught,
                // and the connection is checked to confirm that it is still open. If it is, and the Router has
                // not been shut down, the server will continue listening.
                try { bytesReceived = netStream.Read( buffer, 0, buffer.Length ); }
                catch (IOException e)
                {
                    if (e.InnerException is SocketException se && se.SocketErrorCode == SocketError.TimedOut)
                    {
                        bytesReceived = 0;
                        if (GlobalSettings.IsLeaveConnectionOpen)
                            isConnected = GetConnectionState(connection);
                        else
                            isConnected = false;
                    }
                    else
                        throw;
                }
                if (bytesReceived > 0)
                {
                    messageData += Encoding.UTF8.GetString( buffer, 0, bytesReceived );
                    string ack = ProcessMessage( messageData );
                    var writeBuffer = Encoding.UTF8.GetBytes( ack );
                    if (netStream.CanWrite) { netStream.Write( writeBuffer, 0, writeBuffer.Length ); }
                    messageData = String.Empty;
                }
            }
        }
        catch (Exception e) { ... }
        finally { FileLogger.Log( "Closing the message stream.", Verbose.Debug, DeviceName ); }
    }
}
For most clients the code is running correctly, but there are a few that seem to create a new connection for each message. I suspect that the issue lies in how I handle the IOException. For the systems that fail, the code does not seem to reach the finally block until 30 seconds after the first message comes in, and each message creates a new ReceiveMessage thread. So the logs will show messages coming in, and 30 seconds in they will start to show multiple messages about the message stream being closed.
Below is how I check the connection, in case this is important.
public static bool GetConnectionState( TcpClient tcpClient )
{
    var state = IPGlobalProperties.GetIPGlobalProperties()
        .GetActiveTcpConnections()
        .FirstOrDefault( x => x.LocalEndPoint.Equals( tcpClient.Client.LocalEndPoint )
                           && x.RemoteEndPoint.Equals( tcpClient.Client.RemoteEndPoint ) );
    return state != null ? state.State == TcpState.Established : false;
}
You're reinventing the wheel (in a worse way) at quite a few levels:
1. You're doing pseudo-blocking sockets. That, combined with creating a whole new thread for every connection, can get expensive fast. Instead you should create a pure blocking socket with no read timeout (-1) and just listen on it. Unlike UDP, TCP will detect the connection being terminated by the client (a blocking read returns 0) without you needing to poll for it (see the sketch after this list).
2. And the reason you seem to be doing the above is that you're reinventing the standard TCP keep-alive mechanism. It's already written and works efficiently; simply use it. And as a bonus, the standard keep-alive mechanism is on the client side, not the server side, so even less processing for you.
Edit: 3. You really need to cache the threads you so painstakingly created. The system thread pool won't suffice if you have that many long-term connections with a single socket communication per thread, but you can build your own expandable thread pool. You can also share multiple sockets on one thread using select, but that's going to change your logic quite a bit.
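To illustrate points 1 and 2, here is a minimal sketch (my own illustration, not the poster's code; HandleClient is a made-up handler):

    using System.Net.Sockets;
    using System.Text;

    static void HandleClient(TcpClient client)
    {
        // Point 2: let the OS keep-alive mechanism probe the peer,
        // instead of polling the connection table every second.
        client.Client.SetSocketOption(SocketOptionLevel.Socket,
                                      SocketOptionName.KeepAlive, true);
        using (client)
        using (NetworkStream stream = client.GetStream()) // no ReadTimeout set: Read blocks
        {
            var buffer = new byte[4096];
            int n;
            // Point 1: a pure blocking read. Read returns 0 when the client
            // closes the connection gracefully, and throws an IOException if
            // the connection dies (which keep-alive will eventually detect).
            while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                string message = Encoding.UTF8.GetString(buffer, 0, n);
                // ... process the message and write the response ...
            }
        }
    }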
I am using Confluent.Kafka .NET client version 1.3.0. I am following the docs:
var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "server1, server2",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnableAutoCommit = true,
    EnableAutoOffsetStore = false,
    GroupId = this.groupId,
    SecurityProtocol = SecurityProtocol.SaslPlaintext,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = this.kafkaUsername,
    SaslPassword = this.kafkaPassword,
};

using (var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build())
{
    var cancellationToken = new CancellationTokenSource();
    Console.CancelKeyPress += (_, e) =>
    {
        e.Cancel = true;
        cancellationToken.Cancel();
    };
    consumer.Subscribe("my-topic");
    while (true)
    {
        try
        {
            // pass the token so Ctrl+C can actually cancel the blocking Consume
            var consumerResult = consumer.Consume(cancellationToken.Token);
            // process message
            consumer.StoreOffset(consumerResult);
        }
        catch (ConsumeException e)
        {
            // log
        }
        catch (KafkaException e)
        {
            // log
        }
        catch (OperationCanceledException e)
        {
            // log
        }
    }
}
The problem is that even if I comment out the line consumer.StoreOffset(consumerResult);, I still get the next unconsumed message the next time I call Consume, i.e. the offset keeps advancing, which doesn't seem to match what the documentation claims, namely at-least-once delivery.
Even if I set EnableAutoCommit = false, remove EnableAutoOffsetStore = false from the config, and replace consumer.StoreOffset(consumerResult) with consumer.Commit(), I still see the same behavior: even if I comment out the Commit, I keep getting the next unconsumed messages.
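For reference, the manual-commit variant I describe looks like this (a sketch; same consumer setup as above):

    var consumerConfig = new ConsumerConfig
    {
        // ... same settings as above, except:
        EnableAutoCommit = false,
        // and no EnableAutoOffsetStore
    };

    // in the consume loop:
    var consumerResult = consumer.Consume(cancellationToken.Token);
    // process message
    consumer.Commit(consumerResult); // synchronously commits this message's offset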
I feel like I am missing something fundamental here, but can't figure what. Any help is appreciated!
You may want to have retry logic that attempts to process each of your messages a fixed number of times, say 5. If processing doesn't succeed within those 5 retries, you may want to add the message to another topic for handling all failed messages, which takes precedence over your actual topic. Or you may want to add the failed message back to the same topic so that it will be picked up later, once all the other messages have been consumed.
If the processing of a message is successful within those 5 retries, you can skip to the next message in the queue. A rough sketch of this idea follows.
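Here is a sketch of the first option (retry, then divert to a failure topic; the topic name, the Process helper, and the producer setup are all illustrative, not from the question):

    const int MaxRetries = 5;

    using (var producer = new ProducerBuilder<Null, string>(
               new ProducerConfig { BootstrapServers = "server1" }).Build())
    {
        var result = consumer.Consume();
        var handled = false;
        for (var attempt = 1; attempt <= MaxRetries && !handled; attempt++)
        {
            try
            {
                Process(result.Message.Value); // your processing logic
                handled = true;
            }
            catch (Exception) when (attempt == MaxRetries)
            {
                // Out of retries: park the message on a failure topic instead.
                producer.Produce("my-topic-failures",
                    new Message<Null, string> { Value = result.Message.Value });
                producer.Flush();
                handled = true;
            }
            catch (Exception)
            {
                // log and fall through to the next retry
            }
        }
        consumer.StoreOffset(result);
    }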
Sorry, I can't add a comment yet.
The Kafka consumer consumes messages in batches, so you may still be iterating through a batch pre-fetched by the background thread. The consumer's in-memory position advances on every Consume call regardless of whether offsets are committed; the committed offset only comes into play after a restart or rebalance.
You can check whether your consumer really commits offsets or not using the kafka-consumer-groups.sh utility:
kafka-consumer-groups.sh --bootstrap-server kafka-host:9092 --group consumer_group --describe
I had the same situation, and here is my solution:
Set a maximum number of retries for each operation.
For consuming, just retry.
For saving, re-assign the current offset, and then retry.
Here is the code:
var saveRetries = 0;
var consumeRetries = 0;
ConsumeResult<string, string> consumeResult;
while (true)
{
    try
    {
        consumeResult = consumer.Consume();
        consumeRetries = 0;
    }
    catch (ConsumeException e)
    {
        // Log and retry the consume, up to {MaxConsumeRetries} times
        if (consumeRetries++ >= MaxConsumeRetries)
        {
            throw new OperationCanceledException($"Too many consume retries ({MaxConsumeRetries}). Please check configuration and run the service again.");
        }
        continue;
    }
    catch (OperationCanceledException oe)
    {
        // Log
        consumer.Close();
        break;
    }
    try
    {
        SaveResult(consumeResult);
        saveRetries = 0;
    }
    catch (ArgumentException ae)
    {
        // Log and retry the save, up to {MaxSaveRetries} times
        if (saveRetries++ < MaxSaveRetries)
        {
            // Assign the same offset, and try again.
            consumer.Assign(consumeResult.TopicPartitionOffset);
            continue;
        }
    }
    try
    {
        consumer.StoreOffset(consumeResult);
    }
    catch (KafkaException ke)
    {
        // Log and let it continue
    }
}
Overview of Problem:
I need to connect to an IRC server. Once connected, the program will send a message to the channel, and a response will come back over multiple lines. I need to read these lines and store them in a variable for later use. A special character (]) at the end of the message defines the end of the multi-line response. Once we have received this character, the IRC session should disconnect and processing should continue.
Situation:
I am using the Smartirc4net library. Calling irc.Disconnect() takes about 40 seconds to disconnect the session. Once we've received the ] character, the session should be disconnected, Listen() should stop blocking, and the rest of the program should continue to run.
Research:
I have found this: smartirc4net listens forever, can't exit thread, and I think it might be the same issue; however, I am unsure of what I need to do to resolve the problem.
Code:
public class IrcCommunicator
{
    public IrcClient irc = new IrcClient();
    string data;
    public string Data { get { return data; } }

    // this method we will use to analyse queries (also known as private messages)
    public void OnQueryMessage(object sender, IrcEventArgs e)
    {
        data += e.Data.Message;
        if (e.Data.Message.Contains("]"))
        {
            irc.Disconnect(); // THIS TAKES 40 SECONDS!!!
        }
    }

    public void RunCommand()
    {
        irc.OnQueryMessage += new IrcEventHandler(OnQueryMessage);
        string[] serverlist;
        serverlist = new string[] { "127.0.0.1" };
        int port = 6667;
        string channel = "#test";
        try
        {
            irc.Connect(serverlist, port);
        }
        catch (ConnectionException e)
        {
            // something went wrong, the reason will be shown
            System.Console.WriteLine("couldn't connect! Reason: " + e.Message);
        }
        try
        {
            // here we logon and register our nickname and so on
            irc.Login("test", "test");
            // join the channel
            irc.RfcJoin(channel);
            irc.SendMessage(SendType.Message, "test", "!query");
            // here we tell the IRC API to go into receive mode; all events
            // will be triggered by _this_ thread (main thread in this case).
            // Listen() blocks by default; you can also use ListenOnce(), which
            // performs one IRC operation and then returns, so you then need
            // your own loop
            irc.Listen();
            // when Listen() returns our IRC session is over; to be sure we call
            // disconnect manually
            irc.Disconnect();
        }
        catch (Exception e)
        {
            // this should not happen, but just in case we handle it nicely
            System.Console.WriteLine("Error occurred! Message: " + e.Message);
            System.Console.WriteLine("Exception: " + e.StackTrace);
        }
    }
}

IrcBot bot = new IrcBot();
bot.RunCommand();
ViewBag.IRC = bot.Data;
IrcBot bot = new IrcBot();
bot.RunCommand();
ViewBag.IRC = bot.Data;
As you can see, once this has run, the accumulated response is read back from bot.Data.
Thank you for your time to look at this code and read my problem description. If you have any thoughts, or other suggestions, please let me know.
Mike
I was able to successfully disconnect straight away by calling RfcQuit() within OnQueryMessage(), before irc.Disconnect();
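In other words (a sketch of the change; RfcQuit() is part of SmartIrc4net):

    public void OnQueryMessage(object sender, IrcEventArgs e)
    {
        data += e.Data.Message;
        if (e.Data.Message.Contains("]"))
        {
            irc.RfcQuit();    // tell the server we're leaving so it closes the link
            irc.Disconnect(); // now returns promptly instead of after ~40 seconds
        }
    }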
I use RabbitMQ as my message queue server, with the .NET C# client.
When there is an error processing a message from the queue, the message is not acknowledged and, as I understand the documentation, stays stuck in the queue without being processed again.
I don't know if I am missing some configuration or block of code.
My idea now is to manually ack the message on error and manually push it onto the queue again.
I hope there is a better solution.
Thank you so much.
My code:
public void Subscribe(string queueName)
{
    while (!Cancelled)
    {
        try
        {
            if (subscription == null)
            {
                try
                {
                    // try to open a connection
                    connection = connectionFactory.CreateConnection();
                }
                catch (BrokerUnreachableException ex)
                {
                    // You probably want to log the error and cancel after N tries,
                    // otherwise start the loop over to try to connect again after a second or so.
                    log.Error(ex);
                    continue;
                }
                // create channel
                channel = connection.CreateModel();
                // This instructs the channel not to prefetch more than one message
                channel.BasicQos(0, 1, false);
                // Create a new, durable exchange
                channel.ExchangeDeclare(exchangeName, ExchangeType.Direct, true, false, null);
                // Create a new, durable queue
                channel.QueueDeclare(queueName, true, false, false, null);
                // Bind the queue to the exchange
                channel.QueueBind(queueName, exchangeName, queueName);
                // create the subscription
                subscription = new Subscription(channel, queueName, false);
            }
            BasicDeliverEventArgs eventArgs;
            var gotMessage = subscription.Next(250, out eventArgs); // 250 milliseconds
            if (gotMessage)
            {
                if (eventArgs == null)
                {
                    // This means the connection is closed.
                    DisposeAllConnectionObjects();
                    continue; // move on to the next iteration
                }
                // process message
                channel.BasicAck(eventArgs.DeliveryTag, false);
            }
        }
        catch (OperationInterruptedException ex)
        {
            log.Error(ex);
            DisposeAllConnectionObjects();
        }
    }
    DisposeAllConnectionObjects();
}

private void DisposeAllConnectionObjects()
{
    // dispose subscription
    if (subscription != null)
    {
        // IDisposable is implemented explicitly for some reason.
        ((IDisposable)subscription).Dispose();
        subscription = null;
    }
    // dispose channel
    if (channel != null)
    {
        channel.Dispose();
        channel = null;
    }
    // check if connection is not null and dispose it
    if (connection != null)
    {
        try
        {
            connection.Dispose();
        }
        catch (EndOfStreamException ex)
        {
            log.Error(ex);
        }
        catch (OperationInterruptedException ex) // this can be thrown when disposing the connection
        {
            log.Error(ex);
        }
        catch (Exception ex)
        {
            log.Error(ex);
        }
        connection = null;
    }
}
I think you may have misunderstood the RabbitMQ documentation. If a message is not ack'ed by the consumer, the Rabbit broker will requeue it for consumption (once the consumer's channel closes).
I don't believe your suggested method of ack'ing and then requeuing a message is a good idea; it will just make the problem more complex.
If you want to explicitly "reject" a message because the consumer had a problem processing it, you could use the Nack feature of Rabbit.
For example, within your catch exception blocks, you could use:
subscription.Model.BasicNack(eventArgs.DeliveryTag, false, true);
This tells the Rabbit broker to requeue the message: you pass the delivery tag, false to say the nack does not apply to multiple messages, and true to requeue the message.
If you want to reject the message and NOT requeue, just change true to false.
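That is:

    // reject without requeuing; the message is discarded
    // (or dead-lettered, if the queue has a dead-letter exchange configured)
    subscription.Model.BasicNack(eventArgs.DeliveryTag, false, false);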
Additionally, you have created a subscription, so I think you should perform your ack's directly on this, not through the channel.
Change:
channel.BasicAck(eventArgs.DeliveryTag, false);
To:
subscription.Ack();
This way of ack'ing is much cleaner: you keep everything subscription-related on the subscription object, rather than messing around with the channel you've already subscribed through.
I have a very simple client that I want to be available 24/7 to consume messages. It is running in a Windows process.
I have no issues with the server and receiving messages, it is just the client.
The behavior is as follows:
It works when I start the connection fresh. After some time, perhaps hours, my client ends up in an odd state: the connection it contains 'holds' unacked messages.
In other words, using the web admin interface, I see that I have a total of, say, 2 unacked messages. Looking at my connections, I see the 2 unacked messages spread out across them.
But there is no processing going on.
And eventually my connections get killed, with no exceptions or log messages being triggered. This puts all the messages back into the ready state.
My first attempt to solve the problem was to add a simple external loop that checked the state of the instance variables for IModel, IChannel, and QueueingBasicConsumer. However, IModel/IChannel's IsOpen always reports true, even after the web admin reports no connections are active, and QueueingBasicConsumer's IsRunning always reports true as well.
Clearly I need another method to check whether a connection is 'active'.
So to summarize: things work well initially. Eventually I get into an odd state where my diagnostic checks are meaningless, messages sent to the server sit unacked, spread out across any existing connections, and soon the connections are killed with no debug output or exceptions thrown, while my diagnostic checks still report that things are kosher.
Any help or best practices would be appreciated. I have read up on heartbeats, and on the IsOpen 'race' condition where it is suggested to call BasicQos and check for an exception; however, I first want to understand what is happening.
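For reference, the BasicQos probe I mentioned looks roughly like this (a sketch only; ChannelIsAlive is my name for it, and I haven't confirmed it addresses this particular failure):

    using RabbitMQ.Client;
    using RabbitMQ.Client.Exceptions;

    // IsOpen can report stale state, but any synchronous AMQP call will fail
    // promptly if the connection behind the channel is actually dead.
    static bool ChannelIsAlive(IModel channel)
    {
        try
        {
            channel.BasicQos(0, 1, false); // cheap synchronous round-trip to the broker
            return true;
        }
        catch (OperationInterruptedException)
        {
            return false; // channel or connection is gone; time to reconnect
        }
    }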
Here is where I kick things off:
private void StartMessageLoop(string uri, string queueName) {
    this.serverUri = uri;
    this.queueName = queueName;
    Connect(uri);
    Task.Factory.StartNew(() => MessageLoopTask(queueName));
}
Here is how I connect:
private void Connect(string serverAddress) {
    ConnectionFactory cf = new ConnectionFactory();
    cf.Uri = serverAddress;
    this.connection = cf.CreateConnection();
    this.connection.ConnectionShutdown += new ConnectionShutdownEventHandler(LogConnClose);
    this.channel = this.connection.CreateModel();
}
Here is where the infinite loop starts:
private void MessageLoopTask(string queueName) {
    consumer = new QueueingBasicConsumer(channel);
    String consumerTag = channel.BasicConsume(queueName, false, consumer);
    while (true) {
        try {
            BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            IBasicProperties props = e.BasicProperties;
            byte[] body = e.Body;
            string messageContent = Encoding.UTF8.GetString(body);
            bool result = this.messageProcessor.ProcessMessage(messageContent);
            if (result) {
                channel.BasicAck(e.DeliveryTag, false);
            }
            else {
                channel.BasicNack(e.DeliveryTag, false, true);
                // log
            }
        }
        catch (OperationInterruptedException ex) {
            // log
            break;
        }
        catch (Exception e) {
            // log
            break;
        }
    }
    // log
}
Regards,
Dane