Sending messages from one queue to other queues - C#

I'm using the code below to get all the messages from a queue into an array and send them to a set of other queues (also held in an array). What's happening is that every message gets sent twice to every queue, and I can't see why. Can anyone spot anything obvious?
Thanks
public void SendToQs()
{
    Code.Class1 c = new Code.Class1();
    oInQueue = new MessageQueue(sInQueue);
    Message[] msgs = oInQueue.GetAllMessages();
    var queueArray = sOutQueues.Select(s => new MessageQueue(s)).ToArray();
    foreach (Message msg in msgs)
    {
        foreach (MessageQueue s in queueArray)
        {
            c.WriteMessage(s, msg, msg.Label);
        }
    }
    oInQueue.Purge();
}
WriteMessage:
public void WriteMessage(MessageQueue outputQueue, Message msg, string label)
{
    if (!outputQueue.Transactional)
    {
        try
        {
            outputQueue.Send(msg, label);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
    else
    {
        MessageQueueTransaction trans = new MessageQueueTransaction();
        try
        {
            trans.Begin();
            outputQueue.Send(msg, label, trans);
            trans.Commit();
        }
        catch (Exception ex)
        {
            Console.WriteLine("message Q exception" + ex.Message);
            trans.Abort();
        }
    }
}
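As an aside, MessageQueueTransaction implements IDisposable, so the transactional branch could also be written with a using block so the transaction is always disposed; this is only a sketch of the same logic, not a change in behaviour.

// Hypothetical variation of the transactional branch above.
using (var trans = new MessageQueueTransaction())
{
    try
    {
        trans.Begin();
        outputQueue.Send(msg, label, trans);
        trans.Commit();
    }
    catch (Exception ex)
    {
        Console.WriteLine("message Q exception " + ex.Message);
        trans.Abort();
    }
}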

Got it, and it was as daft as I was expecting!
In my void Main() I had originally kicked off the process directly, just to make sure it worked.
I then added a line to start a new thread running the same process, forgetting to take the original call out, so it was running twice.
DOH!
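For anyone who lands on the same thing: below is a minimal sketch of guarding against starting the forwarding worker twice, assuming a Main that spins it up on a thread as described above (the field, method, and delegate names here are made up for illustration).

private static int _started; // 0 = not started, 1 = started (System.Threading)

private static void StartForwarderOnce(ThreadStart forward)
{
    // Interlocked.CompareExchange makes the "already started?" check atomic,
    // so an accidental second call site (as happened above) becomes harmless.
    if (Interlocked.CompareExchange(ref _started, 1, 0) == 0)
    {
        new Thread(forward) { IsBackground = true }.Start();
    }
}

// Usage from Main, e.g.:
// StartForwarderOnce(sender.SendToQs);   // 'sender' being the instance that owns SendToQs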

I have not had time to test this, but you may want to consider the following.
If the queue whose messages are being forwarded also appears in the list of output queues, then when iterating through that list the original queue gets sent its own messages back.
foreach (Message msg in msgs)
{
    foreach (MessageQueue s in queueArray)
    {
        if (s.Id == oInQueue.Id) continue; // Skip if this is the originator

        c.WriteMessage(s, msg, msg.Label);
    }
}
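If the Id comparison doesn't behave as expected for the way your queues are constructed, here is a sketch of the same guard comparing queue paths instead (purely an alternative, not a confirmed fix):

foreach (Message msg in msgs)
{
    foreach (MessageQueue s in queueArray)
    {
        // Skip the originating queue by comparing paths rather than Ids.
        if (string.Equals(s.Path, oInQueue.Path, StringComparison.OrdinalIgnoreCase)) continue;

        c.WriteMessage(s, msg, msg.Label);
    }
}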

Related

NetMQ.FiniteStateMachineException: Rep.XRecv - cannot receive another request

I constantly get NetMQ.FiniteStateMachineException
Sure, my code works at first... the exception does not occur right away, but over the course of a few hours it will probably happen.
Does anyone know what is going on here to cause this exception?
I'm not really sure why, even though I did read the explanation here: c# - ZeroMQ FiniteStateMachineException in REQ/REP pattern - Stack Overflow
I get a bunch of these:
the exception
NetMQ.FiniteStateMachineException: Rep.XRecv - cannot receive another request
at NetMQ.Core.Patterns.Rep.XRecv(Msg& msg)
at NetMQ.Core.SocketBase.TryRecv(Msg& msg, TimeSpan timeout)
at NetMQ.NetMQSocket.TryReceive(Msg& msg, TimeSpan timeout)
at NetMQ.ReceivingSocketExtensions.ReceiveFrameString(IReceivingSocket socket, Encoding encoding, Boolean& more)
at NinjaTrader.NinjaScript.AddOns.anAddOn.ZeroMQ_Server()
my code
// thread start code
if (thread == null) {
    print2("Addon {0}: is starting, listening on port: {1}...", GetType().Name, ZeroPort);
    thread = new Thread(ZeroMQ_Server);
    thread.Start();
}

// zeroMQ code
#region TaskCallBack - NetMQ
// This thread procedure performs the task.
private void ZeroMQ_Server()
{
    bool quit = false;
    string bindAddress = "tcp://*:" + ZeroPort;
    try {
        while (!quit) {
            try {
                using (var repSocket = new ResponseSocket())
                {
                    curRepSocket = repSocket;
                    print2("*** BINDING on {0} ***", bindAddress);
                    repSocket.Bind(bindAddress);
                    while (!quit) {
                        try {
                            Running = true;
                            var msgStr = repSocket.ReceiveFrameString();
                            print2("[►] {2} [REP:{0}] {1}", bindAddress, msgStr, DateTime.Now.ToString("HH:mm:ss.fff"));
                            if (processMsg(msgStr)) {
                                StringBuilder csv = new StringBuilder();
                                // string building stuff here
                                string cs = csv.ToString();
                                print2("[◄] csv: {0}", cs);
                                repSocket.SendFrame(cs);
                            } else {
                                repSocket.SendFrame("Unrecognized Command: " + msgStr);
                                break;
                            }
                        } catch (Exception e) {
                            quit = isThreadAborted(e);
                        }
                    }
                }
            } catch (Exception e) {
                if (e is AddressAlreadyInUseException) {
                    //print2("");
                } else quit = isThreadAborted(e);
            } finally {
                curRepSocket = null;
                Running = false;
            }
        }
    } finally {
        //NetMQConfig.Cleanup();
    }
}

private bool isThreadAborted(Exception e) {
    if (e is ThreadAbortException) {
        print2("\n*** thread aborting... ***");
        return true;
    } else {
        print2(e);
        return false;
    }
}
The response socket is a state machine: you must reply to each request.
From the code, it seems that if processMsg throws you don't send anything back, so you cannot receive again and you get the exception.
It can also happen if the Send fails because the client is gone.
Try using a router socket instead, like so:
while (true)
{
    bool more;
    var msg = routerSocket.ReceiveFrameBytes(out more);

    // Forwarding the routing id.
    routerSocket.SendMoreFrame(msg);

    // Bottom, next frame is the message
    if (msg.Length == 0)
        break;
}
// Write your handling here
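If switching to a RouterSocket isn't convenient, the other way to stay inside the REP state machine is to make sure every ReceiveFrameString is paired with exactly one SendFrame, even when processing fails. A rough sketch of that shape, where BuildCsvReply stands in for the CSV-building code above and the rebind/shutdown handling is omitted:

while (!quit)
{
    var msgStr = repSocket.ReceiveFrameString();

    string reply;
    try
    {
        // processMsg is the same method as above; BuildCsvReply is a stand-in.
        reply = processMsg(msgStr) ? BuildCsvReply() : "Unrecognized Command: " + msgStr;
    }
    catch (Exception ex)
    {
        // Always reply; otherwise the next receive throws FiniteStateMachineException.
        reply = "ERROR: " + ex.Message;
    }

    repSocket.SendFrame(reply);
}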

Queue problems across multiple threads

There are many questions and articles on the subject of using a .NET Queue properly within a multi-threaded application, but I can't find anything on our specific problem.
We have a Windows Service that receives messages onto a queue on one thread; they are then dequeued and processed on another.
We use lock when queuing and dequeuing, and the service had run fine for around 2 years without any problems. One day we noticed that thousands of messages had been logged (and so had been queued) but were never dequeued/processed; they seem to have been skipped somehow, which shouldn't be possible for a queue.
We can't replicate the circumstances that caused it as we have no real idea what caused it considering that day was no different from any of the others as far as we're aware.
The only idea we have is to do with the concurrency of the queue. We're not using the ConcurrentQueue data-type, which we plan on using in the hope it is a remedy.
One idea, looking at the source of the Queue type, is that it uses arrays internally, which have to be resized once these buffers have reached a certain length. We hypothesised that when this is being done some of the messages were lost.
Another idea from our development manager is that using multiple threads on a multicore processor means that even though locks are used, the individual cores work on the data in their local registers, which can cause them to be working on different data. He says they don't work on the same memory, and seems to think lock only works as expected on a single-core processor running multiple threads.
Reading more about ConcurrentQueue's use of volatile I'm not sure that this would help, as I've read that using lock provides a stronger guarantee of threads using the most up-to-date state of memory.
I don't have much knowledge on this specific subject, so my question is whether the manager's idea sounds plausible, and whether we might have missed something that's required for the queue to be used properly.
Code snippet for reference (forgive the messy code, it does need refactoring):
public sealed class Message
{
    public void QueueMessage(long messageId, Message msg)
    {
        lock (_queueLock)
        {
            _queue.Enqueue(new QueuedMessage() { Id = messageId, Message = msg });
        }
    }

    public static void QueueMessage(string queueProcessorName, long messageId, Message msg)
    {
        lock (_messageProcessors[queueProcessorName]._queueLock)
        {
            _messageProcessors[queueProcessorName].QueueMessage(messageId, msg);
            _messageProcessors[queueProcessorName].WakeUp(); // Ensure the thread is awake
        }
    }

    public void WakeUp()
    {
        lock (_monitor)
        {
            Monitor.Pulse(_monitor);
        }
    }

    public void Process()
    {
        while (!_stop)
        {
            QueuedMessage currentMessage = null;
            try
            {
                lock (_queueLock)
                {
                    currentMessage = _queue.Dequeue();
                }
            }
            catch (InvalidOperationException i)
            {
                // Nothing in the queue
            }

            while (currentMessage != null)
            {
                IContext context = new Context();
                DAL.Message msg = null;
                try
                {
                    msg = context.Messages.SingleOrDefault(x => x.Id == currentMessage.Id);
                }
                catch (Exception e)
                {
                    // TODO: Handle these exceptions better. Possible infinite loop.
                    continue; // Keep retrying until it works
                }

                if (msg == null) {
                    // TODO: Log missing message
                    continue;
                }

                try
                {
                    msg.Status = DAL.Message.ProcessingState.Processing;
                    context.Commit();
                }
                catch (Exception e)
                {
                    // TODO: Handle these exceptions better. Possible infinite loop.
                    continue; // Keep retrying until it works
                }

                bool result = false;
                try {
                    Transformation.TransformManager mgr = Transformation.TransformManager.Instance();
                    Transformation.ITransform transform = mgr.GetTransform(currentMessage.Message.Type.Name, currentMessage.Message.Get("EVN:EventReasonCode"));
                    if (transform != null){
                        msg.BeginProcessing = DateTime.Now;
                        result = transform.Transform(currentMessage.Message);
                        msg.EndProcessing = DateTime.Now;
                        msg.Status = DAL.Message.ProcessingState.Complete;
                    }
                    else {
                        msg.Status = DAL.Message.ProcessingState.Failed;
                    }
                    context.Commit();
                }
                catch (Exception e)
                {
                    try
                    {
                        context = new Context();
                        // TODO: Handle these exceptions better
                        Error err = context.Errors.Add(context.Errors.Create());
                        err.MessageId = currentMessage.Id;
                        if (currentMessage.Message != null)
                        {
                            err.EventReasonCode = currentMessage.Message.Get("EVN:EventReasonCode");
                            err.MessageType = currentMessage.Message.Type.Name;
                        }
                        else {
                            err.EventReasonCode = "Unknown";
                            err.MessageType = "Unknown";
                        }
                        StringBuilder sb = new StringBuilder("Exception occured\n");
                        int level = 0;
                        while (e != null && level < 10)
                        {
                            sb.Append("Message: ");
                            sb.Append(e.Message);
                            sb.Append("\nStack Trace: ");
                            sb.Append(e.StackTrace);
                            sb.Append("\n");
                            e = e.InnerException;
                            level++;
                        }
                        err.Text = sb.ToString();
                    }
                    catch (Exception ne) {
                        StringBuilder sb = new StringBuilder("Exception occured\n");
                        int level = 0;
                        while (ne != null && level < 10)
                        {
                            sb.Append("Message: ");
                            sb.Append(ne.Message);
                            sb.Append("\nStack Trace: ");
                            sb.Append(ne.StackTrace);
                            sb.Append("\n");
                            ne = ne.InnerException;
                            level++;
                        }
                        EventLog.WriteEntry("Service", sb.ToString(), EventLogEntryType.Error);
                    }
                }

                try
                {
                    context.Commit();
                    lock (_queueLock)
                    {
                        currentMessage = _queue.Dequeue();
                    }
                }
                catch (InvalidOperationException e)
                {
                    currentMessage = null; // No more messages in the queue
                }
                catch (Exception ne)
                {
                    StringBuilder sb = new StringBuilder("Exception occured\n");
                    int level = 0;
                    while (ne != null && level < 10)
                    {
                        sb.Append("Message: ");
                        sb.Append(ne.Message);
                        sb.Append("\nStack Trace: ");
                        sb.Append(ne.StackTrace);
                        sb.Append("\n");
                        ne = ne.InnerException;
                        level++;
                    }
                    EventLog.WriteEntry("Service", sb.ToString(), EventLogEntryType.Error);
                }
            }

            lock (_monitor)
            {
                if (_stop) break;
                Monitor.Wait(_monitor, TimeSpan.FromMinutes(_pollingInterval));
                if (_stop) break;
            }
        }
    }

    private object _monitor = new object();
    private int _pollingInterval = 10;
    private volatile bool _stop = false;
    private object _queueLock = new object();
    private Queue<QueuedMessage> _queue = new Queue<QueuedMessage>();
    private static IDictionary<string, Message> _messageProcessors = new Dictionary<string, Message>();
}
so my question is whether the manager's idea sounds plausible
Uhm. No. If all those synchronization measures only worked on single-core machines, the world would have descended into complete chaos decades ago.
and whether we might have missed something that's required for the queue to be used properly.
As far as your description goes, you should be fine. I would look at how you found out that you have the problem: logs coming in but then vanishing without being properly dequeued; wouldn't that also be what you'd see if the service was simply stopped or the machine rebooted? Are you sure you lost them while your application was actually running?
You declare the object to be used for the lock as a private instance field.
If you try this:
class Program
{
    static void Main(string[] args)
    {
        Test test1 = new Test();
        Task Scan1 = Task.Run(() => test1.Run("1"));
        Test test2 = new Test();
        Task Scan2 = Task.Run(() => test2.Run("2"));
        while (!Scan1.IsCompleted || !Scan2.IsCompleted)
        {
            Thread.Sleep(1000);
        }
    }
}

public class Test
{
    private object _queueLock = new object();

    public async Task Run(string val)
    {
        lock (_queueLock)
        {
            Console.WriteLine($"{val} locked");
            Thread.Sleep(10000);
            Console.WriteLine($"{val} unlocked");
        }
    }
}
You will notice that the code inside the lock executes on both instances at the same time, even while the other thread is inside its own lock.
But if you change
private object _queueLock = new object();
To
private static object _queueLock = new object();
It changes how your lock works.
Whether this is your issue depends on whether you have multiple instances of that class, or everything runs within the same instance.
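Not part of the original answers, but since the question is really about what a safe cross-thread queue looks like, here is a minimal sketch using BlockingCollection<T> backed by a ConcurrentQueue<T>, which replaces the hand-rolled lock/Monitor.Pulse plumbing. QueuedMessage and ProcessOne are stand-ins for the types and transform logic in the question.

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class QueuedMessage   // minimal stand-in for the type in the question
{
    public long Id { get; set; }
    public object Message { get; set; }
}

public sealed class MessageProcessor
{
    private readonly BlockingCollection<QueuedMessage> _queue =
        new BlockingCollection<QueuedMessage>(new ConcurrentQueue<QueuedMessage>());

    // Producer side: called from the receiving thread.
    public void QueueMessage(QueuedMessage msg)
    {
        _queue.Add(msg);
    }

    // Consumer side: run once on a dedicated thread or long-running task.
    public void Process(CancellationToken cancel)
    {
        // Blocks until an item arrives, CompleteAdding is called, or cancel fires.
        foreach (QueuedMessage msg in _queue.GetConsumingEnumerable(cancel))
        {
            ProcessOne(msg); // stand-in for the transform/commit logic in the question
        }
    }

    public void Stop()
    {
        _queue.CompleteAdding();
    }

    private void ProcessOne(QueuedMessage msg)
    {
        // ...
    }
}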

Xamarin.Forms (Android) Bluetooth intermittently working

Scenario:
I am building an Android app using Xamarin.Forms that will be deployed to a group of devices. All but one of the devices will be doing some data collection, and the remaining device will be the "hub" to aggregate all of the data and do some reporting. I am using Bluetooth for the device-to-device communication. The 'hub', labelled the master, acts as the client, and all of the collectors act as the server. I have a prototype working with a single server and client...almost.
Occasionally the client/master will be unable to read from the server/collector. I am struggling to find the reason for why this is and would appreciate any help.
Symptoms:
The client's call to .Read() from the InputStream will occasionally block indefinitely, even though the server has written to the output stream. I've added a timeout to this call to prevent the app from getting stuck entirely.
This happens intermittently, but I've found some pattern to when it works and when it doesn't
It seems to be related to the 'server' app, and not the client. The client can remain open, running, and initiate the request to connect to the server as often as needed.
It always works the first time the 'server' app is launched and connected to. It usually works the second time. By the third connection, .Read() will consistently block/timeout. Closing and reopening the app on the server "cleans the slate", so to speak, and it will work again.
Once it starts failing, it seems to be 'stuck' in a failed state.
Removing the app from the foreground (but not closing/killing it) seems to correct the faulted state, and the connection/read will happen successfully as long as the app/UI remains in the background. Once restored to the foreground, it starts failing again.
Code:
All of the bluetooth handling is done by a single class/service that I'm injecting using Xamarin.Forms DependencyService. All of the devices will, on startup (via the constructor of this class), loop indefinitely on a background thread, waiting for connections and repeating. Much of this bluetooth code is based on the Bluetooth Chat example, as well as some other online resources I've found (some android native/java, some Xamarin/C#)
The master will, on demand (triggered by press of a button in the UI), attempt to connect to any collectors (via bonded bluetooth devices) and read data from them. There is also a simple UI component which essentially serves as a console log.
Here is the service class in its entirety.
public class GameDataSyncService : IGameDataSyncService
{
    private const string UUID = "8e99f5f1-4a07-4268-9686-3a288326e0a2";
    private static Task acceptLoopTask;
    private static Task syncDataTask;
    private static readonly object locker = new object();
    private static bool running = false;

    public event EventHandler<DataSyncMessage> MessageBroadcast;

    public GameDataSyncService()
    {
        // Every device will listen and accept incoming connections. The master will make the connections.
        lock (locker)
        {
            if (acceptLoopTask == null)
            {
                acceptLoopTask = Task.Factory.StartNew(AcceptLoopWorker, TaskCreationOptions.LongRunning);
            }
        }
    }

    public void SyncData()
    {
        lock (locker)
        {
            if (running)
            {
                BroadcastMessage("Previous data sync is still running.", DataSyncMessageType.Warning);
                return;
            }
            else
            {
                running = true;
                syncDataTask = Task.Factory.StartNew(SyncDataWorker);
            }
        }
    }

    private void BroadcastMessage(string message, DataSyncMessageType type = DataSyncMessageType.Info)
    {
        MessageBroadcast?.Invoke(this, new DataSyncMessage { Text = message, Type = type });
    }

    private async Task AcceptLoopWorker()
    {
        int count = 0;
        while (true)
        {
            BluetoothServerSocket serverSocket = null;
            BluetoothSocket clientSocket = null;
            try
            {
                BroadcastMessage($"Listening for incoming connection...", DataSyncMessageType.Debug);
                serverSocket = BluetoothAdapter.DefaultAdapter.ListenUsingRfcommWithServiceRecord(nameof(GameDataSyncService), Java.Util.UUID.FromString(UUID));
                clientSocket = serverSocket.Accept(); // This call blocks until a connection is established.
                BroadcastMessage($"Connection received from {clientSocket.RemoteDevice.Name}. Sending data...", DataSyncMessageType.Info);
                var bytes = Encoding.UTF8.GetBytes($"Hello World - {string.Join(" ", Enumerable.Repeat(Guid.NewGuid(), ++count))}");
                await clientSocket.OutputStream.WriteAsync(bytes, 0, bytes.Length);
                clientSocket.OutputStream.Flush();
                // Give the master some time to close the connection from their end
                await Task.Delay(1000 * 3);
            }
            catch (Exception ex)
            {
                BroadcastMessage($"{ex.GetType().FullName}: {ex.Message}", DataSyncMessageType.Debug);
            }
            finally
            {
                try { clientSocket?.InputStream?.Close(); } catch { }
                try { clientSocket?.InputStream?.Dispose(); } catch { }
                try { clientSocket?.OutputStream?.Close(); } catch { }
                try { clientSocket?.OutputStream?.Dispose(); } catch { }
                try { clientSocket?.Close(); } catch { }
                try { clientSocket?.Dispose(); } catch { }
                try { serverSocket?.Close(); } catch { }
                try { serverSocket?.Dispose(); } catch { }
                BroadcastMessage($"Connection closed.", DataSyncMessageType.Debug);
            }
        }
    }

    private async Task SyncDataWorker()
    {
        BroadcastMessage($"Beginning data sync...");
        foreach (var bondedDevice in BluetoothAdapter.DefaultAdapter.BondedDevices.OrderBy(d => d.Name))
        {
            BluetoothSocket clientSocket = null;
            try
            {
                clientSocket = bondedDevice.CreateRfcommSocketToServiceRecord(Java.Util.UUID.FromString(UUID));
                BroadcastMessage($"Connecting to {bondedDevice.Name}...");
                try
                {
                    clientSocket.Connect();
                }
                catch
                {
                    BroadcastMessage($"Connection to {bondedDevice.Name} failed.", DataSyncMessageType.Error);
                }
                while (clientSocket.IsConnected)
                {
                    byte[] buffer = new byte[1024];
                    var readTask = clientSocket.InputStream.ReadAsync(buffer, 0, buffer.Length);
                    if (await Task.WhenAny(readTask, Task.Delay(1000)) != readTask)
                    {
                        BroadcastMessage($"Read timeout...", DataSyncMessageType.Error);
                        break;
                    }
                    int bytes = readTask.Result;
                    BroadcastMessage($"Read {bytes} bytes.", DataSyncMessageType.Success);
                    if (bytes > 0)
                    {
                        var text = Encoding.UTF8.GetString(buffer.Take(bytes).ToArray());
                        BroadcastMessage(text, DataSyncMessageType.Success);
                        break;
                    }
                }
            }
            catch (Exception ex)
            {
                BroadcastMessage($"{ex.GetType().FullName}: {ex.Message}", DataSyncMessageType.Debug);
            }
            finally
            {
                try { clientSocket?.InputStream?.Close(); } catch { }
                try { clientSocket?.InputStream?.Dispose(); } catch { }
                try { clientSocket?.OutputStream?.Close(); } catch { }
                try { clientSocket?.OutputStream?.Dispose(); } catch { }
                try { clientSocket?.Close(); } catch { }
                try { clientSocket?.Dispose(); } catch { }
            }
        }
        await Task.Delay(1000 * 3);
        BroadcastMessage($"Data sync complete!");
        lock (locker)
        {
            running = false;
        }
    }
}
What I've tried (nothing below has had any effect):
Most of these came from 'solutions' in other Stack Overflow posts.
Adding arbitrary delays into the mix
Making sure to explicitly close/dispose everything, in order, including the streams
Tried replacing the socket handling with their 'Insecure' counterparts.
Adjusting my read timeout to something arbitrarily long, in case a second wasn't enough.
Disabling/Re-enabling bluetooth on the server/collector before .Accept() ing a new connection (resorted to trying random stuff by this point)
Video:
I took a video of this happening.
The tablet in the back is the collector/server; the tablet in the foreground is the master/client. When the video starts, the client is displaying some previous attempts, and the server app is in the background (but running). I demonstrate that the .Read() works when the collector/server app is in the background, but not when it's in the foreground. Each request to begin a data sync has a corresponding entry in the "console" (or a warning if I pressed it too soon).
https://youtu.be/NGuGa7upCU4
Summary:
To the best of my knowledge, my code is correct. I have no idea what else to change/fix to get this working more reliably. The actual connection seems to be successful (based on logs from the server/collector, unfortunately not shown in the video), but the issue lies somewhere in the .Write (or .Read). Any help, suggestions, or insight would be awesome.
Try the following; I changed everything over to using blocks:
private async Task AcceptLoopWorker()
{
    int count = 0;
    while (true)
    {
        try
        {
            BroadcastMessage("Listening for incoming connection...", DataSyncMessageType.Debug);
            using (var serverSocket = BluetoothAdapter.DefaultAdapter.ListenUsingRfcommWithServiceRecord(nameof(GameDataSyncService), Java.Util.UUID.FromString(UUID)))
            using (var clientSocket = serverSocket.Accept()) // This call blocks until a connection is established.
            {
                BroadcastMessage(string.Format("Connection received from {0}. Sending data...", clientSocket.RemoteDevice.Name), DataSyncMessageType.Info);
                var bytes = System.Text.Encoding.UTF8.GetBytes(string.Format("Hello World - {0}", string.Join(" ", Enumerable.Repeat(Guid.NewGuid(), ++count))));
                await clientSocket.OutputStream.WriteAsync(bytes, 0, bytes.Length);
            }
            await Task.Delay(1000 * 3); // Give the master some time to close the connection from their end
        }
        catch (Java.IO.IOException ex)
        {
            BroadcastMessage(string.Format("IOException {0}: {1}", ex.GetType().FullName, ex.Message), DataSyncMessageType.Debug);
        }
        catch (Java.Lang.Exception ex)
        {
            BroadcastMessage(string.Format("Exception {0}: {1}", ex.GetType().FullName, ex.Message), DataSyncMessageType.Debug);
        }
    }
}

private async Task SyncDataWorker()
{
    BroadcastMessage("Beginning data sync...");
    foreach (var bondedDevice in BluetoothAdapter.DefaultAdapter.BondedDevices.OrderBy(d => d.Name))
    {
        try
        {
            using (var clientSocket = bondedDevice.CreateRfcommSocketToServiceRecord(Java.Util.UUID.FromString(UUID)))
            {
                BroadcastMessage(string.Format("Connecting to {0}...", bondedDevice.Name));
                if (!clientSocket.IsConnected)
                {
                    clientSocket.Connect();
                }
                if (clientSocket.IsConnected)
                {
                    byte[] buffer = new byte[1024];
                    var readTask = clientSocket.InputStream.ReadAsync(buffer, 0, buffer.Length);
                    if (await Task.WhenAny(readTask, Task.Delay(1000)) != readTask)
                    {
                        BroadcastMessage("Read timeout...", DataSyncMessageType.Error);
                        break;
                    }
                    int bytes = readTask.Result;
                    BroadcastMessage(string.Format("Read {0} bytes.", bytes), DataSyncMessageType.Success);
                    if (bytes > 0)
                    {
                        var text = System.Text.Encoding.UTF8.GetString(buffer.Take(bytes).ToArray());
                        BroadcastMessage(text, DataSyncMessageType.Success);
                        break;
                    }
                }
                else
                {
                    BroadcastMessage("Not Connected...", DataSyncMessageType.Error);
                }
            }
        }
        catch (Java.IO.IOException ex)
        {
            BroadcastMessage(string.Format("IOException {0}: {1}", ex.GetType().FullName, ex.Message), DataSyncMessageType.Debug);
        }
        catch (Java.Lang.Exception ex)
        {
            BroadcastMessage(string.Format("Exception {0}: {1}", ex.GetType().FullName, ex.Message), DataSyncMessageType.Debug);
        }
    }
    await Task.Delay(1000 * 3);
    BroadcastMessage("Data sync complete!");
    lock (locker)
    {
        running = false;
    }
}
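One further variation that may be worth trying, given that failures only start after a few connections: register the RFCOMM service record once and keep calling Accept() on the same BluetoothServerSocket, instead of re-listening on every iteration. This is only a sketch of that idea against the code above, not a confirmed fix.

private async Task AcceptLoopWorker()
{
    int count = 0;

    // Register the service record once; Accept() can be called repeatedly on it.
    using (var serverSocket = BluetoothAdapter.DefaultAdapter.ListenUsingRfcommWithServiceRecord(
        nameof(GameDataSyncService), Java.Util.UUID.FromString(UUID)))
    {
        while (true)
        {
            try
            {
                BroadcastMessage("Listening for incoming connection...", DataSyncMessageType.Debug);
                using (var clientSocket = serverSocket.Accept()) // blocks until a client connects
                {
                    BroadcastMessage(string.Format("Connection received from {0}. Sending data...", clientSocket.RemoteDevice.Name));
                    var bytes = System.Text.Encoding.UTF8.GetBytes(string.Format("Hello World - {0}", string.Join(" ", Enumerable.Repeat(Guid.NewGuid(), ++count))));
                    await clientSocket.OutputStream.WriteAsync(bytes, 0, bytes.Length);
                }
                await Task.Delay(1000 * 3); // give the master time to close its end first
            }
            catch (System.Exception ex)
            {
                BroadcastMessage(string.Format("{0}: {1}", ex.GetType().FullName, ex.Message), DataSyncMessageType.Debug);
            }
        }
    }
}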

Redis failover with StackExchange / Sentinel from C#

We're currently using Redis 2.8.4 and StackExchange.Redis (and loving it) but don't have any sort of protection against hardware failures etc at the moment. I'm trying to get the solution working whereby we have master/slaves and sentinel monitoring but can't quite get there and I'm unable to find any real pointers after searching.
So currently we have got this far:
We have 3 redis servers and sentinel on each node (setup by the Linux guys):
devredis01:6383 (master)
devredis02:6383 (slave)
devredis03:6383 (slave)
devredis01:26379 (sentinel)
devredis02:26379 (sentinel)
devredis03:26379 (sentinel)
I am able to connect the StackExchange client to the redis servers and write/read and verify that the data is being replicated across all redis instances using Redis Desktop Manager.
I can also connect to the sentinel services using a different ConnectionMultiplexer, query the config, ask for master redis node, ask for slaves etc.
We can also kill the master redis node and verify that one of the slaves is promoted to master and replication to the other slave continues to work. We can observe the redis connection trying to reconnect to the master, and also if I recreate the ConnectionMultiplexer I can write/read again to the newly promoted master and read from the slave.
So far so good!
The bit I'm missing is how do you bring it all together in a production system?
Should I be getting the redis endpoints from sentinel and using 2 ConnectionMultiplexers?
What exactly do I have to do to detect that a node has gone down?
Can StackExchange do this for me automatically or does it pass an event so I can reconnect my redis ConnectionMultiplexer?
Should I handle the ConnectionFailed event and then reconnect in order for the ConnectionMultiplexer to find out what the new master is?
Presumably while I am reconnecting any attempts to write will be lost?
I hope I'm not missing something very obvious here I'm just struggling to put it all together.
Thanks in advance!
I was able to spend some time last week with the Linux guys testing scenarios and working on the C# side of this implementation and am using the following approach:
Read the sentinel addresses from config and create a ConnectionMultiplexer to connect to them
Subscribe to the +switch-master channel
Ask each sentinel server in turn what they think the master redis and slaves are, compare them all to make sure they all agree
Create a new ConnectionMultiplexer with the redis server addresses read from sentinel and connect, add event handler to ConnectionFailed and ConnectionRestored.
When I receive the +switch-master message I call Configure() on the redis ConnectionMultiplexer
As a belt and braces approach I always call Configure() on the redis ConnectionMultiplexer 12 seconds after receiving a connectionFailed or connectionRestored event when the connection type is ConnectionType.Interactive.
I find that generally I am working and reconfigured after about 5 seconds of losing the redis master. During this time I can't write but I can read (since you can read off a slave). 5 seconds is ok for us since our data updates very quickly and becomes stale after a few seconds (and is subsequently overwritten).
One thing I wasn't sure about was whether or not I should remove the redis server from the redis ConnectionMultiplexer when an instance goes down, or let it continue to retry the connection. I decided to leave it retrying as it comes back into the mix as a slave as soon as it comes back up. I did some performance testing with and without a connection being retried and it seemed to make little difference. Maybe someone can clarify whether this is the correct approach.
Every now and then bringing back an instance that was previously a master did seem to cause some confusion - a few seconds after it came back up I would receive an exception from writing - "READONLY" suggesting I can't write to a slave. This was rare but I found that my "catch-all" approach of calling Configure() 12 seconds after a connection state change caught this problem. Calling Configure() seems very cheap and therefore calling it twice regardless of whether or not it's necessary seemed OK.
Now that I have slaves I have offloaded some of my data cleanup code which does key scans to the slaves, which makes me happy.
All in all I'm pretty satisfied, it's not perfect but for something that should very rarely happen it's more than good enough.
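For reference, a stripped-down sketch of the shape described above (a sentinel connection with the sentinel command map, a +switch-master subscription, and Configure() on the redis multiplexer). The "mymaster" service name is a placeholder, the endpoints are the ones from the question, and the error handling and reconnect timers from the full implementation below are omitted.

// Sentinel connection: sentinel command map, no tie-breaker.
var sentinelConfig = new ConfigurationOptions
{
    CommandMap = CommandMap.Sentinel,
    TieBreaker = string.Empty,
    ServiceName = "mymaster", // placeholder sentinel master name
};
sentinelConfig.EndPoints.Add("devredis01", 26379);
sentinelConfig.EndPoints.Add("devredis02", 26379);
sentinelConfig.EndPoints.Add("devredis03", 26379);
ConnectionMultiplexer sentinel = ConnectionMultiplexer.Connect(sentinelConfig);

// Redis connection to the master/slaves discovered via sentinel (hard-coded here for brevity).
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("devredis01:6383,devredis02:6383,devredis03:6383,allowAdmin=true");

// When sentinel announces a failover, ask the redis multiplexer to re-examine the topology.
sentinel.GetSubscriber().Subscribe("+switch-master", (channel, message) =>
{
    redis.Configure();
});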
I am including our Redis wrapper, it has changed somewhat from the original answer, for various reasons:
We wanted to use pub/sub
Sentinel didn't always appear to give us the master-changed message at the 'right' time (i.e. we called Configure() and ended up thinking a slave was a master)
The ConnectionMultiplexer didn't always seem to restore connections every time, affecting pub/sub
I rather suspect this is down to our sentinel/redis configuration more than anything else. Either way, it just wasn't perfectly reliable despite destructive testing. Added to which, the master changed message took a long time since we had to increase timeouts due to sentinel being "too sensitive" and calling failovers when there weren't any. I think running in a virtual environment also exacerbated the problem.
Instead of listening to subscriptions, we now simply attempt a test write every 5 seconds, and also have a "last message received" check for pub/sub. If we encounter any problems we completely tear down the connections and rebuild them. It seems like overkill, but it's actually pretty fast, and still faster than waiting for the master-changed message from sentinel...
This won't compile without various extension methods and other classes etc, but you get the idea.
namespace Smartodds.Framework.Redis
{
public class RedisClient : IDisposable
{
public RedisClient(RedisEnvironmentElement environment, Int32 databaseId)
{
m_ConnectTimeout = environment.ConnectTimeout;
m_Timeout = environment.Timeout;
m_DatabaseId = databaseId;
m_ReconnectTime = environment.ReconnectTime;
m_CheckSubscriptionsTime = environment.CheckSubscriptions;
if (environment.TestWrite == true)
{
m_CheckWriteTime = environment.TestWriteTime;
}
environment.Password.ToCharArray().ForEach((c) => m_Password.AppendChar(c));
foreach (var server in environment.Servers)
{
if (server.Type == ServerType.Redis)
{
// will be ignored if sentinel servers are used
m_RedisServers.Add(new RedisConnection { Address = server.Host, Port = server.Port });
}
else
{
m_SentinelServers.Add(new RedisConnection { Address = server.Host, Port = server.Port });
}
}
}
public bool IsSentinel { get { return m_SentinelServers.Count > 0; } }
public IDatabase Database { get { return _Redis.GetDatabase(m_DatabaseId); } }
private ConnectionMultiplexer _Redis
{
get
{
if (m_Connecting == true)
{
throw new RedisConnectionNotReadyException();
}
ConnectionMultiplexer redis = m_Redis;
if (redis == null)
{
throw new RedisConnectionNotReadyException();
}
return redis;
}
}
private ConnectionMultiplexer _Sentinel
{
get
{
if (m_Connecting == true)
{
throw new RedisConnectionNotReadyException("Sentinel connection not ready");
}
ConnectionMultiplexer sentinel = m_Sentinel;
if (sentinel == null)
{
throw new RedisConnectionNotReadyException("Sentinel connection not ready");
}
return sentinel;
}
}
public void RegisterSubscription(string channel, Action<RedisChannel, RedisValue> handler, Int32 maxNoReceiveSeconds)
{
m_Subscriptions.Add(channel, new RedisSubscription
{
Channel = channel,
Handler = handler,
MaxNoReceiveSeconds = maxNoReceiveSeconds,
LastUsed = DateTime.UtcNow,
});
}
public void Connect()
{
_Connect(true);
}
private void _Connect(object state)
{
bool throwException = (bool)state;
// if a reconnect is already being attempted, don't hang around waiting
if (Monitor.TryEnter(m_ConnectionLocker) == false)
{
return;
}
// we took the lock, notify everything we are connecting
m_Connecting = true;
try
{
Stopwatch sw = Stopwatch.StartNew();
LoggerQueue.Debug(">>>>>> REDIS CONNECTING... >>>>>>");
// if this is a reconnect, make absolutely sure everything is cleaned up first
_KillTimers();
_KillRedisClient();
if (this.IsSentinel == true && m_Sentinel == null)
{
LoggerQueue.Debug(">>>>>> CONNECTING TO SENTINEL >>>>>> - " + sw.Elapsed);
// we'll be getting the redis servers from sentinel
ConfigurationOptions sentinelConnection = _CreateRedisConfiguration(CommandMap.Sentinel, null, m_SentinelServers);
m_Sentinel = ConnectionMultiplexer.Connect(sentinelConnection);
LoggerQueue.Debug(">>>>>> CONNECTED TO SENTINEL >>>>>> - " + sw.Elapsed);
_OutputConfigurationFromSentinel();
// get all the redis servers from sentinel and ignore any set by caller
m_RedisServers.Clear();
m_RedisServers.AddRange(_GetAllRedisServersFromSentinel());
if (m_RedisServers.Count == 0)
{
throw new RedisException("Sentinel found no redis servers");
}
}
LoggerQueue.Debug(">>>>>> CONNECTING TO REDIS >>>>>> - " + sw.Elapsed);
// try to connect to all redis servers
ConfigurationOptions connection = _CreateRedisConfiguration(CommandMap.Default, _SecureStringToString(m_Password), m_RedisServers);
m_Redis = ConnectionMultiplexer.Connect(connection);
LoggerQueue.Debug(">>>>>> CONNECTED TO REDIS >>>>>> - " + sw.Elapsed);
// register subscription channels
m_Subscriptions.ForEach(s =>
{
m_Redis.GetSubscriber().Subscribe(s.Key, (channel, value) => _SubscriptionHandler(channel, value));
s.Value.LastUsed = DateTime.UtcNow;
});
if (this.IsSentinel == true)
{
// check subscriptions have been sending messages
if (m_Subscriptions.Count > 0)
{
m_CheckSubscriptionsTimer = new Timer(_CheckSubscriptions, null, 30000, m_CheckSubscriptionsTime);
}
if (m_CheckWriteTime != null)
{
// check that we can write to redis
m_CheckWriteTimer = new Timer(_CheckWrite, null, 32000, m_CheckWriteTime.Value);
}
// monitor for connection status change to any redis servers
m_Redis.ConnectionFailed += _ConnectionFailure;
m_Redis.ConnectionRestored += _ConnectionRestored;
}
LoggerQueue.Debug(string.Format(">>>>>> ALL REDIS CONNECTED ({0}) >>>>>>", sw.Elapsed));
}
catch (Exception ex)
{
LoggerQueue.Error(">>>>>> REDIS CONNECT FAILURE >>>>>>", ex);
if (throwException == true)
{
throw;
}
else
{
// internal reconnect, the reconnect has failed so might as well clean everything and try again
_KillTimers();
_KillRedisClient();
// faster than usual reconnect if failure
_ReconnectTimer(1000);
}
}
finally
{
// finished connection attempt, notify everything and remove lock
m_Connecting = false;
Monitor.Exit(m_ConnectionLocker);
}
}
private ConfigurationOptions _CreateRedisConfiguration(CommandMap commandMap, string password, List<RedisConnection> connections)
{
ConfigurationOptions connection = new ConfigurationOptions
{
CommandMap = commandMap,
AbortOnConnectFail = true,
AllowAdmin = true,
ConnectTimeout = m_ConnectTimeout,
SyncTimeout = m_Timeout,
ServiceName = "master",
TieBreaker = string.Empty,
Password = password,
};
connections.ForEach(s =>
{
connection.EndPoints.Add(s.Address, s.Port);
});
return connection;
}
private void _OutputConfigurationFromSentinel()
{
m_SentinelServers.ForEach(s =>
{
try
{
IServer server = m_Sentinel.GetServer(s.Address, s.Port);
if (server.IsConnected == true)
{
try
{
IPEndPoint master = server.SentinelGetMasterAddressByName("master") as IPEndPoint;
var slaves = server.SentinelSlaves("master");
StringBuilder sb = new StringBuilder();
sb.Append(">>>>>> _OutputConfigurationFromSentinel Server ");
sb.Append(s.Address);
sb.Append(" thinks that master is ");
sb.Append(master);
sb.Append(" and slaves are ");
foreach (var slave in slaves)
{
string name = slave.Where(i => i.Key == "name").Single().Value;
bool up = slave.Where(i => i.Key == "flags").Single().Value.Contains("disconnected") == false;
sb.Append(name);
sb.Append("(");
sb.Append(up == true ? "connected" : "down");
sb.Append(") ");
}
sb.Append(">>>>>>");
LoggerQueue.Debug(sb.ToString());
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> _OutputConfigurationFromSentinel Could not get configuration from sentinel server ({0}) >>>>>>", s.Address), ex);
}
}
else
{
LoggerQueue.Error(string.Format(">>>>>> _OutputConfigurationFromSentinel Sentinel server {0} was not connected", s.Address));
}
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> _OutputConfigurationFromSentinel Could not get IServer from sentinel ({0}) >>>>>>", s.Address), ex);
}
});
}
private RedisConnection[] _GetAllRedisServersFromSentinel()
{
// ask each sentinel server for its configuration
List<RedisConnection> redisServers = new List<RedisConnection>();
m_SentinelServers.ForEach(s =>
{
try
{
IServer server = m_Sentinel.GetServer(s.Address, s.Port);
if (server.IsConnected == true)
{
try
{
// store master in list
IPEndPoint master = server.SentinelGetMasterAddressByName("master") as IPEndPoint;
redisServers.Add(new RedisConnection { Address = master.Address.ToString(), Port = master.Port });
var slaves = server.SentinelSlaves("master");
foreach (var slave in slaves)
{
string address = slave.Where(i => i.Key == "ip").Single().Value;
string port = slave.Where(i => i.Key == "port").Single().Value;
redisServers.Add(new RedisConnection { Address = address, Port = Convert.ToInt32(port) });
}
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> _GetAllRedisServersFromSentinel Could not get redis servers from sentinel server ({0}) >>>>>>", s.Address), ex);
}
}
else
{
LoggerQueue.Error(string.Format(">>>>>> _GetAllRedisServersFromSentinel Sentinel server {0} was not connected", s.Address));
}
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> _GetAllRedisServersFromSentinel Could not get IServer from sentinel ({0}) >>>>>>", s.Address), ex);
}
});
return redisServers.Distinct().ToArray();
}
private IServer _GetRedisMasterFromSentinel()
{
// ask each sentinel server for its configuration
foreach (RedisConnection sentinel in m_SentinelServers)
{
IServer sentinelServer = _Sentinel.GetServer(sentinel.Address, sentinel.Port);
if (sentinelServer.IsConnected == true)
{
try
{
IPEndPoint master = sentinelServer.SentinelGetMasterAddressByName("master") as IPEndPoint;
return _Redis.GetServer(master);
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> Could not get redis master from sentinel server ({0}) >>>>>>", sentinel.Address), ex);
}
}
}
throw new InvalidOperationException("No sentinel server available to get master");
}
private void _ReconnectTimer(Nullable<Int32> reconnectMilliseconds)
{
try
{
lock (m_ReconnectLocker)
{
if (m_ReconnectTimer != null)
{
m_ReconnectTimer.Dispose();
m_ReconnectTimer = null;
}
// since a reconnect will definately occur we can stop the check timers for now until reconnect succeeds (where they are recreated)
_KillTimers();
LoggerQueue.Warn(">>>>>> REDIS STARTING RECONNECT TIMER >>>>>>");
m_ReconnectTimer = new Timer(_Connect, false, reconnectMilliseconds.GetValueOrDefault(m_ReconnectTime), Timeout.Infinite);
}
}
catch (Exception ex)
{
LoggerQueue.Error("Error during _ReconnectTimer", ex);
}
}
private void _CheckSubscriptions(object state)
{
if (Monitor.TryEnter(m_ConnectionLocker, TimeSpan.FromSeconds(1)) == false)
{
return;
}
try
{
DateTime now = DateTime.UtcNow;
foreach (RedisSubscription subscription in m_Subscriptions.Values)
{
if ((now - subscription.LastUsed) > TimeSpan.FromSeconds(subscription.MaxNoReceiveSeconds))
{
try
{
EndPoint endpoint = m_Redis.GetSubscriber().IdentifyEndpoint(subscription.Channel);
EndPoint subscribedEndpoint = m_Redis.GetSubscriber().SubscribedEndpoint(subscription.Channel);
LoggerQueue.Warn(string.Format(">>>>>> REDIS Channel '{0}' has not been used for longer than {1}s, IsConnected: {2}, IsConnectedChannel: {3}, EndPoint: {4}, SubscribedEndPoint: {5}, reconnecting...", subscription.Channel, subscription.MaxNoReceiveSeconds, m_Redis.GetSubscriber().IsConnected(), m_Redis.GetSubscriber().IsConnected(subscription.Channel), endpoint != null ? endpoint.ToString() : "null", subscribedEndpoint != null ? subscribedEndpoint.ToString() : "null"));
}
catch (Exception ex)
{
LoggerQueue.Error(string.Format(">>>>>> REDIS Error logging out details of Channel '{0}' reconnect", subscription.Channel), ex);
}
_ReconnectTimer(null);
return;
}
}
}
catch (Exception ex)
{
LoggerQueue.Error(">>>>>> REDIS Exception ERROR during _CheckSubscriptions", ex);
}
finally
{
Monitor.Exit(m_ConnectionLocker);
}
}
private void _CheckWrite(object state)
{
if (Monitor.TryEnter(m_ConnectionLocker, TimeSpan.FromSeconds(1)) == false)
{
return;
}
try
{
this.Database.HashSet(Environment.MachineName + "SmartoddsWriteCheck", m_CheckWriteGuid.ToString(), DateTime.UtcNow.Ticks);
}
catch (RedisConnectionNotReadyException)
{
LoggerQueue.Warn(">>>>>> REDIS RedisConnectionNotReadyException ERROR DURING _CheckWrite");
}
catch (RedisServerException ex)
{
LoggerQueue.Warn(">>>>>> REDIS RedisServerException ERROR DURING _CheckWrite, reconnecting... - " + ex.Message);
_ReconnectTimer(null);
}
catch (RedisConnectionException ex)
{
LoggerQueue.Warn(">>>>>> REDIS RedisConnectionException ERROR DURING _CheckWrite, reconnecting... - " + ex.Message);
_ReconnectTimer(null);
}
catch (TimeoutException ex)
{
LoggerQueue.Warn(">>>>>> REDIS TimeoutException ERROR DURING _CheckWrite - " + ex.Message);
}
catch (Exception ex)
{
LoggerQueue.Error(">>>>>> REDIS Exception ERROR during _CheckWrite", ex);
}
finally
{
Monitor.Exit(m_ConnectionLocker);
}
}
private void _ConnectionFailure(object sender, ConnectionFailedEventArgs e)
{
LoggerQueue.Warn(string.Format(">>>>>> REDIS CONNECTION FAILURE, {0}, {1}, {2} >>>>>>", e.ConnectionType, e.EndPoint.ToString(), e.FailureType));
}
private void _ConnectionRestored(object sender, ConnectionFailedEventArgs e)
{
LoggerQueue.Warn(string.Format(">>>>>> REDIS CONNECTION RESTORED, {0}, {1}, {2} >>>>>>", e.ConnectionType, e.EndPoint.ToString(), e.FailureType));
}
private void _SubscriptionHandler(string channel, RedisValue value)
{
// get handler lookup
RedisSubscription subscription = null;
if (m_Subscriptions.TryGetValue(channel, out subscription) == false || subscription == null)
{
return;
}
// update last used
subscription.LastUsed = DateTime.UtcNow;
// call handler
subscription.Handler(channel, value);
}
public Int64 Publish(string channel, RedisValue message)
{
try
{
return _Redis.GetSubscriber().Publish(channel, message);
}
catch (RedisConnectionNotReadyException)
{
LoggerQueue.Error("REDIS RedisConnectionNotReadyException ERROR DURING Publish");
throw;
}
catch (RedisServerException ex)
{
LoggerQueue.Error("REDIS RedisServerException ERROR DURING Publish - " + ex.Message);
throw;
}
catch (RedisConnectionException ex)
{
LoggerQueue.Error("REDIS RedisConnectionException ERROR DURING Publish - " + ex.Message);
throw;
}
catch (TimeoutException ex)
{
LoggerQueue.Error("REDIS TimeoutException ERROR DURING Publish - " + ex.Message);
throw;
}
catch (Exception ex)
{
LoggerQueue.Error("REDIS Exception ERROR DURING Publish", ex);
throw;
}
}
public bool LockTake(RedisKey key, RedisValue value, TimeSpan expiry)
{
return _Execute(() => this.Database.LockTake(key, value, expiry));
}
public bool LockExtend(RedisKey key, RedisValue value, TimeSpan extension)
{
return _Execute(() => this.Database.LockExtend(key, value, extension));
}
public bool LockRelease(RedisKey key, RedisValue value)
{
return _Execute(() => this.Database.LockRelease(key, value));
}
private void _Execute(Action action)
{
try
{
action.Invoke();
}
catch (RedisServerException ex)
{
LoggerQueue.Error("REDIS RedisServerException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (RedisConnectionException ex)
{
LoggerQueue.Error("REDIS RedisConnectionException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (TimeoutException ex)
{
LoggerQueue.Error("REDIS TimeoutException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (Exception ex)
{
LoggerQueue.Error("REDIS Exception ERROR DURING _Execute", ex);
throw;
}
}
private TResult _Execute<TResult>(Func<TResult> function)
{
try
{
return function.Invoke();
}
catch (RedisServerException ex)
{
LoggerQueue.Error("REDIS RedisServerException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (RedisConnectionException ex)
{
LoggerQueue.Error("REDIS RedisConnectionException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (TimeoutException ex)
{
LoggerQueue.Error("REDIS TimeoutException ERROR DURING _Execute - " + ex.Message);
throw;
}
catch (Exception ex)
{
LoggerQueue.Error("REDIS ERROR DURING _Execute", ex);
throw;
}
}
public string[] GetAllKeys(string pattern)
{
if (m_Sentinel != null)
{
return _GetAnyRedisSlaveFromSentinel().Keys(m_DatabaseId, pattern).Select(k => (string)k).ToArray();
}
else
{
return _Redis.GetServer(_Redis.GetEndPoints().First()).Keys(m_DatabaseId, pattern).Select(k => (string)k).ToArray();
}
}
private void _KillSentinelClient()
{
try
{
if (m_Sentinel != null)
{
LoggerQueue.Debug(">>>>>> KILLING SENTINEL CONNECTION >>>>>>");
ConnectionMultiplexer sentinel = m_Sentinel;
m_Sentinel = null;
sentinel.Close(false);
sentinel.Dispose();
}
}
catch (Exception ex)
{
LoggerQueue.Error(">>>>>> Error during _KillSentinelClient", ex);
}
}
private void _KillRedisClient()
{
try
{
if (m_Redis != null)
{
Stopwatch sw = Stopwatch.StartNew();
LoggerQueue.Debug(">>>>>> KILLING REDIS CONNECTION >>>>>>");
ConnectionMultiplexer redis = m_Redis;
m_Redis = null;
if (this.IsSentinel == true)
{
redis.ConnectionFailed -= _ConnectionFailure;
redis.ConnectionRestored -= _ConnectionRestored;
}
redis.Close(false);
redis.Dispose();
LoggerQueue.Debug(">>>>>> KILLED REDIS CONNECTION >>>>>> " + sw.Elapsed);
}
}
catch (Exception ex)
{
LoggerQueue.Error(">>>>>> Error during _KillRedisClient", ex);
}
}
private void _KillClients()
{
lock (m_ConnectionLocker)
{
_KillSentinelClient();
_KillRedisClient();
}
}
private void _KillTimers()
{
if (m_CheckSubscriptionsTimer != null)
{
m_CheckSubscriptionsTimer.Dispose();
m_CheckSubscriptionsTimer = null;
}
if (m_CheckWriteTimer != null)
{
m_CheckWriteTimer.Dispose();
m_CheckWriteTimer = null;
}
}
public void Dispose()
{
_KillClients();
_KillTimers();
}
}
}
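Purely as an illustration of how the wrapper above is intended to be driven (RedisEnvironmentElement is the custom configuration type from the snippet, and LoadRedisEnvironmentFromConfig is a hypothetical helper), usage looks roughly like this:

// Hypothetical usage of the RedisClient wrapper above.
RedisEnvironmentElement environment = LoadRedisEnvironmentFromConfig(); // stand-in for custom config loading

using (var client = new RedisClient(environment, databaseId: 0))
{
    // Register pub/sub handlers before connecting so they are subscribed as part of Connect().
    client.RegisterSubscription("my-channel", (channel, value) => Console.WriteLine(value), maxNoReceiveSeconds: 60);

    client.Connect();

    // Normal data access goes through the Database property; publishes go through Publish().
    client.Database.StringSet("key", "value");
    client.Publish("my-channel", "hello");
}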
I just asked a question of my own and found a similar question to yours and mine, which I believe answers the question of how our code (the client) knows which server is the new master when the current master goes down:
How to tell a Client where the new Redis master is using Sentinel
Apparently you just have to subscribe and listen to events from the Sentinels. Makes sense... I just figured there was a more streamlined way.
I read something about Twemproxy for Linux, which acts as a proxy and probably does this for you? But I was on Redis for Windows and was trying to find a Windows option. We might just move to Linux if that's the approved way to do it.
Today (I just configured StackExchange.Redis 2.1.58 to use sentinel), it's enough to specify a sentinel endpoint and serviceName in the redis connection string or configuration. All the rest has been encapsulated as part of this commit. So you just point StackExchange.Redis at your sentinel nodes and the ConnectionMultiplexer gives you an up-and-running IDatabase each time you call GetDatabase().
var conn = ConnectionMultiplexer.Connect("sentinel:26379,serviceName=mymaster");
var db = conn.GetDatabase();
db.StringSet("key", "value");
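The same thing expressed through ConfigurationOptions, in case you prefer building the configuration in code (the sentinel host and "mymaster" service name are placeholders):

var options = new ConfigurationOptions
{
    ServiceName = "mymaster",          // the sentinel master name
};
options.EndPoints.Add("sentinel", 26379); // one or more sentinel endpoints

ConnectionMultiplexer conn = ConnectionMultiplexer.Connect(options);
IDatabase db = conn.GetDatabase();
db.StringSet("key", "value");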

Sending Emails Asynchronously Problems

I configured SMTP mail for my site and it works when I send one single email, but I get the following errors when I try to send to more people. In addition, I'm using the SendAsync method.
When I send all emails using a loop:
Syntax error, command unrecognized. The server response was:
at System.Net.Mail.SmtpConnection.ConnectAndHandshakeAsyncResult.End(IAsyncResult result)
at System.Net.Mail.SmtpClient.ConnectCallback(IAsyncResult result)
When I add all emails to BCC:
Service not available, closing transmission channel.
The server response was: Too many bad commands, closing transmission channel
at System.Net.Mail.SendMailAsyncResult.End(IAsyncResult result)
at System.Net.Mail.SmtpTransport.EndSendMail(IAsyncResult result)
at System.Net.Mail.SmtpClient.SendMailCallback(IAsyncResult result
What is the solution for this?
I have a similar situation whereby I am sending multiple emails and not waiting for one to finish before sending another.
What I did was create a new SmtpClient for every mail to be sent and send asynchronously, like this:
private void SendMailAsync(string ids, MailMessage mail)
{
    SmtpClient client = null;
    try
    {
        client = new SmtpClient(ConfigurationManager.AppSettings["MailServer"], Convert.ToInt32(ConfigurationManager.AppSettings["MailPort"]));
        string userState = "MailQueueID_" + ids;
        client.SendCompleted += (sender, e) =>
        {
            // Get the unique identifier for this asynchronous operation
            String token = (string)e.UserState;
            DateTime now = DateTime.Now;
            try
            {
                if (e.Cancelled)
                {
                    LogError(new Exception(token + " - Callback cancelled"));
                    return;
                }
                if (e.Error != null)
                {
                    LogError(e.Error);
                }
                else
                {
                    logWriter.WriteToLog(this.jobSite + " - " + token + " (Email sent)");
                    try
                    {
                        int updated = UpdateMailQueue(token, now);
                        if (updated > 0)
                        {
                            // Update your log
                        }
                    }
                    catch (SqlException sqlEx)
                    {
                        LogError(sqlEx);
                    }
                }
            }
            catch (ArgumentNullException argument)
            {
                LogError(argument);
            }
            finally
            {
                client.SendCompleted -= client_SendCompleted;
                client.Dispose();
                mail.Dispose();
                // Delete the attachment if any, attached to this email
                DeleteZipFile(token);
                counter--;
            }
        };
        client.SendAsync(mail, userState);
        counter++;
    }
    catch (ArgumentOutOfRangeException argOutOfRange)
    {
        LogError(argOutOfRange);
    }
    catch (ConfigurationErrorsException configErrors)
    {
        LogError(configErrors);
    }
    catch (ArgumentNullException argNull)
    {
        LogError(argNull);
    }
    catch (ObjectDisposedException objDisposed)
    {
        LogError(objDisposed);
    }
    catch (InvalidOperationException invalidOperation)
    {
        LogError(invalidOperation);
    }
    catch (SmtpFailedRecipientsException failedRecipients)
    {
        LogError(failedRecipients);
    }
    catch (SmtpFailedRecipientException failedRecipient)
    {
        LogError(failedRecipient);
    }
    catch (SmtpException smtp)
    {
        LogError(smtp);
    }
}
The error was caught in the SendCompleted event handler.
Of course, the error appeared for only one email, while the other 7 went through to different mailboxes both before and after it in the same run. What caused the error, I still don't know.
When I ran my program again, it picked up the mail that was not sent and sent it off successfully.
Hope this helps others, because I realise the question was posted more than 15 months ago.
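If you're on .NET 4.5 or later, another option (not from the answer above) is SmtpClient.SendMailAsync combined with a SemaphoreSlim to cap how many messages are in flight at once; servers replying with "Too many bad commands" often just want fewer simultaneous connections. A rough sketch, with the host and port as placeholders:

using System.Collections.Generic;
using System.Linq;
using System.Net.Mail;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledMailer
{
    // Allow at most 3 messages in flight at once (tune for your SMTP server).
    private static readonly SemaphoreSlim _throttle = new SemaphoreSlim(3);

    public static Task SendAllAsync(IEnumerable<MailMessage> messages)
    {
        return Task.WhenAll(messages.Select(SendOneAsync));
    }

    private static async Task SendOneAsync(MailMessage mail)
    {
        await _throttle.WaitAsync();
        try
        {
            // One client per message, as in the answer above; host/port are placeholders.
            using (var client = new SmtpClient("mail.example.com", 25))
            {
                await client.SendMailAsync(mail);
            }
        }
        finally
        {
            _throttle.Release();
        }
    }
}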
