EasyNetQ - How to retry failed messages & persist RetryCount in message body/header? - c#

I am using EasyNetQ and need to retry failed messages on the original queue. The problem is: even though I successfully increment the TriedCount variable (in the body of every msg), when EasyNetQ publishes the message to the default error queue after an exception, the updated TriedCount is not in the msg! Presumably because it just dumps the original message to the error queue without the consumer's changes.
The updated TriedCount works for in-process republishes, but not when republished through EasyNetQ Hosepipe or EasyNetQ Management Client. The text files Hosepipe generates do not have the TriedCount updated.
public interface IMsgHandler<T> where T : class, IMessageType
{
    Task InvokeMsgCallbackFunc(T msg);
    Func<T, Task> MsgCallbackFunc { get; set; }
    bool IsTryValid(T msg, string refSubscriptionId); // Calls callback only
                                                      // if Retry is valid
}

public interface IMessageType
{
    int MsgTypeId { get; }
    Dictionary<string, TryInfo> MsgTryInfo { get; set; }
}

public class TryInfo
{
    public int TriedCount { get; set; }
    /* Other information regarding msg attempt */
}
public bool SubscribeAsync<T>(Func<T, Task> eventHandler, string subscriptionId)
{
    IMsgHandler<T> currMsgHandler = new MsgHandler<T>(eventHandler, subscriptionId);
    // Using the msgHandler allows adding a mediator between EasyNetQ and the actual callback function.
    // The mediator can pass the retried msg through or choose to ignore it.
    return _defaultBus.SubscribeAsync<T>(subscriptionId, currMsgHandler.InvokeMsgCallbackFunc).Queue != null;
}
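For context, here is roughly what the MsgHandler<T> mediator looks like (a minimal sketch; only the interfaces above are from my real code, and the retry cap and field names are illustrative):

public class MsgHandler<T> : IMsgHandler<T> where T : class, IMessageType
{
    private const int MaxTriedCount = 3; // illustrative cap, not from the real code
    private readonly string _subscriptionId;

    public Func<T, Task> MsgCallbackFunc { get; set; }

    public MsgHandler(Func<T, Task> msgCallbackFunc, string subscriptionId)
    {
        MsgCallbackFunc = msgCallbackFunc;
        _subscriptionId = subscriptionId;
    }

    public Task InvokeMsgCallbackFunc(T msg)
    {
        // Only invoke the real callback if this subscription still has tries left
        if (IsTryValid(msg, _subscriptionId))
            return MsgCallbackFunc(msg);
        return Task.CompletedTask; // choose to ignore the retried msg
    }

    public bool IsTryValid(T msg, string refSubscriptionId)
    {
        TryInfo info;
        if (msg.MsgTryInfo == null || !msg.MsgTryInfo.TryGetValue(refSubscriptionId, out info))
            return true; // first attempt for this consumer
        return info.TriedCount < MaxTriedCount;
    }
}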
I have also tried republishing myself through the Management API (rough code):
var client = new ManagementClient("http://localhost", "guest", "guest");
var vhost = client.GetVhostAsync("/").Result;
var errQueue = client.GetQueueAsync("EasyNetQ_Default_Error_Queue", vhost).Result;
var crit = new GetMessagesCriteria(long.MaxValue, Ackmodes.ack_requeue_true);
var errMsgs = client.GetMessagesFromQueueAsync(errQueue, crit).Result;
foreach (var errMsg in errMsgs)
{
    var pubRes = client.PublishAsync(client.GetExchangeAsync(errMsg.Exchange, vhost).Result,
        new PublishInfo(errMsg.RoutingKey, errMsg.Payload)).Result;
}
This works but only publishes to the error queue again, not on the original queue. Also, I don't know how to add/update the retry information in the body of the message at this stage.
I have also explored this library to add headers to the message, but if the count in the body is not being updated, I don't see how or why a count in the header would be.
Is there any way to persist the TriedCount without resorting to the Advanced bus (in which case I might use the RabbitMQ .Net client itself)?

Just in case it helps someone else: I eventually implemented my own IErrorMessageSerializer (as opposed to implementing the whole IConsumerErrorStrategy, which seemed like overkill). The reason I am adding the retry info to the body (instead of the header) is that EasyNetQ doesn't handle complex types in the header (not out of the box, anyway). So, using a dictionary gives more control for different consumers. I register the custom serializer at the time of creating the bus like so:
_defaultBus = RabbitHutch.CreateBus(currentConnString,
    serviceRegister => serviceRegister.Register<IErrorMessageSerializer>(
        serviceProvider => new RetryEnabledErrorMessageSerializer<IMessageType>(givenSubscriptionId)));
And just implemented the Serialize method like so:
public class RetryEnabledErrorMessageSerializer<T> : IErrorMessageSerializer where T : class, IMessageType
{
    public string Serialize(byte[] messageBody)
    {
        string stringifiedMsgBody = Encoding.UTF8.GetString(messageBody);
        var objectifiedMsgBody = JObject.Parse(stringifiedMsgBody);
        // Add/update RetryInformation into objectifiedMsgBody here
        // I have a dictionary that saves <key: consumerId, val: TryInfoObj>
        return JsonConvert.SerializeObject(objectifiedMsgBody);
    }
}
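The elided update inside Serialize looks roughly like this (a sketch only: the property names come from the IMessageType/TryInfo types above, the JSON keys are assumed to match the C# property names, and _subscriptionId is assumed to be a field saved from the givenSubscriptionId constructor argument):

// Increment this consumer's TriedCount in the message body before it is
// written to the error queue, so Hosepipe/Management republishes carry it.
var tryInfo = objectifiedMsgBody["MsgTryInfo"] as JObject ?? new JObject();
var consumerInfo = tryInfo[_subscriptionId] as JObject ?? new JObject();
consumerInfo["TriedCount"] = ((int?)consumerInfo["TriedCount"] ?? 0) + 1;
tryInfo[_subscriptionId] = consumerInfo;
objectifiedMsgBody["MsgTryInfo"] = tryInfo;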
The actual retrying is done periodically by a simple console app/Windows service via the EasyNetQ Management API:
var client = new ManagementClient(AppConfig.BaseAddress, AppConfig.RabbitUsername, AppConfig.RabbitPassword);
var vhost = client.GetVhostAsync("/").Result;
var aliveRes = client.IsAliveAsync(vhost).Result;
var errQueue = client.GetQueueAsync(Constants.EasyNetQErrorQueueName, vhost).Result;
var crit = new GetMessagesCriteria(long.MaxValue, Ackmodes.ack_requeue_false);
var errMsgs = client.GetMessagesFromQueueAsync(errQueue, crit).Result;
foreach (var errMsg in errMsgs)
{
    var innerMsg = JsonConvert.DeserializeObject<Error>(errMsg.Payload);
    var pubInfo = new PublishInfo(innerMsg.RoutingKey, innerMsg.Message);
    pubInfo.Properties.Add("type", innerMsg.BasicProperties.Type);
    pubInfo.Properties.Add("correlation_id", innerMsg.BasicProperties.CorrelationId);
    pubInfo.Properties.Add("delivery_mode", innerMsg.BasicProperties.DeliveryMode);
    var pubRes = client.PublishAsync(client.GetExchangeAsync(innerMsg.Exchange, vhost).Result,
        pubInfo).Result;
}
Whether retry is enabled or not is known by my consumer itself, giving it more control so it can choose to handle the retried msg or just ignore it. Once ignored, the msg will obviously not be tried again; that's how EasyNetQ works.

Related

Exposing request method in generated Service Reference

In a generated Service Reference (imported from a WSDL), I have the following methods in the Client class, in the Reference.cs:
public Namespace.Service.SalesOrderDetail newService(Namespace.Service.Contact orderContact, Namespace.Service.Contact installationContact, string customerReference, Namespace.Service.ServiceDetails[] serviceDetailsList) {
    Namespace.Service.newServiceRequest inValue = new Namespace.Service.newServiceRequest();
    inValue.orderContact = orderContact;
    inValue.installationContact = installationContact;
    inValue.customerReference = customerReference;
    inValue.serviceDetailsList = serviceDetailsList;
    Namespace.Service.newServiceResponse retVal = ((Namespace.Service.ServiceRequestPortType)(this)).newService(inValue);
    return retVal.salesOrder;
}

[System.ComponentModel.EditorBrowsableAttribute(System.ComponentModel.EditorBrowsableState.Advanced)]
System.Threading.Tasks.Task<Namespace.Service.newServiceResponse> Namespace.Service.ServiceRequestPortType.newServiceAsync(Namespace.Service.newServiceRequest request) {
    return base.Channel.newServiceAsync(request);
}

public System.Threading.Tasks.Task<Namespace.Service.newServiceResponse> newServiceAsync(Namespace.Service.Contact orderContact, Namespace.Service.Contact installationContact, string customerReference, Namespace.Service.ServiceDetails[] serviceDetailsList) {
    Namespace.Service.newServiceRequest inValue = new Namespace.Service.newServiceRequest();
    inValue.orderContact = orderContact;
    inValue.installationContact = installationContact;
    inValue.customerReference = customerReference;
    inValue.serviceDetailsList = serviceDetailsList;
    return ((Namespace.Service.ServiceRequestPortType)(this)).newServiceAsync(inValue);
}
I've seen Python code that uses the same WSDL, and it is able to access the method as response = client.newService(request).
I'd also like to access the method in that fashion, albeit var task = client.newService(request); Task.WaitAll(task); var response = task.Result;, but I can't seem to find the right combo of creating the service reference, without being forced to have expanded input parameters to the service.
Is there a magic combo for Service Reference creation that will allow me to just pass the request as a single object?
I'm not fussed on keeping the async functionality.
The client of a service implements the interface that represents the service. It just so happens, as shown in this example, that it doesn't necessarily make all of those implemented methods public.
So, to get around this, if I cast the client object to the service interface, I get to call the service as intended, regardless of what the client has made public.
var client = new ServiceClient();
var service = (Service)client;
var request = new newServiceRequest() { ... };
var response = service.newService(request);
client.Close();
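The same cast also exposes the async version, since the generated client implements newServiceAsync explicitly on the interface (a sketch, using the same shorthand names as above):

var client = new ServiceClient();
var service = (Service)client;
var request = new newServiceRequest() { ... };
var task = service.newServiceAsync(request); // explicit interface implementation
Task.WaitAll(task);
var response = task.Result.salesOrder;
client.Close();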

log4net implementation detail - custom appender

I've implemented a custom log4net appender that writes to an HTTP service... it works well, but I am suffering some premature optimization in my head. Specifically, is there a better way to do it? I guess I can make sure that only critical classes have that particular appender, but it feels like there could be a lot of appenders, and a liability even with conservative logging options.
Does anyone have experience that they would like to share? I've looked at http://geekswithblogs.net/michaelstephenson/archive/2014/01/02/155044.aspx, which is essentially what I am doing (see code). How well does something like this scale? I like the factory for the singleton... what about implementing a concurrent queue to buffer the writes?
Hopefully I won't get spanked too hard by the admin for asking a (potentially opinion-based) best-practice question.
(adding code from article for clarification)
public class ServiceBusAppender : AppenderSkeleton
{
    public string ConnectionStringKey { get; set; }
    public string MessagingEntity { get; set; }
    public string ApplicationName { get; set; }
    public string EventType { get; set; }
    public bool Synchronous { get; set; }
    public string CorrelationIdPropertyName { get; set; }

    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        var myLogEvent = new AzureLoggingEvent(loggingEvent);
        myLogEvent.ApplicationName = ApplicationName;
        myLogEvent.EventType = EventType;
        myLogEvent.CorrelationId = loggingEvent.LookupProperty(CorrelationIdPropertyName) as string;
        if (Synchronous)
            AppendInternal(myLogEvent, 0);
        else
        {
            Task.Run(() => AppendInternal(myLogEvent, 0));
        }
    }
    protected void AppendInternal(AzureLoggingEvent myLogEvent, int attemptNo)
    {
        try
        {
            // Convert event to JSON
            var stream = new MemoryStream();
            var json = Newtonsoft.Json.JsonConvert.SerializeObject(myLogEvent);
            var writer = new StreamWriter(stream);
            writer.Write(json);
            writer.Flush();
            stream.Seek(0, SeekOrigin.Begin);
            // Set up the service bus message
            var message = new BrokeredMessage(stream, true);
            message.ContentType = "application/json";
            message.Label = myLogEvent.MessageType;
            message.Properties.Add(new KeyValuePair<string, object>("ApplicationName", myLogEvent.ApplicationName));
            message.Properties.Add(new KeyValuePair<string, object>("UserName", myLogEvent.UserName));
            message.Properties.Add(new KeyValuePair<string, object>("MachineName", myLogEvent.MachineName));
            message.Properties.Add(new KeyValuePair<string, object>("MessageType", myLogEvent.MessageType));
            message.Properties.Add(new KeyValuePair<string, object>("Level", myLogEvent.Level));
            message.Properties.Add(new KeyValuePair<string, object>("EventType", myLogEvent.EventType));
            // Set up the service bus connection
            var connection = ConfigurationManager.ConnectionStrings[ConnectionStringKey];
            if (connection == null || string.IsNullOrEmpty(connection.ConnectionString))
            {
                ErrorHandler.Error("Can't publish the error, the connection string does not exist");
                return;
            }
            var factory = MessagingFactoryManager.Instance.GetMessagingFactory(connection.ConnectionString);
            var sender = factory.CreateMessageSender(MessagingEntity);
            // Publish
            sender.Send(message);
        }
        catch (Exception ex)
        {
            if (ex.Message.Contains("The operation cannot be performed because the entity has been closed or aborted"))
            {
                if (attemptNo < 3)
                    // Fixed: the original passed attemptNo++ (post-increment), which hands
                    // the unchanged value to the recursive call and so never stops retrying.
                    AppendInternal(myLogEvent, attemptNo + 1);
                else
                    ErrorHandler.Error("Error occurred while publishing error", ex);
            }
            else
                ErrorHandler.Error("Error occurred while publishing error", ex);
        }
    }
    protected override void Append(log4net.Core.LoggingEvent[] loggingEvents)
    {
        foreach (var loggingEvent in loggingEvents)
        {
            Append(loggingEvent);
        }
    }
}
Thx,
Chris
The cure for premature optimisation is to test and measure, then test and measure again. Write an integration test that logs to a thousand loggers, and see how that goes.
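A rough harness for that measurement might look like this (a sketch; the logger names and iteration count are arbitrary):

// Time a burst of logging through many distinct loggers against the appender.
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
    var log = log4net.LogManager.GetLogger("LoadTest.Logger" + i);
    log.Info("load test message " + i);
}
sw.Stop();
Console.WriteLine("1000 loggers, one event each: {0} ms", sw.ElapsedMilliseconds);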
If that does show a problem, then rather than implement your own queue, inherit from BufferingAppenderSkeleton instead:
This base class should be used by appenders that need to buffer a number of events before logging them. For example the AdoNetAppender buffers events and then submits the entire contents of the buffer to the underlying database in one go.
Subclasses should override the SendBuffer method to deliver the buffered events.
The BufferingAppenderSkeleton maintains a fixed size cyclic buffer of events. The size of the buffer is set using the BufferSize property.
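A minimal sketch of that subclassing approach (SendBuffer and BufferSize are the real log4net extension points; the delivery body is elided and would reuse the serialize-and-send logic from the question's appender):

public class BufferedServiceBusAppender : log4net.Appender.BufferingAppenderSkeleton
{
    // BufferSize is inherited; set it in config, e.g. <bufferSize value="100" />
    protected override void SendBuffer(log4net.Core.LoggingEvent[] events)
    {
        // One service bus round-trip per buffer flush instead of per event
        foreach (var loggingEvent in events)
        {
            // ... same per-event serialize-and-send as AppendInternal above,
            // or batch all events into a single message per flush ...
        }
    }
}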
(As an aside, what is up with the log4net documentation, there seem to be more '½ï¿' characters every time I look at it?)
I see that your code involves JSON serialization. If you're looking for log4net JSON, why redo what has been done already? See log4net.ext.json. I'm the developer. The wiki covers the first steps on how to get it up and running. It is used in place of a layout so it can be plugged into any log4net appender that takes a layout.
As part of my project I have also created a load-testing GUI for log4net. It is not released, but it should compile easily from source. You can use it to discover how different configurations scale in your conditions.
Finally, I'd advise giving localhost UDP delivery a shot if performance is a priority. Projects like nxlog or logstash can swallow that easily. Again, why write new code?
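For completeness, a UdpAppender can be set up programmatically along these lines (a sketch; the port is arbitrary and must match whatever nxlog/logstash listens on):

var udp = new log4net.Appender.UdpAppender
{
    RemoteAddress = System.Net.IPAddress.Loopback,
    RemotePort = 8514, // arbitrary port for the local collector
    Layout = new log4net.Layout.XmlLayoutSchemaLog4j()
};
udp.ActivateOptions();
log4net.Config.BasicConfigurator.Configure(udp);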
Let me know if you need some clarification. Kind regards and good luck, Rob

How to ensure message reception order in MassTransit

I have a saga that has three states: Initial, ReceivingRows, and Completed -
public static State Initial { get; set; }
public static State ReceivingRows { get; set; }
public static State Completed { get; set; }
It transitions from Initial to ReceivingRows when it gets a BofMessage (where Bof = Beginning of file). After the BofMessage, it receives a large number of RowMessages where each describes a row in a flat file. Once all RowMessages are sent, an EofMessage is sent and the state changes to Completed. Observe -
static void DefineSagaBehavior()
{
    Initially(When(ReceivedBof)
        .Then((saga, message) => saga.BeginFile(message))
        .TransitionTo(ReceivingRows));

    During(ReceivingRows, When(ReceivedRow)
        .Then((saga, message) => saga.AddRow(message)));

    During(ReceivingRows, When(ReceivedRowError)
        .Then((saga, message) => saga.RowError(message)));

    During(ReceivingRows, When(ReceivedEof)
        .Then((saga, message) => saga.EndFile(message))
        .TransitionTo(Completed));
}
This works, except sometimes several RowMessages are received before the BofMessage! This happens regardless of the order in which I sent them. It means those early messages are received outside the expected state and ultimately counted as errors, causing them to be missing from the database or file that I finally write out.
As a temporary fix, I add a little sleep timer hack in this method that does all the publishing –
public static void Publish(
    [NotNull] IServiceBus serviceBus,
    [NotNull] string publisherName,
    Guid correlationId,
    [NotNull] Tuple<string, string> inputFileDescriptor,
    [NotNull] string outputFileName)
{
    // attempt to load offsets
    var offsetsResult = OffsetParser.Parse(inputFileDescriptor.Item1);
    if (offsetsResult.Result != ParseOffsetsResult.Success)
    {
        // publish an offsets invalid message
        serviceBus.Publish<TErrorMessage>(CombGuid.Generate(), publisherName, inputFileDescriptor.Item2);
        return;
    }

    // publish beginning of file
    var fullInputFilePath = Path.GetFullPath(inputFileDescriptor.Item2);
    serviceBus.Publish<TBofMessage>(correlationId, publisherName, fullInputFilePath);

    // HACK: make sure bof message happens before row messages, or else some row messages won't be received
    Thread.Sleep(5000);

    // publish rows from feed
    var feedResult = FeedParser.Parse(inputFileDescriptor.Item2, offsetsResult.Offsets);
    foreach (var row in feedResult)
    {
        // publish row message, unaligned if applicable
        if (row.Result != ParseRowResult.Success)
            serviceBus.Publish<TRowErrorMessage>(correlationId, publisherName, row.Fields);
        else
            serviceBus.Publish<TRowMessage>(correlationId, publisherName, row.Fields);
    }

    // publish end of file
    serviceBus.Publish<TEofMessage>(correlationId, publisherName, outputFileName);
}
It's a 5-second sleep timer, and quite an ugly hack. Can anyone tell me why I'm not getting the messages in the order I send them? And can I ensure these messages get delivered in the right order, given that they are unordered by default?
Thank you!
Please note this is cross-posted from http://groups.google.com/group/masstransit-discuss/browse_thread/thread/7bd9518a690db4bb for expedience.
You cannot ensure messages arrive in any particular order. You can get close in MT by ensuring there's only one concurrent consumer on the consumer side, but I still wouldn't depend on this behaviour (http://docs.masstransit-project.com/en/latest/overview/keyideas.html#handlers). This would effectively make your consumer single-threaded.
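If you do go the single-concurrent-consumer route, in MassTransit 2.x (the IServiceBus-era API the question uses) the configuration would look something like this (a sketch; the queue URI is a placeholder, and verify the configurator method against your version):

var bus = ServiceBusFactory.New(sbc =>
{
    sbc.UseRabbitMq();
    sbc.ReceiveFrom("rabbitmq://localhost/file_import_saga"); // placeholder queue
    // One message at a time: makes in-order processing likely, still not guaranteed
    sbc.SetConcurrentConsumerLimit(1);
});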

Is this a good/preferable pattern to Azure Queue construction for a T4 template?

I'm building a T4 template that will help people construct Azure queues in a consistent and simple manner. I'd like to make this self-documenting, and somewhat consistent.
First, I put the queue name at the top of the file; queue names have to be in lowercase, so I added ToLower().
The public constructor uses the built-in StorageClient APIs to access the connection strings. I've seen many different approaches to this, and would like to get something that works in almost all situations. (ideas? do share)
I dislike the unneeded HTTP requests to check if the queues have been created, so I made it a static bool. I didn't implement a Lock(monitorObject) since I don't think one is needed.
Instead of using a string and parsing it with commas (like most MSDN documentation) I'm serializing the object when passing it into the queue.
For further optimization I'm using a JSON serializer extension method to get the most out of the 8k limit. Not sure if an encoding will help optimize this any more
Added retry logic to handle certain scenarios that occur with the queue (see html link)
Q: Is "DataContext" an appropriate name for this class?
Q: Is it a poor practice to name the Queue Action Name in the manner I have done?
What additional changes do you think I should make?
public class AgentQueueDataContext
{
    // Queue names must always be in lowercase
    // Is named like a const, but isn't one because .ToLower won't compile...
    static string AGENT_QUEUE_ACTION_NAME = "AgentQueueActions".ToLower();
    static bool QueuesWereCreated { get; set; }

    DataModel.SecretDataSource secDataSource = null;
    CloudStorageAccount cloudStorageAccount = null;
    CloudQueueClient cloudQueueClient = null;
    CloudQueue queueAgentQueueActions = null;

    static AgentQueueDataContext()
    {
        QueuesWereCreated = false;
    }

    public AgentQueueDataContext() : this(false)
    {
    }

    public AgentQueueDataContext(bool CreateQueues)
    {
        // This pattern of setting up queues is from:
        // http://convective.wordpress.com/2009/11/15/queues-azure-storage-client-v1-0/
        this.cloudStorageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        this.cloudQueueClient = cloudStorageAccount.CreateCloudQueueClient();
        this.secDataSource = new DataModel.SecretDataSource();
        queueAgentQueueActions = cloudQueueClient.GetQueueReference(AGENT_QUEUE_ACTION_NAME);
        if (QueuesWereCreated == false || CreateQueues)
        {
            queueAgentQueueActions.CreateIfNotExist();
            QueuesWereCreated = true;
        }
    }
    // This is the method that will be spawned using ThreadStart
    public void CheckQueue()
    {
        while (true)
        {
            try
            {
                CloudQueueMessage msg = queueAgentQueueActions.GetMessage();
                bool DoRetryDelayLogic = false;
                if (msg != null)
                {
                    // Deserialize using JSON (allows more data to be stored)
                    AgentQueueEntry actionableMessage = msg.AsString.FromJSONString<AgentQueueEntry>();
                    switch (actionableMessage.ActionType)
                    {
                        case AgentQueueActionEnum.EnrollNew:
                        {
                            // Add to
                            break;
                        }
                        case AgentQueueActionEnum.LinkToSite:
                        {
                            // Link within Agent itself
                            // Link within Site
                            break;
                        }
                        case AgentQueueActionEnum.DisableKey:
                        {
                            // Disable key in site
                            // Disable key in AgentTable (update modification time)
                            break;
                        }
                        default:
                        {
                            break;
                        }
                    }
                    //
                    // Only delete the message if the requested agent has been missing for
                    // at least 10 minutes
                    //
                    if (DoRetryDelayLogic)
                    {
                        // Note: the original compared InsertionTime against UtcNow *plus* a
                        // TimeSpan, which is always true; the stated intent is to skip (retry
                        // later) messages younger than the window, i.e. UtcNow *minus* it.
                        if (msg.InsertionTime != null)
                            if (msg.InsertionTime > DateTime.UtcNow - new TimeSpan(0, 10, 10))
                                continue;
                        // ToDo: Log error: AgentID xxx has not been found in table for xxx minutes.
                        // It is likely the result of the registration host crashing.
                        // Data is still consistent. Deleting queued message.
                    }
                    //
                    // If execution made it to this point, then we are either fully processed, or
                    // there is sufficient reason to discard the message.
                    //
                    try
                    {
                        queueAgentQueueActions.DeleteMessage(msg);
                    }
                    catch (StorageClientException ex)
                    {
                        // As of July 2010, this is the best way to detect this class of exception
                        // Description: http://blog.smarx.com/posts/deleting-windows-azure-queue-messages-handling-exceptions
                        if (ex.ExtendedErrorInformation.ErrorCode == "MessageNotFound")
                        {
                            // pop receipt must be invalid
                            // ignore or log (so we can tune the visibility timeout)
                        }
                        else
                        {
                            // not the error we were expecting
                            throw;
                        }
                    }
                }
                else
                {
                    // allow control to fall to the bottom, where the sleep timer is...
                }
            }
            catch (Exception e)
            {
                // Justification: Thread must not fail.
                // Todo: Log this exception
                // allow control to fall to the bottom, where the sleep timer is...
                // Rationale: not doing so may cause queue thrashing on a specific corrupt entry
            }
            // todo: Thread.Sleep() is bad
            // Replace with something better...
            Thread.Sleep(9000);
        }
    }
}
Q: Is "DataContext" an appropriate name for this class?
In .NET we have a lot of DataContext classes, so in the sense that you want names to appropriately communicate what the class does, I think XyzQueueDataContext properly communicates what the class does - although you can't query from it.
If you want to stay more aligned with accepted pattern languages, Patterns of Enterprise Application Architecture calls any class that encapsulates access to an external system a Gateway, while more specifically you may want to use the term Channel in the language of Enterprise Integration Patterns - that's what I would do.
Q: Is it a poor practice to name the Queue Action Name in the manner I have done?
Well, it certainly tightly couples the queue name to the class. This means that if you later decide that you want to decouple those, you can't.
As a general comment I think this class might benefit from trying to do less. Using the queue is not the same thing as managing it, so instead of having all of that queue management code there, I'd suggest injecting a CloudQueue into the instance. Here's how I implement my AzureChannel constructor:
private readonly CloudQueue queue;

public AzureChannel(CloudQueue queue)
{
    if (queue == null)
    {
        throw new ArgumentNullException("queue");
    }
    this.queue = queue;
}
This better fits the Single Responsibility Principle and you can now implement queue management in its own (reusable) class.
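Wiring it up then lives in a composition root, along these lines (a sketch reusing the v1.0 StorageClient calls already shown in the question; AzureChannel is the class above):

// Queue management happens once, outside the channel
var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
var queueClient = account.CreateCloudQueueClient();
var queue = queueClient.GetQueueReference("agentqueueactions"); // lowercase required
queue.CreateIfNotExist();
var channel = new AzureChannel(queue);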

WCF MessageHeaders in OperationContext.Current

If I use code like this [just below] to add message headers to my OperationContext, will all future outgoing messages contain that data on any new client proxy created during the same "run" of my application?
The objective is to pass a parameter or two to each OperationContract without messing with the signature of the OperationContract, since the parameters being passed will be consistent for all requests for a given run of my client application.
public void DoSomeStuff()
{
    var proxy = new MyServiceClient();
    Guid myToken = Guid.NewGuid();
    MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
    MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
    OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
    proxy.DoOperation(...);
}

public void DoSomeOTHERStuff()
{
    var proxy = new MyServiceClient();
    Guid myToken = Guid.NewGuid();
    MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
    MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
    OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
    proxy.DoOtherOperation(...);
}
In other words, is it safe to refactor the above code like this?
bool isSetup = false;

public void SetupMessageHeader()
{
    if (isSetup) { return; }

    Guid myToken = Guid.NewGuid();
    MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
    MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
    OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
    isSetup = true;
}

public void DoSomeStuff()
{
    var proxy = new MyServiceClient();
    SetupMessageHeader();
    proxy.DoOperation(...);
}

public void DoSomeOTHERStuff()
{
    var proxy = new MyServiceClient();
    SetupMessageHeader();
    proxy.DoOtherOperation(...);
}
Since I don't really understand what's happening there, I don't want to cargo-cult it, just change it, and let it fly because it happens to work; I'd like to hear your thoughts on whether it is OK or not.
I don't think your refactored code adds any value. Have you taken into account that the OperationContext can be null?
I think this will be a safer approach:
using (OperationContextScope contextScope =
    new OperationContextScope(proxy.InnerChannel))
{
    .....
    OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
    proxy.DoOperation(...);
}
OperationContextScope's constructor always replaces the operation context of the current thread; when the scope is disposed, the old context is restored, which prevents problems with other objects on the same thread.
I believe your OperationContext is going to get wiped each time you new up the proxy.
You should plan on adding the custom message headers prior to each call. This is good practice in any case as you should prefer per call services and close the channel after each call.
There are a couple patterns for managing custom headers.
You can create the header as part of the constructor to the proxy.
Alternatively, you can extend the binding with a behavior that automatically adds the custom header prior to making each call. This is a good example: http://weblogs.asp.net/avnerk...
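That behavior approach boils down to an IClientMessageInspector plus an IEndpointBehavior that registers it; a minimal sketch (the header name/namespace are placeholders matching the question's example):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class TokenHeaderInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Runs before every outgoing call on the endpoint
        var untyped = new MessageHeader<Guid>(Guid.NewGuid()).GetUntypedHeader("token", "ns");
        request.Headers.Add(untyped);
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class TokenHeaderBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new TokenHeaderInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Usage: attach once per proxy, then every call carries the header:
// var proxy = new MyServiceClient();
// proxy.Endpoint.Behaviors.Add(new TokenHeaderBehavior());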
