log4net implementation detail - custom appender - c#

I've implemented a custom log4net appender that writes to an HTTP service... it works well, but I am suffering from some premature optimization in my head. Specifically, is there a better way to do it? I guess I can make sure that only critical classes have that particular appender, but it feels like there could end up being a lot of appenders, and a liability even with conservative logging options.
Does anyone have experience that they would like to share? I've looked at http://geekswithblogs.net/michaelstephenson/archive/2014/01/02/155044.aspx which is essentially what I am doing... (see code) How well does something like this scale? I like the factory for the singleton... what about implementing a concurrent queue to buffer the writes?
Hopefully I won't get spanked too hard by the admins for asking a (potentially opinion-based) best-practice question.
(adding code from article for clarification)
public class ServiceBusAppender : AppenderSkeleton
{
public string ConnectionStringKey { get; set; }
public string MessagingEntity { get; set; }
public string ApplicationName { get; set; }
public string EventType { get; set; }
public bool Synchronous { get; set; }
public string CorrelationIdPropertyName { get; set; }
protected override void Append(log4net.Core.LoggingEvent loggingEvent)
{
var myLogEvent = new AzureLoggingEvent(loggingEvent);
myLogEvent.ApplicationName = ApplicationName;
myLogEvent.EventType = EventType;
myLogEvent.CorrelationId = loggingEvent.LookupProperty(CorrelationIdPropertyName) as string;
if (Synchronous)
AppendInternal(myLogEvent, 0);
else
{
Task.Run(() => AppendInternal(myLogEvent, 0));
}
}
protected void AppendInternal(AzureLoggingEvent myLogEvent, int attemptNo)
{
try
{
//Convert event to JSON
var stream = new MemoryStream();
var json = Newtonsoft.Json.JsonConvert.SerializeObject(myLogEvent);
var writer = new StreamWriter(stream);
writer.Write(json);
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
//Setup service bus message
var message = new BrokeredMessage(stream, true);
message.ContentType = "application/json";
message.Label = myLogEvent.MessageType;
message.Properties.Add(new KeyValuePair<string, object>("ApplicationName", myLogEvent.ApplicationName));
message.Properties.Add(new KeyValuePair<string, object>("UserName", myLogEvent.UserName));
message.Properties.Add(new KeyValuePair<string, object>("MachineName", myLogEvent.MachineName));
message.Properties.Add(new KeyValuePair<string, object>("MessageType", myLogEvent.MessageType));
message.Properties.Add(new KeyValuePair<string, object>("Level", myLogEvent.Level));
message.Properties.Add(new KeyValuePair<string, object>("EventType", myLogEvent.EventType));
//Setup Service Bus Connection
var connection = ConfigurationManager.ConnectionStrings[ConnectionStringKey];
if (connection == null || string.IsNullOrEmpty(connection.ConnectionString))
{
ErrorHandler.Error("Cant publish the error, the connection string does not exist");
return;
}
var factory = MessagingFactoryManager.Instance.GetMessagingFactory(connection.ConnectionString);
var sender = factory.CreateMessageSender(MessagingEntity);
//Publish
sender.Send(message);
}
catch (Exception ex)
{
if (ex.Message.Contains("The operation cannot be performed because the entity has been closed or aborted"))
{
if (attemptNo < 3)
AppendInternal(myLogEvent, attemptNo + 1); // post-increment here would pass the old value and never advance the retry count
else
ErrorHandler.Error("Error occurred while publishing error", ex);
}
else
ErrorHandler.Error("Error occurred while publishing error", ex);
}
}
protected override void Append(log4net.Core.LoggingEvent[] loggingEvents)
{
foreach(var loggingEvent in loggingEvents)
{
Append(loggingEvent);
}
}
}
Thx,
Chris

The cure for premature optimisation is to test and measure, then test and measure again. Write an integration test that logs to a thousand loggers, and see how that goes.
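As a rough sketch of such a measurement harness (an NUnit-style test is assumed; the logger count and message are arbitrary):

[Test]
public void Measure_logging_through_many_loggers()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < 1000; i++)
    {
        // Each call resolves a distinct logger, all routed to the custom appender under test.
        var log = log4net.LogManager.GetLogger("StressLogger." + i);
        log.Info("stress test message " + i);
    }
    sw.Stop();
    Console.WriteLine("1000 log calls took {0} ms", sw.ElapsedMilliseconds);
}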
If that does show a problem, then rather than implementing your own queue, inherit from BufferingAppenderSkeleton instead:
This base class should be used by appenders that need to buffer a number of events before logging them. For example the AdoNetAppender buffers events and then submits the entire contents of the buffer to the underlying database in one go.
Subclasses should override the SendBuffer method to deliver the buffered events.
The BufferingAppenderSkeleton maintains a fixed size cyclic buffer of events. The size of the buffer is set using the BufferSize property.
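For illustration, a minimal sketch of what that could look like here (reusing the Service Bus send logic from the question; SendToServiceBus is a placeholder, not a log4net API):

public class BufferedServiceBusAppender : log4net.Appender.BufferingAppenderSkeleton
{
    // Called by the base class once BufferSize events have accumulated (or on flush).
    protected override void SendBuffer(log4net.Core.LoggingEvent[] events)
    {
        foreach (var loggingEvent in events)
        {
            var myLogEvent = new AzureLoggingEvent(loggingEvent);
            SendToServiceBus(myLogEvent); // i.e. the body of AppendInternal from the question
        }
    }

    private void SendToServiceBus(AzureLoggingEvent myLogEvent)
    {
        // Serialize to JSON, build the BrokeredMessage and send it, exactly as in AppendInternal.
    }
}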
(As an aside, what is up with the log4net documentation, there seem to be more '½ï¿' characters every time I look at it?)

I see that your code involves JSON serialization. If you're looking for log4net JSON, why redo what has been done already? See log4net.ext.json. I'm the developer. The wiki covers the first steps of getting it up and running. It is implemented as a layout, so it can be plugged into any log4net appender that takes a layout.
As part of my project I have also created a load-testing GUI for log4net. It is not released, but it should compile easily from source. You can use it to discover how different configurations scale under your conditions.
Finally, I'd advise giving localhost UDP delivery a shot if performance is a priority. Projects like nxlog or logstash can swallow that easily. Again, why write new code?
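For illustration, a programmatic localhost UDP setup might look something like this (the port and pattern are arbitrary; nxlog or logstash would listen on the other end):

var layout = new log4net.Layout.PatternLayout("%date %-5level %logger - %message%newline");
layout.ActivateOptions();

var udpAppender = new log4net.Appender.UdpAppender
{
    RemoteAddress = System.Net.IPAddress.Parse("127.0.0.1"),
    RemotePort = 8514, // assumed port for the local log collector
    Layout = layout
};
udpAppender.ActivateOptions();

log4net.Config.BasicConfigurator.Configure(udpAppender);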
Let me know if you need some clarification. Kind regards and good luck, Rob

Related

EasyNetQ - How to retry failed messages & persist RetryCount in message body/header?

I am using EasyNetQ and need to retry failed messages on the original queue. The problem is: even though I successfully increment the TriedCount variable (in the body of every msg), when EasyNetQ publishes the message to the default error queue after an exception, the updated TriedCount is not in the msg! Presumably because it just dumps the original message to the error queue without the consumer's changes.
The updated TriedCount works for in-process republishes, but not when republished through EasyNetQ Hosepipe or EasyNetQ Management Client. The text files Hosepipe generates do not have the TriedCount updated.
public interface IMsgHandler<T> where T: class, IMessageType
{
Task InvokeMsgCallbackFunc(T msg);
Func<T, Task> MsgCallbackFunc { get; set; }
bool IsTryValid(T msg, string refSubscriptionId); // Calls callback only
// if Retry is valid
}
public interface IMessageType
{
int MsgTypeId { get; }
Dictionary<string, TryInfo> MsgTryInfo {get; set;}
}
public class TryInfo
{
public int TriedCount { get; set; }
/*Other information regarding msg attempt*/
}
public bool SubscribeAsync<T>(Func<T, Task> eventHandler, string subscriptionId)
{
IMsgHandler<T> currMsgHandler = new MsgHandler<T>(eventHandler, subscriptionId);
// Using the msgHandler allows to add a mediator between EasyNetQ and the actual callback function
// The mediator can transmit the retried msg or choose to ignore it
return _defaultBus.SubscribeAsync<T>(subscriptionId, currMsgHandler.InvokeMsgCallbackFunc).Queue != null;
}
I have also tried republishing myself through the Management API (rough code):
var client = new ManagementClient("http://localhost", "guest", "guest");
var vhost = client.GetVhostAsync("/").Result;
var errQueue = client.GetQueueAsync("EasyNetQ_Default_Error_Queue",
vhost).Result;
var crit = new GetMessagesCriteria(long.MaxValue,
Ackmodes.ack_requeue_true);
var errMsgs = client.GetMessagesFromQueueAsync(errQueue,
crit).Result;
foreach (var errMsg in errMsgs)
{
var pubRes = client.PublishAsync(client.GetExchangeAsync(errMsg.Exchange, vhost).Result,
new PublishInfo(errMsg.RoutingKey, errMsg.Payload)).Result;
}
This works, but it only publishes to the error queue again, not to the original queue. Also, I don't know how to add/update the retry information in the body of the message at this stage.
I have explored this library to add headers to the message, but if the count in the body is not being updated, I don't see how or why the count in the header would be.
Is there any way to persist the TriedCount without resorting to the Advanced bus (in which case I might use the RabbitMQ .Net client itself)?
Just in case it helps someone else, I eventually implemented my own IErrorMessageSerializer (as opposed to implementing the whole IConsumerErrorStrategy, which seemed like overkill). The reason I am adding the retry info to the body (instead of the header) is that EasyNetQ doesn't handle complex types in the header (not out of the box, anyway), so using a dictionary in the body gives more control for different consumers. I register the custom serializer at the time of creating the bus, like so:
_defaultBus = RabbitHutch.CreateBus(currentConnString, serviceRegister => serviceRegister.Register<IErrorMessageSerializer>(serviceProvider => new RetryEnabledErrorMessageSerializer<IMessageType>(givenSubscriptionId)));
And just implemented the Serialize method like so:
public class RetryEnabledErrorMessageSerializer<T> : IErrorMessageSerializer where T : class, IMessageType
{
public string Serialize(byte[] messageBody)
{
string stringifiedMsgBody = Encoding.UTF8.GetString(messageBody);
var objectifiedMsgBody = JObject.Parse(stringifiedMsgBody);
// Add/update RetryInformation into objectifiedMsgBody here
// I have a dictionary that saves <key:consumerId, val: TryInfoObj>
return JsonConvert.SerializeObject(objectifiedMsgBody);
}
}
The actual retrying is done by a simple console app/windows service periodically via the EasyNetQ Management API:
var client = new ManagementClient(AppConfig.BaseAddress, AppConfig.RabbitUsername, AppConfig.RabbitPassword);
var vhost = client.GetVhostAsync("/").Result;
var aliveRes = client.IsAliveAsync(vhost).Result;
var errQueue = client.GetQueueAsync(Constants.EasyNetQErrorQueueName, vhost).Result;
var crit = new GetMessagesCriteria(long.MaxValue, Ackmodes.ack_requeue_false);
var errMsgs = client.GetMessagesFromQueueAsync(errQueue, crit).Result;
foreach (var errMsg in errMsgs)
{
var innerMsg = JsonConvert.DeserializeObject<Error>(errMsg.Payload);
var pubInfo = new PublishInfo(innerMsg.RoutingKey, innerMsg.Message);
pubInfo.Properties.Add("type", innerMsg.BasicProperties.Type);
pubInfo.Properties.Add("correlation_id", innerMsg.BasicProperties.CorrelationId);
pubInfo.Properties.Add("delivery_mode", innerMsg.BasicProperties.DeliveryMode);
var pubRes = client.PublishAsync(client.GetExchangeAsync(innerMsg.Exchange, vhost).Result,
pubInfo).Result;
}
Whether retry is enabled or not is known by my consumer itself, giving it more control so it can choose to handle the retried msg or just ignore it. Once ignored, the msg will obviously not be tried again; that's how EasyNetQ works.
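For completeness, the consumer-side check could look roughly like this (a sketch only, using the IMessageType/TryInfo shapes from the question; MaxTries is an assumed limit):

public class MsgHandler<T> : IMsgHandler<T> where T : class, IMessageType
{
    private const int MaxTries = 3; // assumed retry limit
    private readonly string subscriptionId;

    public MsgHandler(Func<T, Task> callback, string subscriptionId)
    {
        MsgCallbackFunc = callback;
        this.subscriptionId = subscriptionId;
    }

    public Func<T, Task> MsgCallbackFunc { get; set; }

    public bool IsTryValid(T msg, string refSubscriptionId)
    {
        // No recorded attempts for this subscription yet: treat as a first, valid try.
        if (msg.MsgTryInfo == null || !msg.MsgTryInfo.ContainsKey(refSubscriptionId))
            return true;
        return msg.MsgTryInfo[refSubscriptionId].TriedCount < MaxTries;
    }

    public Task InvokeMsgCallbackFunc(T msg)
    {
        // Ignore (acknowledge without processing) once retries are exhausted.
        return IsTryValid(msg, subscriptionId) ? MsgCallbackFunc(msg) : Task.CompletedTask;
    }
}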

Can I generate the compile date in my C# code to determine the expiry for a demo version?

I am creating a demonstration version of a C# program and I wish it to expire after a month.
// DEMO - Check date
DateTime expires = new DateTime(2016, 3, 16);
expires = expires.AddMonths(2); // DateTime is immutable; AddMonths returns a new value that must be assigned
var diff = expires.Subtract(DateTime.Now);
if (diff.Days < 0)
{
MessageBox.Show("Demonstration expired.");
return;
}
I want to use the compile date instead of the hard-coded new DateTime(2016, 3, 16);
Is there a compiler directive to give me the current date? Or am I approaching this the wrong way?
Pre-processor directives are handled at compile time, but C# directives only control compilation (#if, #define, and so on); there is no directive that expands to the current date.
That expiration should be implemented using executable code. The issue is that you can hardcode it and hide it as much as possible, but avid developers can still find it, patch the intermediate language, and generate a new assembly without the expiration. Actually, there are many other ways a user can bypass the whole expiration...
It seems like your best bet is to create some kind of unique key, store it in your app, and check over the wire whether that key is still valid against a licensing service developed by you.
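A very rough sketch of that idea (the endpoint, key format, and response contract are all hypothetical):

public static class LicenseCheck
{
    private const string LicenseKey = "DEMO-1234-ABCD";                          // baked into the build
    private const string LicenseServiceUrl = "https://example.com/api/license/"; // hypothetical service

    public static bool IsLicenseValid()
    {
        try
        {
            using (var client = new System.Net.WebClient())
            {
                // Assume the service answers "valid" or "expired" for a given key.
                string answer = client.DownloadString(LicenseServiceUrl + LicenseKey);
                return answer.Trim() == "valid";
            }
        }
        catch (System.Net.WebException)
        {
            // Decide whether to fail open or closed when the service is unreachable.
            return false;
        }
    }
}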
An alternative to hard-coding a date, which also offers some flexibility and extensibility, is to host a license file on a web server. For my sample, I used GitHub. Create a well-known file for the application (possibly one for demo, a new one for beta1, etc.). At startup, and possibly periodically, read the file and parse it to determine applicability, timeouts, disabled/enabled features (like activating a custom warning message), etc.
Now you can ship your demo, put the expire date in the file, change it if needed, etc. This is not the most elegant nor secure solution, but for many use cases for a demo/beta, this might be enough to serve its intended purpose.
Below is a working mock-up of how this might look (omitted error checking and proper cleanup for brevity):
public class LicenseInfo
{
public string Info1 { get; private set; }
public bool IsValid
{
get
{
// todo, add logic here
return true;
}
}
public bool ParseLicense(string data)
{
bool ret = false;
if (data != null)
{
// todo, parse data and set status/attributes/etc
Info1 = data;
ret = true;
}
return ret;
}
}
// could make a static class...
public class License
{
public LicenseInfo GetLicenseInfo()
{
var license = new LicenseInfo();
// todo: create whatever schema you want.
// filename hard-coded per app/version/etc.
// file could contain text/json/etc.
// easy to manage, update, etc.
// extensible.
var uri = "https://raw.githubusercontent.com/korygill/Demo-License/master/StackOverflow-Demo-License.txt";
var request = (HttpWebRequest)HttpWebRequest.Create(uri);
var response = request.GetResponse();
var data = new StreamReader(response.GetResponseStream()).ReadToEnd();
license.ParseLicense(data);
return license;
}
}
class Program
{
static void Main(string[] args)
{
// check if our license is valid
var license = new License();
var licenseInfo = license.GetLicenseInfo();
if (!licenseInfo.IsValid)
{
Console.WriteLine("Sorry...license expired.");
Environment.Exit(1);
}
Console.WriteLine("You have a valid license.");
Console.WriteLine($"{licenseInfo.Info1}");
}
}

Filter Change Notifications in Active Directory: Create, Delete, Undelete

I am currently using the Change Notifications in Active Directory Domain Services in .NET, as described in this blog. This returns all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The other solution is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see whether the event represents the creation of an object? In my tests the value of uSNChanged is always uSNCreated + 1 for user objects, and both are equal for OUs, but can this be relied on in high-frequency ADs? It is also possible to compare the created and changed timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
static void Main(string[] args)
{
using (LdapConnection connect = CreateConnection("localhost"))
{
using (ChangeNotifier notifier = new ChangeNotifier(connect))
{
//register some objects for notifications (limit 5)
notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);
notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);
Console.WriteLine("Waiting for changes...");
Console.WriteLine();
Console.ReadLine();
}
}
}
static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
Console.WriteLine(e.Result.DistinguishedName);
foreach (string attrib in e.Result.Attributes.AttributeNames)
{
foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
{
Console.WriteLine("\t{0}: {1}", attrib, item);
}
}
Console.WriteLine();
Console.WriteLine("====================");
Console.WriteLine();
}
LdapConnection _connection;
HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();
public ChangeNotifier(LdapConnection connection)
{
_connection = connection;
_connection.AutoBind = true;
}
public void Register(string dn, SearchScope scope)
{
SearchRequest request = new SearchRequest(
dn, //root the search here
"(objectClass=*)", //very inclusive
scope, //any scope works
null //we are interested in all attributes
);
//register our search
request.Controls.Add(new DirectoryNotificationControl());
//we will send this async and register our callback
//note how we would like to have partial results
IAsyncResult result = _connection.BeginSendRequest(
request,
TimeSpan.FromDays(1), //set timeout to a day...
PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
Notify,
request
);
//store the hash for disposal later
_results.Add(result);
}
private void Notify(IAsyncResult result)
{
//since our search is long running, we don't want to use EndSendRequest
PartialResultsCollection prc = _connection.GetPartialResults(result);
foreach (SearchResultEntry entry in prc)
{
OnObjectChanged(new ObjectChangedEventArgs(entry));
}
}
private void OnObjectChanged(ObjectChangedEventArgs args)
{
if (ObjectChanged != null)
{
ObjectChanged(this, args);
}
}
public event EventHandler<ObjectChangedEventArgs> ObjectChanged;
#region IDisposable Members
public void Dispose()
{
foreach (var result in _results)
{
//end each async search
_connection.Abort(result);
}
}
#endregion
}
public class ObjectChangedEventArgs : EventArgs
{
public ObjectChangedEventArgs(SearchResultEntry entry)
{
Result = entry;
}
public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We found that change notification generated enough events when monitoring a large directory that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall performance/latency was not the primary reason we switched.
Yes, the USN relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get uSNCreated == uSNChanged for a new user. We didn't test that extensively, though, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case it could block email to a new user for an indeterminate time if an object create were missed. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. For DirSync, it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it's possible to get roughly 30-second average latency by issuing the DirSync query once a minute.
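For reference, a bare-bones DirSync poll with System.DirectoryServices.Protocols might look roughly like this (the filter, attribute list, and polling cadence are illustrative):

using System;
using System.DirectoryServices.Protocols;

class DirSyncExample
{
    static byte[] cookie = null; // persist this between runs to resume incrementally

    static void Poll(LdapConnection connection, string namingContext)
    {
        var request = new SearchRequest(
            namingContext,                      // DirSync must target the root of a naming context
            "(objectClass=*)",
            SearchScope.Subtree,
            "objectClass", "isDeleted", "uSNCreated", "uSNChanged");
        request.Controls.Add(new DirSyncRequestControl(cookie, DirectorySynchronizationOptions.None));

        var response = (SearchResponse)connection.SendRequest(request);
        foreach (SearchResultEntry entry in response.Entries)
        {
            Console.WriteLine(entry.DistinguishedName); // objects created, modified, or deleted since the last cookie
        }
        foreach (DirectoryControl control in response.Controls)
        {
            var dirSync = control as DirSyncResponseControl;
            if (dirSync != null)
                cookie = dirSync.Cookie; // save for the next incremental query
        }
    }
}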

Unit Testing Amazon S3

I have a fairly simple class that I'm trying to unit test. I'm very new to unit testing in general, and I'm not sure what I should be testing here.
The only test case that I can figure out how to code is a null stream argument. Besides that, I'm not sure how to test the results of a PutObjectRequest, or what else to test. If I should be using mocks here, how?
public class AmazonS3Service : IAmazonS3Service
{
private readonly Uri baseImageUrl;
private readonly Uri s3BaseUrl;
private readonly string imageBucket;
public AmazonS3Service()
{
imageBucket = ConfigurationManager.AppSettings["S3.Buckets.Images"];
s3BaseUrl = new Uri(ConfigurationManager.AppSettings["S3.BaseAddress"]);
baseImageUrl = new Uri(s3BaseUrl, imageBucket);
}
public Image UploadImage(Stream stream)
{
if (stream == null) throw new ArgumentNullException("stream");
var key = string.Format("{0}.jpg", Guid.NewGuid());
var request = new PutObjectRequest
{
CannedACL = S3CannedACL.PublicRead,
Timeout = -1,
ReadWriteTimeout = 600000, // 10 minutes * 60 seconds * 1000 milliseconds
InputStream = stream,
BucketName = imageBucket,
Key = key
};
using (var client = new AmazonS3Client())
{
using (client.PutObject(request))
{
}
}
return new Image
{
UriString = Path.Combine(baseImageUrl.AbsoluteUri, key)
};
}
}
You are having trouble unit testing UploadImage because it is coupled to many other external services and state. Static calls (including new) tightly couple the code to specific implementations. Your goal should be to refactor those so that you can more easily unit test. Also, keep in mind that after unit testing this class, you will still need the big tests that actually use the Amazon S3 service and make sure the upload happened correctly without error, or fails as expected. By unit testing thoroughly, you hopefully reduce the number of these big and possibly expensive tests.
Removing the coupling to the AmazonS3Client implementation is probably going to give you the biggest bang for your testing buck. We need to refactor by pulling out the new AmazonS3Client call. If there is not already an interface for this class, then I would create one to wrap it. Then you need to decide how to inject the implementation. There are a number of options, including as a method parameter, constructor parameter, property, or a factory.
Let's use the factory approach because it is more interesting than the others, which are straightforward. I've left out some of the details for clarity and readability.
interface IClientFactory
{
IAmazonS3Client CreateAmazonClient();
}
interface IAmazonS3Client
{
PutObjectResponse PutObject(PutObjectRequest request); // I'm guessing here for the signature.
}
public class AmazonS3Service : IAmazonS3Service
{
// snip
private IClientFactory factory;
public AmazonS3Service(IClientFactory factory)
{
// snip
this.factory = factory;
}
public Image UploadImage(Stream stream)
{
if (stream == null) throw new ArgumentNullException("stream");
var key = string.Format("{0}.jpg", Guid.NewGuid());
var request = new PutObjectRequest
{
CannedACL = S3CannedACL.PublicRead,
Timeout = -1,
ReadWriteTimeout = 600000, // 10 minutes * 60 seconds * 1000 milliseconds
InputStream = stream,
BucketName = imageBucket,
Key = key
};
// call the factory to provide us with a client.
using (var client = factory.CreateAmazonClient())
{
using (client.PutObject(request))
{
}
}
return new Image
{
UriString = Path.Combine(baseImageUrl.AbsoluteUri, key)
};
}
}
A unit test might look like this in MSTest:
[TestMethod]
public void InputStreamSetOnPutObjectRequest()
{
var factory = new TestFactory();
var service = new AmazonS3Service(factory);
using (var stream = new MemoryStream())
{
service.UploadImage(stream);
Assert.AreEqual(stream, factory.TestClient.Request.InputStream);
}
}
class TestFactory : IClientFactory
{
public TestClient TestClient = new TestClient();
public IAmazonS3Client CreateAmazonClient()
{
return TestClient;
}
}
class TestClient : IAmazonS3Client
{
public PutObjectRequest Request;
public PutObjectResponse Response;
public PutObjectResponse PutObject(PutObjectRequest request)
{
Request = request;
return Response;
}
}
Now, we have one test verifying that the correct input stream is sent over in the request object. Obviously, a mocking framework would help cut down on a lot of boilerplate code for testing this behavior. You could expand this by starting to write tests for the other properties on the request object. Error cases are where unit testing can really shine because often they can be difficult or impossible to induce in production implementation classes.
To fully unit test other scenarios of this method/class, there are other external dependencies here that would need to be passed in or mocked. The ConfigurationManager directly accesses the config file. Those settings should be passed in. Guid.NewGuid is basically a source of uncontrolled randomness which is also bad for unit testing. You could define an IKeySource to be a provider of key values to various services and mock it or just have the key passed from the outside.
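For example, the key seam might be as small as this (names are illustrative):

public interface IKeySource
{
    string NextKey();
}

// Production implementation: the same behavior as the original inline code.
public class GuidJpgKeySource : IKeySource
{
    public string NextKey()
    {
        return string.Format("{0}.jpg", Guid.NewGuid());
    }
}

// Test stub: a fixed key makes the expected URI deterministic in assertions.
class FixedKeySource : IKeySource
{
    private readonly string key;
    public FixedKeySource(string key) { this.key = key; }
    public string NextKey() { return key; }
}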
Finally, you should be weighing all the time taken for testing/refactoring against how much value it is giving you. More layers can always be added to isolate more and more components, but there are diminishing returns for each added layer.
Things I would look at:
Mock your configuration manager to return invalid data for the bucket and the URL (null, invalid URLs, invalid buckets).
Does S3 support HTTPS? If so, mock it; if not, mock it and verify you get a valid error.
Pass different kinds of streams in (memory, file, other types).
Pass in streams in different states (empty streams, streams that have been read to the end, ...).
I would allow the timeouts to be set as parameters, so you can test with really low timeouts and see what errors you get back.
I would also test with duplicate keys, just to verify the error message. Even though you are using GUIDs, you are storing to an Amazon server where someone else could use the S3 API to store documents and could theoretically create a file that appears to be a GUID, creating a conflict down the road (unlikely, but possible).

Is this a good/preferable pattern to Azure Queue construction for a T4 template?

I'm building a T4 template that will help people construct Azure queues in a consistent and simple manner. I'd like to make this self-documenting, and somewhat consistent.
First I put the queue name at the top of the file; queue names have to be in lowercase, so I added ToLower().
The public constructor uses the built-in StorageClient APIs to access the connection strings. I've seen many different approaches to this, and would like to get something that works in almost all situations. (Ideas? Do share.)
I dislike the unneeded HTTP requests to check whether the queues have been created, so I made it a static bool. I didn't implement a lock(monitorObject) since I don't think one is needed.
Instead of using a string and parsing it with commas (like most MSDN documentation) I'm serializing the object when passing it into the queue.
For further optimization I'm using a JSON serializer extension method to get the most out of the 8k limit (see the sketch after this list). Not sure if a different encoding would help optimize this any more.
Added retry logic to handle certain scenarios that occur with the queue (see html link)
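The extension methods themselves aren't shown in the question; a minimal sketch of what they're assumed to look like (using Newtonsoft.Json, matching the FromJSONString call in the code below):

public static class JsonExtensions
{
    public static string ToJSONString<T>(this T obj)
    {
        return Newtonsoft.Json.JsonConvert.SerializeObject(obj);
    }

    public static T FromJSONString<T>(this string json)
    {
        return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
    }
}

// Enqueueing side, for symmetry with the dequeue code in CheckQueue:
// queueAgentQueueActions.AddMessage(new CloudQueueMessage(entry.ToJSONString()));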
Q: Is "DataContext" appropriate name for this class?
Q: Is it a poor practice to name the Queue Action Name in the manner I have done?
What additional changes do you think I should make?
public class AgentQueueDataContext
{
// Queue names must always be in lowercase
// Is named like a const, but isn't one because .ToLower won't compile...
static string AGENT_QUEUE_ACTION_NAME = "AgentQueueActions".ToLower();
static bool QueuesWereCreated { get; set; }
DataModel.SecretDataSource secDataSource = null;
CloudStorageAccount cloudStorageAccount = null;
CloudQueueClient cloudQueueClient = null;
CloudQueue queueAgentQueueActions = null;
static AgentQueueDataContext()
{
QueuesWereCreated = false;
}
public AgentQueueDataContext() : this(false)
{
}
public AgentQueueDataContext(bool CreateQueues)
{
// This pattern of setting up queues is from:
// http://convective.wordpress.com/2009/11/15/queues-azure-storage-client-v1-0/
//
this.cloudStorageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
this.cloudQueueClient = cloudStorageAccount.CreateCloudQueueClient();
this.secDataSource = new DataModel.SecretDataSource();
queueAgentQueueActions = cloudQueueClient.GetQueueReference(AGENT_QUEUE_ACTION_NAME);
if (QueuesWereCreated == false || CreateQueues)
{
queueAgentQueueActions.CreateIfNotExist();
QueuesWereCreated = true;
}
}
// This is the method that will be spawned using ThreadStart
public void CheckQueue()
{
while (true)
{
try
{
CloudQueueMessage msg = queueAgentQueueActions.GetMessage();
bool DoRetryDelayLogic = false;
if (msg != null)
{
// Deserialize using JSON (allows more data to be stored)
AgentQueueEntry actionableMessage = msg.AsString.FromJSONString<AgentQueueEntry>();
switch (actionableMessage.ActionType)
{
case AgentQueueActionEnum.EnrollNew:
{
// Add to
break;
}
case AgentQueueActionEnum.LinkToSite:
{
// Link within Agent itself
// Link within Site
break;
}
case AgentQueueActionEnum.DisableKey:
{
// Disable key in site
// Disable key in AgentTable (update modification time)
break;
}
default:
{
break;
}
}
//
// Only delete the message if the requested agent has been missing for
// at least 10 minutes
//
if (DoRetryDelayLogic)
{
if (msg.InsertionTime != null)
if (msg.InsertionTime > DateTime.UtcNow - TimeSpan.FromMinutes(10)) // inserted less than 10 minutes ago: leave it for a later pass
continue;
// ToDo: Log error: AgentID xxx has not been found in table for xxx minutes.
// It is likely the result of the registration host crashing.
// Data is still consistent. Deleting queued message.
}
//
// If execution made it to this point, then we are either fully processed, or
// there is sufficient reason to discard the message.
//
try
{
queueAgentQueueActions.DeleteMessage(msg);
}
catch (StorageClientException ex)
{
// As of July 2010, this is the best way to detect this class of exception
// Description: http://blog.smarx.com/posts/deleting-windows-azure-queue-messages-handling-exceptions
if (ex.ExtendedErrorInformation.ErrorCode == "MessageNotFound")
{
// pop receipt must be invalid
// ignore or log (so we can tune the visibility timeout)
}
else
{
// not the error we were expecting
throw;
}
}
}
else
{
// allow control to fall to the bottom, where the sleep timer is...
}
}
catch (Exception e)
{
// Justification: Thread must not fail.
//Todo: Log this exception
// allow control to fall to the bottom, where the sleep timer is...
// Rationale: not doing so may cause queue thrashing on a specific corrupt entry
}
// todo: Thread.Sleep() is bad
// Replace with something better...
Thread.Sleep(9000);
}
}
}
Q: Is "DataContext" appropriate name for this class?
In .NET we have a lot of DataContext classes, so in the sense that names should communicate what a class does, I think XyzQueueDataContext does that reasonably well - although you can't query from it.
If you want to stay more aligned with accepted pattern languages, Patterns of Enterprise Application Architecture calls any class that encapsulates access to an external system a Gateway, while more specifically you may want to use the term Channel from the language of Enterprise Integration Patterns - that's what I would do.
Q: Is it a poor practice to name the Queue Action Name in the manner I have done?
Well, it certainly tightly couples the queue name to the class. This means that if you later decide that you want to decouple those, you can't.
As a general comment I think this class might benefit from trying to do less. Using the queue is not the same thing as managing it, so instead of having all of that queue management code there, I'd suggest injecting a CloudQueue into the instance. Here's how I implement my AzureChannel constructor:
private readonly CloudQueue queue;
public AzureChannel(CloudQueue queue)
{
if (queue == null)
{
throw new ArgumentNullException("queue");
}
this.queue = queue;
}
This better fits the Single Responsibility Principle and you can now implement queue management in its own (reusable) class.
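A reusable queue-management piece could then be as small as this (a sketch using the same StorageClient v1.0 calls as the question; the wiring happens at application startup):

public static class QueueFactory
{
    public static CloudQueue GetOrCreateQueue(string queueName)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var client = account.CreateCloudQueueClient();
        var queue = client.GetQueueReference(queueName.ToLower()); // queue names must be lowercase
        queue.CreateIfNotExist();                                  // one HTTP round-trip, done once at startup
        return queue;
    }
}

// Composition root:
// var channel = new AzureChannel(QueueFactory.GetOrCreateQueue("agentqueueactions"));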
