I would like to find a good design pattern for implementing this example business workflow. Instead of using one giant monolithic procedural-like method, I was thinking of using fluent method chaining -- basically, a simple workflow pipeline without using one of those workflow or BPM frameworks. Any suggestions on best practices, or perhaps a known design pattern?
My Example
get configuration / user preferences
validate config/preferences
look up / standardize additional config/preferences
get report 1 (with above input)
get report 2, etc.
email reports
The inputs/user preferences cause a lot of if/else logic, so I don't want my method to contain all the if/else logic to check whether each step succeeded and handle it. (i.e. I do NOT want this:)
myOutput1 = CallMethod1(param1, param2, out errorMsg)
if (error)
{ // do something, then break }
myOutput2 = CallMethod2(param1, param2, out errorMsg)
if (error)
{ // do something, then break }
...
myOutput9 = CallMethod9(param1, param2, out errorMsg)
if (error)
{ // do something, then break }
Sample Idea Pipeline code
Perhaps something like this? Would it work? How can I improve upon it?
public class Reporter
{
private ReportSettings Settings {get; set;}
private ReportResponse Response {get; set;}
public ReportResponse GenerateAndSendReports(string groupName)
{
ReportResponse response = this.GetInputConfiguration()
.ValidateConfiguration()
.StandardizeConfiguration(groupName)
.PopulateReport1()
.PopulateReport2()
.PopulateReport99()
.EmailReports()
.Output();
return response;
}
public Reporter GetInputConfiguration()
{
this.Response = new ReportResponse();
this.Settings = new ReportSettings();
this.Settings.IsReport1Enabled = ConfigurationManager.GetSetting("EnableReport1");
this.Settings.Blah1 = ConfigurationManager.GetSetting("Blah1");
this.Settings.Blah2 = ConfigurationManager.GetSetting("Blah2");
return this;
}
public Reporter StandardizeConfiguration(string groupName)
{
this.Settings.Emails = myDataService.GetEmails(groupName);
return this;
}
public Reporter PopulateReport1()
{
if (!this.Response.HasError && this.Settings.IsReport1Enabled)
{
try
{
this.Response.Report1Content = myReportService.GetReport1(this.Settings.Blah1, this.Settings.Blah2);
}
catch (Exception ex)
{
this.Response.HasError = true;
this.Response.Message = ex.ToString();
}
}
return this;
}
}
You are mentioning two distinct concepts: the fluent mechanism, and the pipeline (or chain of responsibility) pattern.
Pipeline Pattern
You must define an IPipeline interface that contains a DoThings() method.
Each implementation of IPipeline must also contain an IPipeline GetNext() method.
Fluent
All actions must return a reference to the object modified by the action: IFluent.
If you wish to better control which options are available and when in your workflow, you could have the fluent actions return distinct interfaces: for example, IValidatedData could expose IStandardizedData Standardize(), and IStandardizedData could expose IStandardizedData PopulateReport(var param) and IStandardizedData PopulateEmail(var param). This is what LINQ does with enumerables, lists, etc.
However, in your example it looks like you are mostly looking for a fluent mechanism. The pipeline pattern helps with data flows (HTTP request handlers, for example). In your case you are just applying properties to a single Reporter object, so the pipeline pattern does not really apply.
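To illustrate the staged-interface idea, here is a minimal sketch; the interface names follow the description above, while the exact method shapes (parameters, the ReportResponse return) are assumptions for illustration:

// Each step returns an interface exposing only the operations that are
// valid next, so the compiler enforces the workflow order.
public interface IRawData
{
    IValidatedData Validate();
}

public interface IValidatedData
{
    IStandardizedData Standardize();
}

public interface IStandardizedData
{
    IStandardizedData PopulateReport(int reportNumber);
    IStandardizedData PopulateEmail(string groupName);
    ReportResponse Output();
}

With this shape, reporter.Validate().Standardize().PopulateReport(1).Output() compiles, while calling PopulateReport() before Validate() does not.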
For those who ended here because they are looking for a two-way (push and pull) fluent pipeline, you want your fluent actions to build the pipeline, by returning an IPipelineStep. The behaviour of the pipeline is defined by the implementation of each IPipelineStep.
You can achieve this as follows:
PipelineStep implements IPipelineStep
PipelineStep contains a private IPipelineStep NextStep { get; set; }
IPipelineBuilder contains the fluent actions available to build the pipeline.
Your fluent actions return a concrete type that implements both IPipelineStep and IPipelineBuilder.
Before returning, the fluent action updates this.NextStep.
IPipelineStep contains a var Push(var input); and var Pull(var input);
Push does things and then calls this.NextStep.Push
Pull calls this.NextStep.Pull and then does things and returns
You also need to consider how you want to use the pipeline once built: from top to bottom or the other way around.
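For concreteness, a minimal sketch of such a step; the member names (IPipelineStep, NextStep, Push, Pull) follow the outline above, object stands in for the var pseudo-types, and the rest is my own filling-in:

public interface IPipelineStep
{
    object Push(object input); // top-down: do work, then forward
    object Pull(object input); // bottom-up: forward first, then do work
}

public abstract class PipelineStep : IPipelineStep
{
    protected IPipelineStep NextStep { get; set; }

    public object Push(object input)
    {
        object output = Process(input);                       // do things
        return NextStep != null ? NextStep.Push(output) : output;
    }

    public object Pull(object input)
    {
        object upstream = NextStep != null ? NextStep.Pull(input) : input;
        return Process(upstream);                             // then do things on the way back
    }

    protected abstract object Process(object input);
}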
I know this is an old question, but this is a pretty recent video explaining how to build a nice fluent API. One thing it mentions that I think is great is the idea of using interfaces to enforce the correct order of API calls.
https://www.youtube.com/watch?v=1JAdZul-aRQ
Related
I'm using Azure DocumentDb/CosmosDB and I've got a really kludgy pattern I need to get out of.
I have a WebAPI controller that returns my objects based on an ID. I also have a repository that will query the DocumentDB depending on the type. The problem is, I've got about 30 different types (and growing), and I have to do something like this:
public async Task<HttpResponseMessage> GetWidget(String id, String widgetType)
{
if (widgetType.Equals("WidgetA"))
{
DocumentDbRepository<WidgetA> repo = new DocumentDbRepository<WidgetA>();
var widget = await repo.GetItemAsync(id);
return (ControllerContext.Request.CreateResponse(HttpStatusCode.OK, widget));
}
else if (widgetType.Equals("WidgetB"))
{
DocumentDbRepository<WidgetB> repo = new DocumentDbRepository<WidgetB>();
var widget = await repo.GetItemAsync(id);
return (ControllerContext.Request.CreateResponse(HttpStatusCode.OK, widget));
}
...
}
I know, this is terrible. I'm thinking of using reflection or maybe even some T4 templates to generate all this, but surely there must be a cleaner way?
Thanks in advance.
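For what it's worth, here is a rough sketch of the reflection route mentioned in the question; it assumes the widget types live in the controller's own assembly under a placeholder namespace (MyApp.Models) and that DocumentDbRepository<T> has a parameterless constructor:

public async Task<HttpResponseMessage> GetWidget(String id, String widgetType)
{
    // e.g. "WidgetA" -> MyApp.Models.WidgetA (the namespace is a placeholder)
    Type itemType = Type.GetType("MyApp.Models." + widgetType);
    if (itemType == null)
        return ControllerContext.Request.CreateResponse(HttpStatusCode.BadRequest);

    // Build DocumentDbRepository<itemType> at runtime; dynamic handles the await
    Type repoType = typeof(DocumentDbRepository<>).MakeGenericType(itemType);
    dynamic repo = Activator.CreateInstance(repoType);
    object widget = await repo.GetItemAsync(id);
    return ControllerContext.Request.CreateResponse(HttpStatusCode.OK, widget);
}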
We're using DI and Unity to work with different dependencies (generally database and repository classes, DTO-to-entity mappers, etc.).
Right now we're trying to create smaller functions that perform tasks independently of each other, in order to increase testability, and also to avoid methods with lots of different responsibilities and the coupling that comes with them.
One question that I have is, how should DI be used when we have methods that rely on other inner methods, when the nesting is not trivial. For example, consider the following example (just a concept, not real working code):
public ProcessedOrder ProcessOrders(Order inputOrder)
{
foreach (var line in inputOrder.OrderLines)
{
var someData = LineProcessor(line);
}
}
public SomeData LineProcessor(OrderLine line)
{
/* do some stuff*/
var OtherData = ThingProcessor(null,line.SomeStuff);
var ret = new SomeData();
// assign values, more stuff
return ret;
}
public OtherData ThingProcessor(IDep1 someDependency, SomeStuff stuff)
{
someDependency = someDependency ?? ServiceLocator.Resolve<IDep1>();
var ret = someDependency.DoThings(stuff);
return ret;
}
Ok, so far the example shows that we have 3 functions, that could theoretically be called on their own. there's some injected dependency in the ThingProcessor, and if it's null then it tries to resolve it.
However, this is a somewhat simple example, and I already see something I don't like: I'm calling ThingProcessor with a null value in the first param. I could modify the signature of LineProcessor to have the dependency injected, so that it can pass it along to the function that needs it. But LineProcessor doesn't really need it; it's not its own dependency, it belongs to the function it calls.
So I don't know which approach is more correct: the one I'm showing, or passing the correlated dependencies across layers. If I do the latter, the outermost function will be a mess, because it'll have a long list of dependencies to feed everyone below it.
However, I don't like the "null approach" very much either, so I'm pretty sure something's wrong somewhere, and there's probably a better way to design this.
What's the best approach? Remember, all functions must be usable independently (called on their own); for example, at some point I may call just ThingProcessor, and at another only LineProcessor.
UPDATE:
public CommonPurposeFunctions(IDep1 dep1, IDep2 dep2 ....)
{
this.Dep1 = dep1;
this.Dep2 = dep2;
[...]
}
public ProcessedOrder ProcessOrders(Order inputOrder)
{
foreach (var line in inputOrder.OrderLines)
{
var someData = LineProcessor(line);
}
}
public SomeData LineProcessor(OrderLine line)
{
/* do some stuff*/
var OtherData = ThingProcessor(line.SomeStuff);
var ret = new SomeData();
var morethings = this.Dep2.DoMoreThings();
// assign values, more stuff
return ret;
}
public OtherData ThingProcessor(SomeStuff stuff)
{
var ret = this.Dep1.DoThings(stuff);
return ret;
}
The approach we use is constructor injection, then we store the dependency in a private member field. The container wires up the dependencies; so the number of classes and constructor parameters doesn't really matter.
This works for services. If the dependencies across calls have meaningful state, you will have to pass them in to each call. But, in that case, I'd question if the methods really need to be public methods in their own classes.
You want to end up with a design that eliminates the service locator and truly injects the dependencies.
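For concreteness, a minimal sketch of that wiring with Unity; the concrete names (Dep1, Dep2, OrderProcessor) are placeholders, and in classic Unity versions the namespace is Microsoft.Practices.Unity:

// Registration happens once, at the composition root
var container = new UnityContainer();
container.RegisterType<IDep1, Dep1>();
container.RegisterType<IDep2, Dep2>();

// The container builds the whole object graph; constructors receive
// their dependencies automatically, however deep the nesting goes.
var processor = container.Resolve<OrderProcessor>();
var processed = processor.ProcessOrders(inputOrder);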
Does the null object pattern help?
http://en.wikipedia.org/wiki/Null_Object_pattern
Currently I have a custom-built static logging class in C# that can be called with the following code:
EventLogger.Log(EventLogger.EventType.Application, string.Format("AddData request from {0}", ipAddress));
When this is called it simply writes to a defined log file specified in a configuration file.
However, since I have to log many, many events, my code is starting to become hard to read because of all the logging messages.
Is there an established way to more or less separate logging code from objects and methods in a C# class so code doesn't become unruly?
Thank you all in advance for your help as this is something I have been struggling with lately.
I like the AOP features that PostSharp offers. In my opinion, logging is an aspect of any kind of software; it isn't the main value an application should provide.
So in my case, PostSharp has always worked fine. Spring.NET also has an AOP module that could be used to achieve this.
The most commonly used technique I have seen employs AOP in one form or another.
PostSharp is one product that does IL weaving as a form of AOP, though not the only way to do AOP in .NET.
A solution to this is to use Aspect-oriented programming in which you can separate these concerns. This is a pretty complex/invasive change though, so I'm not sure if it's feasible in your situation.
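As an illustration of what the method-boundary style of AOP mentioned above looks like, here is a sketch of a PostSharp aspect; the exact base class and namespaces depend on your PostSharp version, so treat it as an outline rather than a drop-in:

using System;
using PostSharp.Aspects;

[Serializable]
public class TraceAspect : OnMethodBoundaryAspect
{
    // Runs before the decorated method body
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering {0}", args.Method.Name);
    }

    // Runs after the decorated method body (normal exit)
    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Exiting {0}", args.Method.Name);
    }
}

// Usage: decorate any method and the weaver injects the calls at build time
// [TraceAspect]
// public void DoWork() { ... }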
I used to have a custom-built logger but recently changed to TracerX. This provides a simple way to instrument the code with different levels of severity. Loggers can be created with names closely related to the class, etc., that you are working with.
It has a separate Viewer with a lot of filtering capabilities including logger, severity and so on.
http://tracerx.codeplex.com/
There is an article on it here: http://www.codeproject.com/KB/dotnet/TracerX.aspx
If your primary goal is to log function entry/exit points and occasional information in between, I've had good results with a disposable logging object, where the constructor traces the function entry and Dispose() traces the exit. This allows calling code to simply wrap each method's code inside a single using statement. Methods are also provided for arbitrary logs in between. Here is a complete C# ETW event tracing class along with a function entry/exit wrapper:
using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;
using System.Reflection;
using System.Runtime.CompilerServices;
namespace MyExample
{
// This class traces function entry/exit
// Constructor is used to automatically log function entry.
// Dispose is used to automatically log function exit.
// use "using(FnTraceWrap x = new FnTraceWrap()){ function code }" pattern for function entry/exit tracing
public class FnTraceWrap : IDisposable
{
string methodName;
string className;
private bool _disposed = false;
public FnTraceWrap()
{
StackFrame frame;
MethodBase method;
frame = new StackFrame(1);
method = frame.GetMethod();
this.methodName = method.Name;
this.className = method.DeclaringType.Name;
MyEventSourceClass.Log.TraceEnter(this.className, this.methodName);
}
public void TraceMessage(string format, params object[] args)
{
string message = String.Format(format, args);
MyEventSourceClass.Log.TraceMessage(message);
}
public void Dispose()
{
if (!this._disposed)
{
this._disposed = true;
MyEventSourceClass.Log.TraceExit(this.className, this.methodName);
}
}
}
[EventSource(Name = "MyEventSource")]
sealed class MyEventSourceClass : EventSource
{
// Global singleton instance
public static MyEventSourceClass Log = new MyEventSourceClass();
private MyEventSourceClass()
{
}
[Event(1, Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
public void TraceMessage(string message)
{
WriteEvent(1, message);
}
[Event(2, Message = "{0}({1}) - {2}: {3}", Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
public void TraceCodeLine([CallerFilePath] string filePath = "",
[CallerLineNumber] int line = 0,
[CallerMemberName] string memberName = "", string message = "")
{
WriteEvent(2, filePath, line, memberName, message);
}
// Function-level entry and exit tracing
[Event(3, Message = "Entering {0}.{1}", Opcode = EventOpcode.Start, Level = EventLevel.Informational)]
public void TraceEnter(string className, string methodName)
{
WriteEvent(3, className, methodName);
}
[Event(4, Message = "Exiting {0}.{1}", Opcode = EventOpcode.Stop, Level = EventLevel.Informational)]
public void TraceExit(string className, string methodName)
{
WriteEvent(4, className, methodName);
}
}
}
Code that uses it will look something like this:
public void DoWork(string foo)
{
using (FnTraceWrap fnTrace = new FnTraceWrap())
{
fnTrace.TraceMessage("Doing work on {0}.", foo);
/*
code ...
*/
}
}
To make the code readable, only log what you really need to (info/warning/error). Log debug messages during development, but remove most when you are finished. For trace logging, use
AOP to log simple things like method entry/exit (if you feel you need that kind of granularity).
Example:
public int SomeMethod(int arg)
{
Log.Trace("SomeClass.SomeMethod({0}), entering",arg); // A
if (arg < 0)
{
arg = -arg;
Log.Warn("Negative arg {0} was corrected", arg); // B
}
Log.Trace("SomeClass.SomeMethod({0}), returning.",arg); // C
return 2*arg;
}
In this example, the only necessary log statement is B. The log statements A and C are boilerplate, logging that you can leave to PostSharp to insert for you instead.
Also: in your example you can see that there is some form of "Action X invoked by Y", which suggests that a lot of your code could in fact be moved up to a higher level (e.g. Command/Filter).
Your proliferation of logging statements could be telling you something: that some form of design pattern could be used, which could also centralize a lot of the logging.
void DoSomething(Command command, User user)
{
Log.Info("Command {0} invoked by {1}", command, user);
command.Process(user);
}
I think a good option is to implement something similar to filters in ASP.NET MVC. This is implemented with the help of attributes and reflection. You mark every method you want to log in a certain way, and enjoy. I suppose there might be a better way to do it, maybe with the help of the Observer pattern or something, but as long as I thought about it I couldn't come up with anything better.
Basically such problems are called cross-cutting concerns and can be tackled with the help of AOP.
I also think some interesting inheritance schemes could be applied with log entities at the base, but I would go for filters.
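A rough sketch of the filter idea, with names of my own choosing (LogAttribute and LoggingInterceptor are hypothetical, not an existing framework):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class LogAttribute : Attribute { }

public static class LoggingInterceptor
{
    // Invokes a method by name and logs entry/exit only when it is marked [Log]
    public static object Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        bool log = method.GetCustomAttributes(typeof(LogAttribute), true).Length > 0;

        if (log) Console.WriteLine("Entering {0}.{1}", target.GetType().Name, methodName);
        object result = method.Invoke(target, args);
        if (log) Console.WriteLine("Exiting {0}.{1}", target.GetType().Name, methodName);

        return result;
    }
}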
Maybe this is dreaming, but is it possible to create an attribute that caches the output of a function (say, in HttpRuntime.Cache) and returns the value from the cache instead of actually executing the function when the parameters to the function are the same?
When I say function, I'm talking about any function, whether it fetches data from a DB, whether it adds two integers, or whether it spits out the content of a file. Any function.
Your best bet is PostSharp. I have no idea if they have what you need, but it's certainly worth checking. By the way, make sure to publish the answer here if you find one.
EDIT: also, googling "postsharp caching" gives some links, like this one: Caching with C#, AOP and PostSharp
UPDATE: I recently stumbled upon this article: Introducing Attribute Based Caching. It describes a postsharp-based library on http://cache.codeplex.com/ if you are still looking for a solution.
I have just the same problem: multiple expensive methods in my app whose results I need to cache. Some time ago I just copy-pasted similar code, but then I decided to factor this logic out of my domain.
This is how I did it before:
static List<News> _topNews = null;
static DateTime _topNewsLastUpdateTime = DateTime.MinValue;
const int CacheTime = 5; // In minutes
public IList<News> GetTopNews()
{
if (_topNewsLastUpdateTime.AddMinutes(CacheTime) < DateTime.Now)
{
    _topNews = GetList(TopNewsCount);
    _topNewsLastUpdateTime = DateTime.Now; // without this, the cache never takes effect
}
return _topNews;
}
And that is how I can write it now:
public IList<News> GetTopNews()
{
return Cacher.GetFromCache(() => GetList(TopNewsCount));
}
Cacher - is a simple helper class, here it is:
public static class Cacher
{
const int CacheTime = 5; // In minutes
static Dictionary<long, CacheItem> _cachedResults = new Dictionary<long, CacheItem>();
public static T GetFromCache<T>(Func<T> action)
{
long code = action.GetHashCode();
if (!_cachedResults.ContainsKey(code))
{
lock (_cachedResults)
{
if (!_cachedResults.ContainsKey(code))
{
_cachedResults.Add(code, new CacheItem { LastUpdateTime = DateTime.MinValue });
}
}
}
CacheItem item = _cachedResults[code];
if (item.LastUpdateTime.AddMinutes(CacheTime) >= DateTime.Now)
{
return (T)item.Result;
}
T result = action();
_cachedResults[code] = new CacheItem
{
LastUpdateTime = DateTime.Now,
Result = result
};
return result;
}
}
class CacheItem
{
public DateTime LastUpdateTime { get; set; }
public object Result { get; set; }
}
A few words about Cacher. You might notice that I don't use Monitor.Enter() ( lock(...) ) while computing results. That's because copying the CacheItem reference (the return (T)item.Result; line) is a thread-safe operation: it is performed as a single atomic read. It is also OK if more than one thread changes this reference at the same time; all of the values will be valid.
You could add a dictionary to your class, using a comma-separated string including the function name as the key and the result as the value. Then your functions can check the dictionary for the existence of that value. Save the dictionary in the cache so that it exists for all users.
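A quick sketch of that suggestion; the key format and the use of HttpRuntime.Cache are my own choices here:

using System.Web;

public int ExpensiveAdd(int a, int b)
{
    // "FunctionName,arg1,arg2" as the comma-separated key
    string key = string.Format("ExpensiveAdd,{0},{1}", a, b);

    object cached = HttpRuntime.Cache[key];
    if (cached != null)
        return (int)cached;

    int result = a + b; // stand-in for the expensive computation
    HttpRuntime.Cache.Insert(key, result);
    return result;
}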
PostSharp is your one-stop shop for this if you want to create a [Cache] attribute (or similar) that you can stick on any method anywhere. The last time I used PostSharp I could never get past how slow it made my builds (this was back in 2007-ish, so that might not be relevant anymore).
An alternate solution is to look into using RenderPartial with ASP.NET MVC in combination with OutputCaching. This is a great solution for serving HTML for widgets / page regions.
Another solution that would be with MVC would be to implement your [Cache] attribute as an ActionFilterAttribute. This would allow you to take a controller method and tag it to be cached. It would only work for controller methods since the AOP magic only can occur with the ActionFilterAttributes during the MVC pipeline.
Implementing AOP through ActionFilterAttribute has evolved to be the go-to solution for my shop.
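Here is a sketch of what such a [Cache] ActionFilterAttribute could look like; the key scheme (raw URL) and the expiration policy are assumptions, not a production design:

using System;
using System.Web.Caching;
using System.Web.Mvc;

public class CacheAttribute : ActionFilterAttribute
{
    public int DurationMinutes { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string key = filterContext.HttpContext.Request.RawUrl;
        var cached = filterContext.HttpContext.Cache[key] as ActionResult;
        if (cached != null)
            filterContext.Result = cached; // short-circuit: skip the action entirely
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        string key = filterContext.HttpContext.Request.RawUrl;
        if (filterContext.HttpContext.Cache[key] == null)
            filterContext.HttpContext.Cache.Insert(key, filterContext.Result, null,
                DateTime.UtcNow.AddMinutes(DurationMinutes), Cache.NoSlidingExpiration);
    }
}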
AFAIK, frankly, no.
But it would be quite an undertaking to implement this within the framework so that it works generically for everybody in all circumstances. You could, however, tailor something sufficient to your needs (where simplicity is relative to those needs, obviously) by using abstraction, inheritance, and the existing ASP.NET Cache.
If you don't need attribute configuration but accept code configuration, maybe MbCache is what you're looking for?
I have a class library that wraps the command line client for Mercurial.
My intention is to implement support for all the built-in commands, but in addition to those, there's a ton of extensions out there.
So I need to make my library extendable in the sense that both others and I can add support for extensions. I plan on adding support for some of the more popular and typical extensions (at the very least quite a few of those that come bundled with Mercurial), but I still want to be able to extend it from the outside.
At the moment, the syntax of a command look like this:
Repo.Execute(new CommitCommand
{
Message = "Your commit message",
AddRemove = true,
});
This, however, doesn't lend itself very easily to extensions without the programmer feeling that the extension is just a tacked-on part.
For instance, let's assume I expose a public collection of additional command line arguments, so that I could manually do this:
var cmd = new CommitCommand
{
Message = "Your commit message",
AddRemove = true,
};
cmd.Arguments.Add("--my-commit-extension");
Repo.Execute(cmd);
There seems to be no easy way for me to get that additional extension added in such a way that it can be set as part of the object initializer.
I've been thinking of adding, or perhaps switching, to a fluent interface syntax. In this case, you could write something like this:
Repo.Execute(new CommitCommand()
.Message("Your commit message")
.AddRemove()
.MyCommitExtension());
However, I see people don't like fluent interfaces, they feel they become too chatty.
What other options do I have?
What I want, basically:
One common syntax style
For both built-in things
As well as extensions added by users of my library
I envision that users of my library would extend it by adding new classes and extension methods to get IntelliSense support, but extension methods can't be used in object initializers, which means that all extensions would look like an afterthought. That's not what I want.
Any ideas are welcome.
I'm not familiar with Mercurial and your question seems too general to address specifically, but I can address one particular comment.
var cmd = new CommitCommand
{
Message = "Your commit message",
AddRemove = true,
};
cmd.Arguments.Add("--my-commit-extension");
Repo.Execute(cmd);
If CommitCommand.Arguments is IList<T> you already have the ability to use initializer syntax:
class CommitCommand
{
public string Message { get; set; }
public bool AddRemove { get; set; }
public List<string> Arguments = new List<string>();
}
Repo.Execute(new CommitCommand
{
Message = "Your commit message",
AddRemove = true,
Arguments = { "--my-commit-extension", "--my-other-commit-extension" }
});
I too am not that familiar with Mercurial, but one option is to throw intellisense out and use anonymous types:
Repo.Execute(new
{
Message = "Your commit message",
AddRemove = true,
MyCommitExtension = (string)null // a typed null; a bare null won't compile in an anonymous type
});
You cannot use hyphens in property names, so you'd need to convert from PascalCase to hyphen-lower-case. Alternatively, you could just replace underscores with hyphens.
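The PascalCase conversion itself is mechanical; a possible sketch (the regex is my own choice):

using System.Text.RegularExpressions;

static string ToArgumentName(string propertyName)
{
    // "MyCommitExtension" -> "--my-commit-extension"
    return "--" + Regex.Replace(propertyName, "(?<=.)([A-Z])", "-$1").ToLowerInvariant();
}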
I'm not sure I'd recommend this approach without knowing more about the frequency with which people are going to use these extensions. This approach works well in the ASP.NET MVC framework when dealing with HTML attributes, but that scenario is different in that the framework doesn't need to handle the values, it just writes them into the output directly. If you are taking conditional actions based upon the values provided in this way, or you want your API to be more discoverable to those who don't know Mercurial's command line syntax, then you might try another approach.
As for me, a fluent interface is OK. But if you want to avoid it, then I would probably use something like this:
interface IExtension { ... }
class SomeExtension1 : IExtension { ... }
class SomeExtension2 : IExtension { ... }
class CommitCommand
{
public string Message;
public bool AddRemove;
public readonly IList<IExtension> Extensions = new List<IExtension>();
}
It will allow commands to be used in this way:
new CommitCommand
{
Message = "",
AddRemove = true,
Extensions = {new SomeExtension1(), new SomeExtension2()}
};