Is there any built-in possibility to tell WCF the exact order in which I want my custom operation invokers to be executed?
Some background: I have several custom operation invokers in WCF and each of them performs one task, like:
Set active user
Check for access rights
Set culture information
etc.
Order is very important, because I first need to determine the user and only then check rights.
Is there any built-in possibility to tell WCF the exact order?
No. There's no built-in WCF interface for that.
Can you control the order through configuration?
Yes. The order of execution of the different IOperationInvokers is predictable and controllable through configuration. You can use this to meet your requirements.
IOperationInvoker Background
Carlos Figueira’s blog: WCF Extensibility – IOperationInvoker gives an example of a custom invoker. Probably too much information, but it shows how multiple invokers chain together and how they are configured and applied to the operation through a WCF behavior.
My point is: operation invokers are interceptors. Each time a new invoker is added to the operation, it stores a reference to the previous one.
In other words, from the example, the behavior that applies the invoker looks like this:
public class CacheableOperationAttribute : Attribute, IOperationBehavior
{
    // omitting lots of code...

    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        dispatchOperation.Invoker = new CachingOperationInvoker(dispatchOperation.Invoker, this.secondsToCache);
    }
}
And the invoker stores the previous invoker:
public class CachingOperationInvoker : IOperationInvoker
{
    // omitting lots of code...

    public CachingOperationInvoker(IOperationInvoker originalInvoker, double cacheDuration)
    {
        this.originalInvoker = originalInvoker;
        this.cacheDuration = cacheDuration;
    }
}
Then the Invoke method looks like this:

public object Invoke(object instance, object[] inputs, out object[] outputs)
{
    // do this invoker's work before the others?...

    // at some point, call the next invoker
    object result = this.originalInvoker.Invoke(instance, inputs, out outputs);

    // do this invoker's work after the others?...

    return result;
}
Note: you need to know each invoker's implementation (specifically, when it calls the next invoker in the stack) to fully understand how multiple invokers sequence. There are no rules or conventions on this (for good reason).
Configuration
WCF behaviors can be added to an operation in a couple of different places: code, config file, etc. Having that many options can cause confusion (and bugs) for your use case.
If your operation invokers are tightly coupled, my suggestion would be to create a single custom behavior that adds all the IOperationInvokers in the right order, as sketched below.
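For illustration, here is a rough sketch of such a behavior (UserInvoker, RightsInvoker and CultureInvoker are hypothetical invoker types, each assumed to wrap the previous invoker the same way CachingOperationInvoker does above):

public class OrderedInvokersAttribute : Attribute, IOperationBehavior
{
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        // Each invoker wraps the one assigned before it, so the invoker added
        // last runs first. Assuming each invoker does its work *before*
        // delegating, this runs UserInvoker, then RightsInvoker, then CultureInvoker.
        IOperationInvoker invoker = dispatchOperation.Invoker;
        invoker = new CultureInvoker(invoker);
        invoker = new RightsInvoker(invoker);
        invoker = new UserInvoker(invoker);
        dispatchOperation.Invoker = invoker;
    }

    public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { }
    public void Validate(OperationDescription operationDescription) { }
}

Because the whole chain is assembled in one method, the execution order is explicit and can't be scrambled by applying separate behaviors in an unexpected sequence.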
So while you can't "tell" WCF the execution order, you can predictably control it.
Related
I want to write my own Logging classes (in C#) which implement a standard interface, which I can call from any part of the code.
My idea is to have multiple Log classes implement the Logger interface, each for its specific log destination, for example, a FileLogger will implement logging to a file, a TextBox logger will implement logging into a Multi Line TextBox in a Form, a DBLogger will implement logging to a database table, etc.
Further, each logger class can have nested or chained logger classes, so that a single call to the Log() method from the application code can log the message to multiple destinations; for example, log to a file and a textbox on a form in a single call.
The difficulty I am facing is this:
Usually I log to a running log file (which will contain all log messages required for debugging), a review log file (which will contain only log messages to be reviewed by the user, or which require user action), a Multi Line textbox on screen (which will replicate all log messages to give a progress indication to the user), and another Multi Line textbox (which will log only messages required for user to review).
When I call logger.Log(message), some messages may not apply to a particular log destination. For example, some message may be intended to be logged only in a running log file or progress textbox, but not in the user review textbox, and vice versa.
Since the loggers will be chained so that a single function call can log into all required destinations, how can a particular logger identify that the log message is not intended for it and hence ignore the log message?
My sample log interface is:
public interface Logger
{
    void Log(string msg);
    void Log(string msgType, string msg);
    void InitLogSession();
    void EndLogSession();
    void AddLogger(Logger chainedLogger);
    void RemoveLogger(Logger chainedLogger);
}
public class FileLogger : Logger
{
    // implement methods
}

public class TextBoxLogger : Logger
{
    // implement methods
}

public class DBLogger : Logger
{
    // implement methods
}
EDIT 1:
To be more precise, there could be 4 loggers: 2 file loggers and 2 textbox loggers. Suppose a particular message is meant for one of the textbox loggers and one of the file loggers; how should my design handle this?
EDIT 2:
Please do not suggest existing logging frameworks. I just want to write it on my own!
EDIT 3:
Ok. I have a design. Please give your feedback and perhaps fill in the gaps.
The revised interface:
public interface Logger
{
    void Log(string msg);
    void Log(string msgType, string msg);
    void Log(int loggerIds, string msg);
    void Log(int loggerIds, string msgType, string msg);
    void InitLogSession();
    void EndLogSession();
    int getLoggerId();
}
public enum LoggerType
{
    File,
    TextBox
};

public class LoggerFactory
{
    public Logger getLogger(LoggerType loggerType)
    {
        // create the requested logger, assign it the next power-of-2 id, and return it
    }
}
The LoggerFactory class will be the sole way to instantiate a logger. This class will assign a unique id to each instance of a logger. This unique id will be a power of 2. For example, the 1st logger will get id 1, the 2nd will get 2, the 3rd will get 4, the 4th will get 8, and so on.
The returned logger object can be typecast to the specific class, and further values like filePath, textbox, etc. can be set by the caller; or else I can have multiple methods in LoggerFactory, one for each type of logger, each accepting specific parameters.
So, suppose we have 4 loggers with ids 1,2,4,8.
A particular message which has to be processed by the 1st and 3rd logger (i.e. logger ids 1 and 4) has to be logged using the function:
public void Log(int loggerIds, string msg);
The value to be passed as loggerIds should be binary 0101 (i.e. decimal 5, or 1 | 4). Each logger will check whether its logger id bit is ON. Only if it is will the logger log the message.
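A sketch of what that check could look like inside each logger (this.loggerId is the power-of-2 id assigned by the factory):

public void Log(int loggerIds, string msg)
{
    // log only if this logger's bit is set in the supplied mask
    if ((loggerIds & this.loggerId) != 0)
    {
        Log(msg); // write to this logger's destination
    }
}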
Now in the function signatures I have mentioned the int type, but which type is best optimised for performing bit manipulations and comparisons?
In this approach, there can probably be a limit on the max no. of loggers, but that is fine with me. Please give your feedback.
Note: Currently I am still on .NET 2.0. If possible, suggest solution within .NET 2.0, else fine, I can move to higher versions.
CONS of this design: Each class which needs to log, needs to know about all the available loggers instantiated by the application, and accordingly set up the bit pattern. Any ideas how to have a loosely coupled design?
Why don't you look at (or indeed use) an existing logging framework such as log4net or NLog?
They have the concept of a log level (e.g. trace, info, error etc) as well as being able to filter by the name of the log (which is normally the fully qualified type name which invoked the logging call). You can then map these to one or more 'targets'.
"Please do not suggest existing logging frameworks. I just want to write it on my own !"
Accepted answer: Don't reinvent the wheel! Use this existing logging framework!
facepalm
The best answer, which is my answer, goes something like this.
Use interfaces if you want plug and play functionality. You can make it easily configurable. Here's the high level run down.
Use a config file to indicate which type of logger you want, which implements your logging interface.
Use reflection to instantiate that type from your config file at runtime (see the sketch after this list).
Pass the logger interface you just resolved into your classes via constructor injection.
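A minimal sketch of that approach, assuming an appSettings key named "loggerType" and a hypothetical ILogger interface of your own design (error handling omitted):

using System;
using System.Configuration;

public interface ILogger
{
    void Log(string msg);
}

public static class LoggerResolver
{
    public static ILogger Resolve()
    {
        // e.g. <add key="loggerType" value="MyApp.FileLogger, MyApp" /> in app.config
        string typeName = ConfigurationManager.AppSettings["loggerType"];
        Type type = Type.GetType(typeName, true);
        return (ILogger)Activator.CreateInstance(type);
    }
}

The resolved ILogger is then handed to your classes through their constructors, so none of them ever names a concrete logger type.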
You're not reinventing the wheel by designing by interface.
If you make your interface general enough, it is implementation-agnostic (ideally).
This means if log4net goes to crap, or is no longer supported, you don't have to RIP OUT AND MODIFY ALL YOUR CALLING CODE. This is like hard-wiring a lamp directly into your house to turn it on. For the love of god, please don't do that.
The interface defines the contract by which the components interact, not the implementation.
The only thing I can think of is: look at existing logging frameworks, find the common elements, and write your interface as the intersection of the common features. Obviously there are going to be features that you miss. It depends on how much flexibility you want. You can use log4net, or the Microsoft Event Viewer logger, or both! No implementation details are re-implemented. And it is a MUCH less coupled system than having everything in your code tied to one technology/framework.
As devdigital wrote, these frameworks usually do this by providing designated methods for logging, like Warn("..."), Fail("..."), and so on.
You could also look at the ILogger interface of the Castle project's logging facility (try searching for the ILogger.cs source code).
If you still adhere to your approach of chained loggers with a common interface (for which you would also have to implement a chaining mechanism), you would have to provide a kind of logging level to your Log() method. This could be a plain integer or an enum.
Like this:
public interface Logger
{
    void Log(LogLevel level, string msg);
    void Log(LogLevel level, string msgType, string msg);
    void InitLogSession();
    void EndLogSession();
    void AddLogger(Logger chainedLogger);
    void RemoveLogger(Logger chainedLogger);
}
With a logging level enum like this:
public enum LogLevel
{
    Info,
    Warn,
    Debug,
    Error,
    Fail
}
The loggers to use would then be selected within a chain of responsibility, roughly as sketched below.
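A minimal sketch of that selection, assuming each logger is constructed with the minimum level it cares about (FileLogger and its fields are hypothetical; the file-writing details are omitted):

using System.Collections.Generic;

public class FileLogger : Logger
{
    private readonly LogLevel minimumLevel;
    private readonly List<Logger> chainedLoggers = new List<Logger>();

    public FileLogger(LogLevel minimumLevel)
    {
        this.minimumLevel = minimumLevel;
    }

    public void Log(LogLevel level, string msg)
    {
        // handle the message only if it meets this logger's threshold...
        if (level >= this.minimumLevel)
        {
            // write msg to the file here
        }

        // ...then always pass it along so the other loggers can decide for themselves
        foreach (Logger next in chainedLoggers)
        {
            next.Log(level, msg);
        }
    }

    public void Log(LogLevel level, string msgType, string msg) { Log(level, msgType + ": " + msg); }
    public void InitLogSession() { }
    public void EndLogSession() { }
    public void AddLogger(Logger chainedLogger) { chainedLoggers.Add(chainedLogger); }
    public void RemoveLogger(Logger chainedLogger) { chainedLoggers.Remove(chainedLogger); }
}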
I wrote my own logger some time ago. To be honest it was not as good as those available for free and I realized that I was trying to re-invent a wheel that was already round!
I see that you want to write your own code but it might still be an idea to look at open source solutions and perhaps use them or modify them for your own specific needs
I now use TracerX: http://www.codeproject.com/Articles/23424/TracerX-Logger-and-Viewer-for-NET This is an open source project, so it is easy to modify the source code if you need to. The other loggers mentioned are also good, of course.
EDIT
This is based on the accepted answer to my question here: How to pass status information to the GUI in a loosely coupled application, so I claim no originality in this. Your log messages are simple at the moment, I think.
My suggestion is that you use a message type that can process itself, e.g. send itself to different loggers, based either on logic passed to it at run time or by using a factory to create different message types depending on run-time conditions.
So:
Create an abstract message class or interface that has a Process method.
Create a number of message types, inheriting from the abstract class or interface, that represent the different kinds of logging you want to carry out. The Process method could determine where to send them.
Consider using a factory to create the message type you need at run time, so you don't need to decide which types you will need in advance.
When you generate a log message, use its Process method to route the message to the loggers you want it to go to (see the sketch after this list).
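A minimal sketch of that idea, reusing the question's Logger interface (LogMessage, ReviewMessage and the IReviewDestination marker are hypothetical names):

using System.Collections.Generic;

public abstract class LogMessage
{
    private readonly string text;

    protected LogMessage(string text)
    {
        this.text = text;
    }

    public string Text { get { return text; } }

    // each concrete message routes itself to the loggers it applies to
    public abstract void Process(IEnumerable<Logger> availableLoggers);
}

// marker implemented by loggers whose destination is meant for user review
public interface IReviewDestination { }

public class ReviewMessage : LogMessage
{
    public ReviewMessage(string text) : base(text) { }

    public override void Process(IEnumerable<Logger> availableLoggers)
    {
        foreach (Logger logger in availableLoggers)
        {
            // only review destinations (e.g. the review file and review textbox) get this message
            if (logger is IReviewDestination)
            {
                logger.Log(Text);
            }
        }
    }
}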
This seems like a good place to use extension methods.
Create your base class, then create the extension methods for it.
BaseLogger(logMessage).ToTextBoxLog().ToFileLog().ToDatabaseLog();
This way, you always call the BaseLogger and then only the extension methods where needed.
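A minimal sketch of how those extension methods might look (LogMessage and the destination calls are hypothetical; note that extension methods require C# 3.0 or later). Each method returns the message, which is what lets the calls chain:

public class LogMessage
{
    public string Text;
}

public static class LoggerExtensions
{
    public static LogMessage ToFileLog(this LogMessage msg)
    {
        // append msg.Text to the log file here
        return msg; // returning the message keeps the chain going
    }

    public static LogMessage ToTextBoxLog(this LogMessage msg)
    {
        // write msg.Text to the textbox here
        return msg;
    }
}

// usage: new LogMessage { Text = "done" }.ToFileLog().ToTextBoxLog();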
.NET now provides the ILogger interface which can be used with a variety of .NET or 3rd party logging tools through dependency injection.
Using this you can separate your logging functionality in code, from the concrete implementation in your architecture, and can later swap out loggers without significant modification to your business code.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-6.0
https://learn.microsoft.com/en-us/dotnet/core/extensions/logging?tabs=command-line
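For illustration, a minimal sketch of constructor-injecting that interface (OrderService and PlaceOrder are hypothetical names; ILogger<T> comes from Microsoft.Extensions.Logging):

using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger)
    {
        _logger = logger;
    }

    public void PlaceOrder(int orderId)
    {
        // the concrete provider (console, NLog, Serilog, ...) is chosen at composition time
        _logger.LogInformation("Placing order {OrderId}", orderId);
    }
}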
Thanks for looking!
Background
I have an extension method that is used to wrap a given method in a try/catch and I am adding code for logging any caught exceptions:
public static T HandleServerError<T>(this Func<T> func)
{
    T result = default(T);

    try
    {
        result = func();
    }
    catch (Exception ex)
    {
        //******************************
        //Code for logging will go here.
        //******************************
        ErrorHandlers.ThrowServerErrorException(ex);
    }

    return result;
}
Here is how the method is called:
var result = new Func<SomeClass.SomeType>(() => SomeClass.SomeMethod(id, name, color, quantity)).HandleServerError();
return result;
As you can see, whatever method I am calling is injected into the extension method and executed inside the try/catch.
We will be using NLog or ELMAH for logging, but that is largely irrelevant to this question.
Problem
If something goes wrong, I need to log as much information about the delegated method as possible, since a message like "Object reference not set to an instance of an object" is not in itself helpful.
I would like to log the class and name of the method being called as well as the parameters in the method signature along with their values. If possible, I would even like to log which line failed, and finally the actual stack trace.
I am guessing that I need to use reflection for this and maybe catch the binding flags somehow as the injected method executes but I am not entirely sure if that is the best approach or if it is even feasible.
Question
Using C#, how do I get the meta information (i.e. method name, class of origin, parameters, parameter values) about an injected/delegated method?
Thanks in advance.
It seems to me that there is a possibility for you to improve the way you are adding this logging cross-cutting concern to your application.
The main issue here is that although your solution prevents you from making any changes to SomeClass.SomeMethod (or any called method), you still need to make changes to the consuming code. In other words you are breaking the Open/closed principle, which tells us that it must be possible to make these kinds of changes without changing any existing code.
You might think I'm exaggerating, but you probably already have over a hundred calls to HandleServerError in your application, and the number of calls will only be growing. And you'll soon add even more of those 'functional decorators' to the system. Did you ever think about doing any authorization checks, method argument validation, instrumentation, or audit trailing? And you must admit that doing new Func<T>(() => someCall).HandleServerError() just feels messy, doesn't it?
You can resolve all these problems, including the problem of your actual question, by introducing the right abstraction to the system.
First step is to promote the given method arguments into a Parameter Object:
public class SomeMethodParameters
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Color Color { get; set; }
    public decimal Quantity { get; set; }
    public decimal Result { get; set; }
}
Instead of passing all the individual arguments into a method, we can pass them all together as one single object. What's the use of that, you may say? Read on.
Second step is to introduce a generic interface to hide the actual logic of the SomeClass.SomeMethod (or in fact any method) behind:
public interface IMethodHandler<TParameter>
{
    void Handle(TParameter parameter);
}
For each (business) operation in the system, you can write an IMethodHandler<TParameter> implementation. In your case you could simply create an implementation that wraps the call to SomeClass.SomeMethod, like this:
public class SomeMethodHandler
    : IMethodHandler<SomeMethodParameters>
{
    public void Handle(SomeMethodParameters parameter)
    {
        parameter.Result = SomeClass.SomeMethod(
            parameter.Id,
            parameter.Name,
            parameter.Color,
            parameter.Quantity);
    }
}
It might look a bit silly to do things like this, but it allows you to implement this design quickly, and move the logic of the static SomeClass.SomeMethod inside of the SomeMethodHandler.
Third step is let consumers depend on a IMethodHandler<SomeMethodParameters> interface, instead of letting them depend on some static method in the system (in your case again the SomeClass.SomeMethod). Think for a moment what the benefits are of depending on such abstraction.
One interesting result of this is that it makes it much easier to unit test the consumer. But perhaps you're not interested in unit testing. You are, however, interested in loose coupling. When consumers depend on such an abstraction instead of a real implementation (especially static methods), you can do all kinds of crazy things, such as adding cross-cutting concerns like logging.
A nice way to do this is to wrap IMethodHandler<T> implementations with a decorator. Here is a decorator for your use case:
public class LoggingMethodHandlerDecorator<T>
    : IMethodHandler<T>
{
    private readonly IMethodHandler<T> handler;

    public LoggingMethodHandlerDecorator(
        IMethodHandler<T> handler)
    {
        this.handler = handler;
    }

    public void Handle(T parameters)
    {
        try
        {
            this.handler.Handle(parameters);
        }
        catch (Exception ex)
        {
            //******************************
            //Code for logging will go here.
            //******************************
            ErrorHandlers.ThrowServerErrorException(ex);
            throw;
        }
    }
}
See how the Handle method of this decorator contains the code of your original HandleServerError<T> method? It's in fact not that much different from what you were already doing, since the HandleServerError 'decorated' (or 'extended') the behavior of the original method with new behavior. But instead of using method calls now, we're using objects.
The nice thing about all this is that this single generic LoggingMethodHandlerDecorator<T> can be wrapped around every single IMethodHandler<T> implementation and used by every consumer. This way we can add cross-cutting concerns such as logging without either the consumer or the method knowing about it. This is the Open/closed principle.
But there is something else really nice about this. Your initial question was about how to get the information about the method name and the parameters. Well, all this information is easily available now, because we've wrapped all arguments in an object instead of calling some custom method wrapped inside a Func delegate. We could implement the catch clause like this:
string messageInfo = string.Format("<{0}>{1}</{0}>",
    parameters.GetType().Name,
    string.Join("",
        from property in parameters.GetType().GetProperties()
        where property.CanRead
        select string.Format("<{0}>{1}</{0}>",
            property.Name,
            property.GetValue(parameters, null))));
This serializes the name of the TParameter object with its values to an XML format. Or you can of course use .NET's XmlSerializer to serialize the object to XML, or use any other serialization you need. All the information is available in the metadata, which is quite nice. When you give the parameter object a good and unique name, it allows you to identify it in the log file right away. And together with the actual parameters and perhaps some context information (such as datetime, current user, etc.) you will have all the information you need to fix a bug.
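For instance, a minimal sketch of the XmlSerializer route (assuming the parameter type is public and has a parameterless constructor, which XmlSerializer requires):

using System.IO;
using System.Xml.Serialization;

public static class ParameterLogFormatter
{
    public static string ToXml<T>(T parameters)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(T));

        using (StringWriter writer = new StringWriter())
        {
            // produces e.g. "<SomeMethodParameters><Id>5</Id>...</SomeMethodParameters>"
            serializer.Serialize(writer, parameters);
            return writer.ToString();
        }
    }
}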
There is one difference between this LoggingMethodHandlerDecorator<T> and your original HandleServerError<T>, and that is the final throw statement. Your implementation implements some sort of ON ERROR RESUME NEXT, which might not be the best thing to do. Is it actually safe to continue (and return the default value) when the method failed? In my experience it usually isn't, and continuing at this point might make the developer writing the consuming class think that everything works as expected, or might even make the user of the application think that everything worked out as expected (that his changes were saved, for instance, while in fact they weren't). There's usually not much you can do about this, and wrapping everything in catch statements only makes it worse, although I can imagine that you want to log this information.
Don't be fooled by user requirements such as "the application must always work" or "we don't want to see any error pages". Implementing those requirements by suppressing all errors will not help and will not fix the root cause. But nonetheless, if you really need to catch-and-continue, just remove the throw statement, and you'll be back to the original behavior.
If you want to read more about this way of designing your system: start here.
You can simply access its Method and Target properties, since it's basically like any other delegate.
Just use func.Method and func.Target.
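For example, a minimal sketch of pulling that metadata out of the delegate (bear in mind that for a lambda such as () => SomeClass.SomeMethod(...), Method refers to the compiler-generated lambda method, and any captured values live on the Target object):

using System;
using System.Reflection;

public static class DelegateInspector
{
    public static string Describe<T>(Func<T> func)
    {
        MethodInfo method = func.Method;

        string parameters = string.Join(", ",
            Array.ConvertAll(method.GetParameters(),
                p => p.ParameterType.Name + " " + p.Name));

        // e.g. "MyApp.SomeClass.SomeMethod(Int32 id, String name)"
        return string.Format("{0}.{1}({2})",
            method.DeclaringType.FullName, method.Name, parameters);
    }
}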
Is there a way to handle an exception thrown by the constructor of a WCF service, when that constructor takes in a dependency, and it is the instantiation of the dependency by the IoC container (AutoFac in this case) that causes the exception?
Consider a WCF service with the following constructor:
public InformationService(IInformationRetriever informationRetriever)
{
    _informationRetriever = informationRetriever;
}
//... the service later makes use of the injected InformationRetriever
The service uses AutoFac WcfIntegration and the AutofacWebServiceHostFactory (this happens to be a RESTful service).
Dependencies are registered in the global.asax.cs of the service, i.e.:
builder.RegisterType<InformationRetriever>()
       .As<IInformationRetriever>();
Now the InformationRetriever implementation performs some checks in its constructor to ensure everything is in place for it to be able to do its job. When it discovers a problem in this phase, it throws an exception.
However, I do not want the caller of the service to receive the AutoFac exception:
An exception was thrown while invoking the constructor ... on type InformationRetriever
Effectively I am trying to test:
Given the InformationService is running
When I call the GetSomeInformation() method
And the InformationRetriever cannot be instantiated
Then I want to return a friendly error message
And Log the actual exception
Is this a problem with my design, or is there a known pattern to overcome or prevent this problem?
I have hunted around and could not find any information on this type of problem.
Objects written in the DI style generally pass through two separate phases: composition and execution. The composition phase is where you wire up dependencies and do things like throw argument exceptions. You generally want to keep this phase free of meaningful behavior, as that allows you to surface errors in the configuration of your system. The second phase, execution, is where you use the output of the first phase (dependencies) to do your work.
Separating these two phases removes a lot of ambiguity and complexity. As an example, you don't try to mow your lawn while gassing up your lawnmower; that causes both activities to become more complex (and dangerous!)
In this case, InformationRetriever is conflating the composition and execution phases by performing meaningful work in its constructor. This mixing is causing exactly the issue you are trying to avoid: a meaningful business exception being wrapped in a composition exception. It is also unclear how to handle the exception, since the top-level invoker is Autofac and not the component which is actually asking InformationRetriever to do work.
I suggest striving to do the validation when calling on InformationRetriever; this removes the Autofac exception and allows InformationService to handle the exceptional situation without any trickery.
One potential downside of this approach is that the validation will happen on every call to InformationRetriever, rather than once in the constructor. You have two choices: 1) Let it happen every time, to be absolutely sure the work is valid to do, or 2) Keep track of whether you've done the check and only do it if you haven't before.
If you choose #2, you can keep InformationRetriever clean by using a decorator to wrap it in a validating version of the same interface:
public class ValidatingInformationRetriever : IInformationRetriever
{
    private readonly IInformationRetriever _baseRetriever;
    private bool _validated;

    public ValidatingInformationRetriever(IInformationRetriever baseRetriever)
    {
        _baseRetriever = baseRetriever;
    }

    public void Foo()
    {
        if (!_validated)
        {
            Validate();
            _validated = true;
        }

        _baseRetriever.Foo();
    }

    private void Validate()
    {
        // ...
    }
}
You can register it using Autofac's decorator support like so:
builder
    .RegisterType<InformationRetriever>()
    .Named<IInformationRetriever>("base");

builder.RegisterDecorator<IInformationRetriever>(
    (c, inner) => new ValidatingInformationRetriever(inner),
    fromKey: "base");
I'm not a big fan of constructors throwing exceptions for reasons other than bad arguments. I'd probably model my types a different way. But here are some ideas. At first I thought about doing something like this:
builder
    .Register(c =>
    {
        try
        {
            return (IInformationRetriever)new InformationRetriever();
        }
        catch (Exception)
        {
            return new FailoverInformationRetriever();
        }
    })
    .As<IInformationRetriever>();
... where FailoverInformationRetriever throws exceptions on member access. Another idea might be to do:
public InformationService(Lazy<IInformationRetriever> informationRetriever)
{
    _informationRetriever = informationRetriever;
}
and try/catch around usages of informationRetriever.Value inside InformationService. Another option you could go with, if the availability of InformationRetriever is known at app startup:
// inside your container builder:
if (InformationRetriever.IsAvailable())
{
    builder.RegisterType<InformationRetriever>()
           .As<IInformationRetriever>();
}

// inside InformationService, make the dependency optional
public InformationService(IInformationRetriever informationRetriever = null)
{
    _informationRetriever = informationRetriever;
}
Do any of those ideas help?
Many open source projects use a Configuration class and lambda's to clarify configuring a complex object. Take Mass Transit for example. A simple configuration would be like so.
Bus.Initialize(sbc =>
{
    sbc.UseMsmq();
    sbc.VerifyMsmqConfiguration();
    sbc.VerifyMsDtcConfiguration();
    sbc.UseMulticastSubscriptionClient();
    sbc.ReceiveFrom("msmq://localhost/test");
});
When you hover over Initialize in Visual Studio it says the parameter for the method call is Action<ServiceBusConfigurator>. I was wondering if anyone could show a simple example of how to use this pattern on a class. I don't even know what to call this type of pattern and my "GoogleFu" is not working as of yet. In this particular case I realize the method is operating on a singleton pattern. But I am ok with it being an instance method on a class.
An Action<ServiceBusConfigurator> is a method which accepts a single parameter of type ServiceBusConfigurator, does an "action" operating on that instance, and returns nothing (void).
The .NET BCL (starting from 3.5) comes with predefined generic delegate signatures: Action<T>, Action<T1, T2> (etc.) for methods which don't return a value, and Func<TResult>, Func<T, TResult> (etc.) for methods accepting zero or more parameters and returning a single result of type TResult.
When you create a method which accepts a delegate, you allow callers of your method to pass more than just data parameters - your method actually delegates a part of responsibility to external code. In your case, Bus.Initialize creates an instance of ServiceBusConfigurator (sbc), and then calls the specified action with the sbc instance as the parameter.
This basically lets your method control the lifetime of the configuration class instance. It is up to the caller to fill in the details, but the actual instance is provided by your class:
// this is not actual mass transit source code
public class BusCreator
{
    public static IBus Initialize(Action<IConfiguration> action)
    {
        // create the config instance here
        IConfiguration config = CreateDefaultConfig();

        // let callers modify it
        action(config);

        // use the final version to build the result
        return config.Build();
    }
}
The benefit is that your built instance (the imaginary IBus in this case) cannot be modified further. The configuration instance is created only briefly, passed to an external method, and then used to create an immutable final object:
IBus result = BusCreator.Initialize(cfg => cfg.BusType = BusType.MSMQ);
Two things to note in the line above:
The code inside the anonymous method is wrapped inside a delegate passed to the method. It is not executed until the Initialize method actually calls it.
The cfg parameter is created by the Initialize method and passed to the lambda. After the method returns, this object doesn't exist anymore (or is wrapped inside the resulting object).
To add to what others have said, this is an "entry point" into a fluent interface. The approach of using an Action callback to achieve this is a nice way of isolating the fluent interface in a way that is at the same time very extensible.
This resembles the Unit of Work pattern, which is commonly associated with transactions and persistence scenarios, but seems to fit to your example.
The following citation was taken from Martin Fowler's definition of the pattern:
A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.
If you swap "business transaction" for "initialization" and "database" for "configuration", you get a better idea of what's going on. Additionally, think of the action (the delegate in this case) as an atomic operation: either the new configuration is fully applied, or the current configuration is kept unchanged.
As noted, the action itself doesn't explicitly touch Bus. Even without knowing the details of the involved classes, we can guess how this interaction occurs:
The ServiceBusConfigurator may be read after the action is invoked, before the Initialize() method returns (most likely);
Bus might implement/extend ServiceBusConfigurator, so that Initialize() can pass this as the argument of the invoked action (less likely);
Bus may be static and visible to ServiceBusConfigurator, which in turn can change the configuration properties of Bus upon a call to ReceiveFrom() (extremely convoluted and, I hope, very unlikely).
These are some strategies that just popped up my mind right now. Many others may be suggested!
Assuming I have a list of financial transactions, I have a need to execute a list of validation rules against those transactions. An example would be I have a transaction to purchase a product, however first I need to validate that the account in the transaction has enough available funds, that the product is not sold out etc. As a result of these many rules the transaction will be marked as rejected, as well as an error code should be specified.
Naturally I am thinking of fronting my rules with an interface, allowing the executing code to roll through the rules, executing each one until the first one rejects the transaction.
Each rule will need to be configured with parameters (e.g. ValidateMinimumBalance will need to know that minimumBalance = 30). The result of executing a rule can be as simple as setting the rejection code and the error code on the transaction object, or as complicated as automatically modifying multiple other properties of the transaction.
My basic understanding of design patterns points to me either Strategy or Command patterns, but I am not entirely sure which one is better suited for this scenario.
Command Pattern
Each command will implement some sort of IValidate interface
The constructor of the command will take an instance of the transaction as the receiver in order to be able to read/validate the transaction as well as modify aspects of it. The constructor will also take an array of key/value pairs as parameters for the validation logic.
When I try to picture how the Strategy Pattern fits this scenario it looks very similar. In most examples the strategy is a simple object with a single method; however, in my case the strategy will need a reference to the transaction as well as the validation parameters.
Strategy is used more to swap out algorithms; it's not really used for chaining validations. If you are going to have one validation per type, then you could use Strategy. If you find yourself having to use multiple validators, or needing to reuse validators, I think you are going to have to either find a new way to do it (aka CoR) or use CoR within your strategy.
I would actually answer: other. I think a combination of the chain of responsibility pattern and the composite pattern, or a decorator for the validators, is much better suited to your needs.
Typing up an example implementation now... but at a high level:
Chain of Responsibility
The design would revolve around something like:
abstract class Handler
{
    protected Handler next;

    public Handler(Handler h)
    {
        this.next = h;
    }

    public abstract bool Validate(Request request);
    public abstract void Handle(Request request);
}

class CoreLogic : Handler
{
    public CoreLogic(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class ValidateBalance : Handler
{
    public ValidateBalance(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class MainApp
{
    static void Main()
    {
        Handler h = new ValidateBalance(new CoreLogic(null));
        h.Handle(new Request());
    }
}
Other useful links:
Chain of Responsibility on Wikipedia
A Strategy would be something used to 'parameterize' a Command (telling it how parts of the operation should be executed).
When I try to picture how the Strategy Pattern fits this scenario it looks very similar.
Similar? It should look identical.
The distinction is one of how the context and delegation works. In principle a Command is the "active" agent. A Strategy is injected into some active agent. That distinction is pretty subtle.
It barely changes the design. What does change is the expectation.
Command objects (more-or-less) stand alone. They're built to do their work, and then they can vanish. No one cares about them any more. Perhaps they also use the Memento pattern, and have some future life, but perhaps not.
Strategy objects (more-or-less) live with the object into which they're injected. A Strategy would be part of some larger object, and could be replaced by a different implementation without breaking or changing anything else.
But the essential interface is largely the same.
In most examples the strategy is a simple object with a single method,
Those are poor examples.
however in my case the strategy will need a reference to the transaction as well as validation parameters.
Not unusual. Nothing wrong with it.
but I am not entirely sure which one
is better suited for this scenario
Neither :)
I strongly recommend looking at Interpreter. Your validator rules are really just predicates formulated over your transactions. It's quite possible that you will soon need to combine these rules with AND, OR, NOT, etc., as sketched below.
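A minimal sketch of those combinators (Transaction stands for your domain's transaction type; MinimumBalanceRule and SoldOutRule in the usage line are hypothetical leaf rules):

public interface IRule
{
    bool IsSatisfiedBy(Transaction transaction);
}

public class AndRule : IRule
{
    private readonly IRule left;
    private readonly IRule right;

    public AndRule(IRule left, IRule right)
    {
        this.left = left;
        this.right = right;
    }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return left.IsSatisfiedBy(transaction) && right.IsSatisfiedBy(transaction);
    }
}

public class NotRule : IRule
{
    private readonly IRule inner;

    public NotRule(IRule inner)
    {
        this.inner = inner;
    }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return !inner.IsSatisfiedBy(transaction);
    }
}

// usage: var rule = new AndRule(new MinimumBalanceRule(30), new NotRule(new SoldOutRule()));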