Consider this class containing business logic:
public static class OrderShipper
{
public static void ShipOrder(Order order) {
AuthorizationHelper.AuthorizedUser();
using (new PerformanceProfiler()) {
OperationRetryHelper.HandleWithRetries(() => ShipOrderInTransaction(order));
}
}
private static void ShipOrderInTransaction(Order order) {
using (var transaction = new TransactionHelper()) {
ShipOrderInternal(order);
transaction.Commit();
}
}
private static void ShipOrderInternal(Order order) {
// lots of business logic
}
}
The class contains some business logic, and executes some crosscutting concerns as well. Although there is no doubt that this class violates the Open/Closed Principle, does it also violate the Single Responsibility Principle?
I'm in doubt, since the class itself is not responsible for authorizing the user, for profiling the performance and for handling the transaction.
There is no question that this is poor design, since the class still depends (statically) on those crosscutting concerns, but still: is it violating the SRP? If so, why?
It's a good question; the title is slightly misleading (it's unlikely you can build an application without "calling into other code"). Remember that the SOLID principles are guidelines rather than absolute rules that must be followed; if you take SRP to its logical conclusion, you will end up with one method per class. The way to minimise the impact of cross-cutting concerns is to create a facade that is as easy as possible to use. In your example you have done this well - each crosscutting concern takes only one line.
Another way to achieve this is through AOP, which is possible in C# by using PostSharp or through IoC interception.
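For illustration, here is a hand-rolled sketch of the interception idea without any AOP framework: the crosscutting concern lives in a decorator around an abstraction, so the shipping class itself keeps only business logic. IOrderShipper and ProfiledOrderShipper are hypothetical names; Order and PerformanceProfiler come from the question.

public interface IOrderShipper
{
    void ShipOrder(Order order);
}

// Decorator that adds the profiling concern around any IOrderShipper.
public class ProfiledOrderShipper : IOrderShipper
{
    private readonly IOrderShipper inner;

    public ProfiledOrderShipper(IOrderShipper inner)
    {
        if (inner == null) throw new ArgumentNullException("inner");
        this.inner = inner;
    }

    public void ShipOrder(Order order)
    {
        using (new PerformanceProfiler())
        {
            inner.ShipOrder(order);
        }
    }
}

The composition root (or an intercepting container) wraps the real shipper in the decorator, so callers never see the profiling code.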
There is nothing wrong with a class method coordinating the activities of some other classes; that's not breaking the SRP. You would break it if the logic of those classes were part of OrderShipper.
I'm not sure what PerformanceProfiler does, but it is the only component that looks out of place there.
Let's make it more visible by converting the class to a command:
// Command pattern
public class ShipOrder
{
ITransactionFactory _transactionFactory;
public ShipOrder(ITransactionFactory factory)
{
if (factory == null) throw new ArgumentNullException("factory");
_transactionFactory = factory;
}
[PrincipalPermission(SecurityAction.Demand, Role = "User")]
public void Execute(Order order)
{
if (order == null) throw new ArgumentNullException("order");
using (new PerformanceProfiler())
{
HandleWithRetries(() => ShipOrderInTransaction(order));
}
}
private void ShipOrderInTransaction(Order order)
{
using (var transaction = _transactionFactory.Create())
{
ShipOrderInternal(order);
transaction.Commit();
}
}
protected void ShipOrderInternal(Order order)
{
// business logic, which is divided into different protected methods.
}
}
Hence you can call it using:
var cmd = new ShipOrder(transactionFactory);
cmd.Execute(order);
That's pretty solid.
Yes, it does break SRP, at least according to the class name.
the class itself is not responsible for authorizing the user, for profiling the performance and for handling the transaction.
You are answering yourself: it should contain only the order-shipping logic. And it shouldn't be static (why is it static?!).
The solution provided by @jgauffin is a possibility, although I'm not entirely convinced that the OrderShipper should know about a transaction rather than just being part of one. Also, the performance profiler, IMO, has no place in this class. But with only this information I can't suggest a solution. Profiling is a crosscutting concern, though, and it might be better handled outside of this class, perhaps with an attribute.
Btw, using a message-driven approach (as suggested by jgauffin) would allow the infrastructure to provide profiling and reliability (HandleWithRetries) support.
Wondering if someone can throw some guidance my way. My standard application setup has always been an n-tier application (Presentation, Business, Data, and usually a Common). I've avoided setting up an IoC container for as long as I could (I've used them in other people's apps for ages, just never set one up), but I'm finally having to take the plunge.
My understanding is that IoC enables dependency injection, which in turn makes unit testing possible (well, a lot easier), so in my head I'd want to at least perform unit tests on the Business layer. But every example of setting up an IoC container like StructureMap puts it in the Presentation layer. So what I'm asking is: what is the 'best practice' for an n-tier app with an IoC container?
Thanks.
The primary benefit of DI is not unit testing (although that is certainly a benefit). The primary benefit is loose-coupling. An application that is "testable" is not necessarily loosely-coupled.
However, loose-coupling brings a lot more to the table than just testability.
Loose Coupling Benefits
Late Binding (services can be swapped with other services, often without changing existing code)
Extensibility (code can be extended, often without changing existing code)
Parallel development (abstract contracts are defined that multiple developers can adhere to)
Maintainability (classes with clearly defined responsibilities are easier to maintain)
Testability (classes are easier to test).
IMHO, when combining DI with software patterns, extensibility is definitely the main benefit. Consider the following types:
public interface IWriter
{
void WriteSomething();
}
public interface ISomeService
{
void Write();
}
You could extend a service by using a Decorator Pattern:
public class NullWriter : IWriter
{
public void WriteSomething()
{
// Do nothing - this is a "null object pattern".
}
}
public class HelloWriter : IWriter
{
public readonly IWriter innerWriter;
public HelloWriter(IWriter innerWriter)
{
if (innerWriter == null)
throw new ArgumentNullException("innerWriter");
this.innerWriter = innerWriter;
}
public void WriteSomething()
{
// Write first, then delegate to the decorated writer
Console.WriteLine("Hello.");
this.innerWriter.WriteSomething();
}
}
public class GoodbyeWriter : IWriter
{
public readonly IWriter innerWriter;
public GoodbyeWriter(IWriter innerWriter)
{
if (innerWriter == null)
throw new ArgumentNullException("innerWriter");
this.innerWriter = innerWriter;
}
public void WriteSomething()
{
// Write first, then delegate to the decorated writer
Console.WriteLine("Goodbye.");
this.innerWriter.WriteSomething();
}
}
public class SomeService : ISomeService
{
private readonly IWriter writer;
public SomeService(IWriter writer)
{
if (writer == null)
throw new ArgumentNullException("writer");
this.writer = writer;
}
public void Write()
{
this.writer.WriteSomething();
}
}
And the above would be wired up like:
// Composition Root
var nullWriter = new NullWriter();
var goodbyeWriter = new GoodbyeWriter(nullWriter);
var helloWriter = new HelloWriter(goodbyeWriter);
var service = new SomeService(helloWriter);
// End Composition Root
// Execute
service.Write();
//Writes:
//Hello.
//Goodbye.
Now that the scenario is set up, you can extend what SomeService does without altering any of the existing types. The only part of the application that needs to change is the composition root.
public class HowAreYouWriter : IWriter
{
public readonly IWriter innerWriter;
public HowAreYouWriter(IWriter innerWriter)
{
if (innerWriter == null)
throw new ArgumentNullException("innerWriter");
this.innerWriter = innerWriter;
}
public void WriteSomething()
{
// Write first, then delegate to the decorated writer
Console.WriteLine("How are you?");
this.innerWriter.WriteSomething();
}
}
// Composition Root
var nullWriter = new NullWriter();
var goodbyeWriter = new GoodbyeWriter(nullWriter);
var howAreYouWriter = new HowAreYouWriter(goodbyeWriter);
var helloWriter = new HelloWriter(howAreYouWriter);
var service = new SomeService(helloWriter);
// End Composition Root
// Execute
service.Write();
//Writes:
//Hello.
//How are you?
//Goodbye.
Convention over Configuration
One additional (often overlooked) benefit of DI is Convention over Configuration. When combining constructor injection with a DI container, many of them provide the ability to map ISomeService to SomeService automatically. Some containers (like StructureMap) also have the ability to build your own conventions.
The benefit isn't obvious because it really doesn't start to pay off until you are registering dozens of types using the convention. However, you can considerably reduce the amount of code it takes to compose your application if you use them.
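As a rough sketch of what such a convention looks like with StructureMap (the exact API varies between versions, so treat the calls below as an assumption rather than a recipe):

using StructureMap;

// The scanner maps ISomeService -> SomeService (and similar IFoo -> Foo pairs)
// by naming convention, so the straightforward registrations disappear.
var container = new Container(cfg =>
{
    cfg.Scan(scanner =>
    {
        scanner.TheCallingAssembly();
        scanner.WithDefaultConventions();
    });
});

var service = container.GetInstance<ISomeService>();

Decorator chains like the writers above still need explicit registration; the convention only covers the plain one-to-one mappings.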
N-Tier App
For a single application, there is normally a single composition root, as close to the entry point of the application as possible. In ASP.NET MVC, that would be within the Application_Start method in Global.asax.
However, this may vary depending on whether you consider the layers of the application design to be DI Friendly Libraries or DI Friendly Frameworks and whether you consider a piece of the application as being a "plug in" that you add to after it is built (basically, making a composition root that can load dynamic dependencies).
There are essentially 3 approaches that are commonly followed to solve this issue:
Make all of the types public and compose them in the same project that contains your presentation layer.
Put a composition root into each layer and make the public API of each layer use DI internally. Then compose the public API of each layer from the main project. In general, you will also need a public Compose method on each layer that is called from the main Compose method (see the sketch after this list).
Make a separate composition root project to compose all of the pieces together.
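For illustration, a rough sketch of the second option, again assuming StructureMap; every type name below is a hypothetical placeholder:

// Each layer exposes its own composition entry point...
public static class DataLayerComposition
{
    public static void Compose(ConfigurationExpression cfg)
    {
        cfg.For<ICustomerRepository>().Use<SqlCustomerRepository>();
    }
}

public static class BusinessLayerComposition
{
    public static void Compose(ConfigurationExpression cfg)
    {
        cfg.For<ICustomerService>().Use<CustomerService>();
    }
}

// ...and the main project's composition root calls them in order.
var container = new Container(cfg =>
{
    DataLayerComposition.Compose(cfg);
    BusinessLayerComposition.Compose(cfg);
});

The presentation project only knows each layer's Compose method, not the layer's internal types.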
I have even seen some people recommend putting all of your "layers" into a single project, and if you don't intend to use these pieces individually, there isn't much of a downside. The NuGet Gallery is one such project that is built this way.
IMHO, making everything public and putting the composition root in the main application is usually the best option for a single multi-layer application whose parts aren't intended to be used with another application.
Final Word
If you are serious about learning DI, pick up the book Dependency Injection in .NET by Mark Seemann. You might not think of DI as being a big enough area of development to study on its own, but this book really does provide many benefits that extend beyond just DI, such as SOLID principles.
I am using FluentValidation to validate my service operations. My code looks like:
using FluentValidation;
public interface IUserService
{
void Add(User user);
}
public class UserService : IUserService
{
private readonly IUserRepository userRepository; // assumed to be injected elsewhere
public void Add(User user)
{
new UserValidator().ValidateAndThrow(user);
userRepository.Save(user);
}
}
UserValidator derives from FluentValidation.AbstractValidator<User>.
DDD says that the domain layer has to be technology independent.
What I am doing is using a validation framework instead of custom exceptions.
Is it a bad idea to put a validation framework in the domain layer?
Just like the repository abstraction?
Well, I see a few problems with your design even if you shield your domain from the framework by declaring an IUserValidator interface.
At first it seems as if that would lead to the same abstraction strategy as for the Repository and other infrastructure concerns, but there's a huge difference in my opinion.
When using repository.save(...), you do not care at all about the implementation from the domain perspective, because how things are persisted is not a domain concern.
However, invariant enforcement is a domain concern, and you shouldn't have to dig into infrastructure details (the UserValidator can now be seen as such) to see what the invariants consist of. That's basically what you would end up doing if you go down that path, since the rules would be expressed in the framework's terms and would live outside the domain.
Why would it live outside?
domain -> IUserRepository
infrastructure -> HibernateUserRepository
domain -> IUserValidator
infrastructure -> FluentUserValidator
Always-valid entities
Perhaps there's a more fundamental issue with your design, and you wouldn't even be asking that question if you adhered to that school of thought: always-valid entities.
From that point of view, invariant enforcement is the responsibility of the domain entity itself, and therefore an entity shouldn't even be able to exist without being valid. Invariant rules are simply expressed as contracts, and exceptions are thrown when these are violated.
The reasoning behind this is that a lot of bugs come from the fact that objects end up in a state they should never have been in. To quote an example from Greg Young:
Let's propose we now have a SendUserCreationEmailService that takes a
UserProfile ... how can we rationalize in that service that Name is
not null? Do we check it again? Or more likely ... you just don't
bother to check and "hope for the best" you hope that someone bothered
to validate it before sending it to you. Of course using TDD one of
the first tests we should be writing is that if I send a customer with
a null name that it should raise an error. But once we start writing
these kinds of tests over and over again we realize ... "wait if we
never allowed name to become null we wouldn't have all of these tests" - Greg Young commenting on http://jeffreypalermo.com/blog/the-fallacy-of-the-always-valid-entity/
Now don't get me wrong, obviously you cannot enforce all validation rules that way, since some rules are specific to certain business operations, which prohibits that approach (e.g. saving draft copies of an entity). But those rules aren't to be viewed the same way as invariant enforcement, which covers rules that apply in every scenario (e.g. a customer must have a name).
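As a minimal sketch of the always-valid idea, using the "a customer must have a name" invariant from above:

public class Customer
{
    public string Name { get; private set; }

    public Customer(string name)
    {
        // The invariant is enforced at construction, so an invalid Customer
        // can never exist and downstream code never has to re-check it.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A customer must have a name.", "name");
        Name = name;
    }
}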
Applying the always-valid principle to your code
If we now look at your code and try to apply the always-valid approach, we clearly see that the UserValidator object has no place there.
public class UserService : IUserService
{
public void Add(User user)
{
//We couldn't even make it that far with an invalid User
new UserValidator().ValidateAndThrow(user);
userRepository.Save(user);
}
}
Therefore, there's no place for FluentValidation in the domain at this point. If you still aren't convinced, ask yourself how you would integrate value objects. Will you have a UsernameValidator to validate a Username value object every time it's instantiated? Clearly, that doesn't make any sense, and value objects would be quite hard to integrate with a non-always-valid approach.
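For instance, a minimal sketch of a self-validating Username value object; the rule lives in the type itself, so no external UsernameValidator is needed:

public class Username
{
    public string Value { get; private set; }

    public Username(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A username cannot be empty.", "value");
        Value = value;
    }
}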
How do we report all errors back when exceptions are thrown then?
That's actually something I struggled with and I've been asking that myself for a while (and I'm still not entirely convinced about what I'll be saying).
Basically, what I've come to understand is that it isn't the job of the domain to collect and return errors; that's a UI concern. If invalid data makes its way up to the domain, it just throws at you.
Therefore, frameworks like FluentValidation will find their natural home in the UI and will be validating view models rather than domain entities.
I know, it seems hard to accept that there will be some level of duplication, but this is mainly because you are probably a full-stack developer like me who deals with both the UI and the domain, when in fact those can and should probably be viewed as entirely different projects. Also, just like the view model and the domain model, view model validation and domain validation may be similar but serve different purposes.
Also, if you're still concerned about being DRY, someone once told me that code reuse is also "coupling" and I think that fact is particularly important here.
Dealing with deferred validation in the domain
I will not re-explain those here, but there are various approaches to deal with deferred validations in the domain such as the Specification pattern and the Deferred Validation approach described by Ward Cunningham in his Checks pattern language. If you have the Implementing Domain-Driven Design book by Vaughn Vernon, you can also read from pages 208-215.
It's always a question of trade-offs
Validation is an extremely hard subject and the proof is that as of today people still don't agree on how it should be done. There are so many factors, but at the end what you want is a solution that is practical, maintainable and expressive. You cannot always be a purist and must accept the fact that some rules will be broken (e.g you might have to leak some unobtrusive persistence details in an entity in order to use your ORM of choice).
Therefore, if you think that you can live with the fact that some FluentValidation details makes it to your domain and that it's more practical like that, well I can't really tell if it will do more harm than good in the long run but I wouldn't.
The answer to your question depends on what kind of validation you want to put into the validator class. Validation can be part of the domain model, and in your case you've implemented it with FluentValidation; I see no problem with that. The key thing about a domain model is that you can use it everywhere: for example, if your project contains a web front end, an API, and integrations with other subsystems, each module references your domain model and it works the same for all of them.
If I understood it correctly, I see no problem whatsoever in doing this as long as it is abstracted as an infrastructure concern just like your repo abstracts the persistence technology.
As an example, I have created for my projects an IObjectValidator that returns validators by object type, and a static implementation of it, so that I'm not coupled to the technology itself.
public interface IObjectValidator
{
void Validate<T>(T instance, params string[] ruleSet);
Task ValidateAsync<T>(T instance, params string[] ruleSet);
}
And then I implemented it with Fluent Validation just like this:
public class FluentValidationObjectValidator : IObjectValidator
{
private readonly IDependencyResolver dependencyResolver;
public FluentValidationObjectValidator(IDependencyResolver dependencyResolver)
{
this.dependencyResolver = dependencyResolver;
}
public void Validate<T>(T instance, params string[] ruleSet)
{
var validator = this.dependencyResolver
.Resolve<IValidator<T>>();
var result = ruleSet.Length == 0
? validator.Validate(instance)
: validator.Validate(instance, ruleSet: ruleSet.Join());
if(!result.IsValid)
throw new ValidationException(MapValidationFailures(result.Errors));
}
public async Task ValidateAsync<T>(T instance, params string[] ruleSet)
{
var validator = this.dependencyResolver
.Resolve<IValidator<T>>();
var result = ruleSet.Length == 0
? await validator.ValidateAsync(instance)
: await validator.ValidateAsync(instance, ruleSet: ruleSet.Join());
if(!result.IsValid)
throw new ValidationException(MapValidationFailures(result.Errors));
}
private static List<ValidationFailure> MapValidationFailures(IEnumerable<FluentValidationResults.ValidationFailure> failures)
{
return failures
.Select(failure =>
new ValidationFailure(
failure.PropertyName,
failure.ErrorMessage,
failure.AttemptedValue,
failure.CustomState))
.ToList();
}
}
Please note that I have also abstracted my IOC container with an IDependencyResolver so that I can use whatever implementation I want. (using Autofac at the moment).
So here is some bonus code for autofac ;)
public class FluentValidationModule : Module
{
protected override void Load(ContainerBuilder builder)
{
// registers type validators
builder.RegisterGenerics(typeof(IValidator<>));
// registers the Object Validator and configures the Ambient Singleton container
builder
.Register(context =>
SystemValidator.SetFactory(() => new FluentValidationObjectValidator(context.Resolve<IDependencyResolver>())))
.As<IObjectValidator>()
.InstancePerLifetimeScope()
.AutoActivate();
}
}
The code could be missing some of my helpers and extensions, but I believe it would be more than enough to get you going.
I hope I have helped :)
EDIT:
Since some fellow coders prefer not to use the Service Locator anti-pattern, here is a very simple example of how to remove it and still be happy :)
The code provides a dictionary property that should be filled with all your validators by Type.
public class SimpleFluentValidationObjectValidator : IObjectValidator
{
public SimpleFluentValidationObjectValidator()
{
this.Validators = new Dictionary<Type, IValidator>();
}
public Dictionary<Type, IValidator> Validators { get; private set; }
public void Validate<T>(T instance, params string[] ruleSet)
{
var validator = this.Validators[typeof(T)];
if(ruleSet.Length > 0) // no ruleset option for this example
throw new NotImplementedException();
var result = validator.Validate(instance);
if(!result.IsValid)
throw new ValidationException(MapValidationFailures(result.Errors));
}
public Task ValidateAsync<T>(T instance, params string[] ruleSet)
{
throw new NotImplementedException();
}
private static List<ValidationFailure> MapValidationFailures(IEnumerable<FluentValidationResults.ValidationFailure> failures)
{
return failures
.Select(failure =>
new ValidationFailure(
failure.PropertyName,
failure.ErrorMessage,
failure.AttemptedValue,
failure.CustomState))
.ToList();
}
}
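A hypothetical usage sketch, assuming the UserValidator from the question implements FluentValidation's IValidator<User>:

var objectValidator = new SimpleFluentValidationObjectValidator();
objectValidator.Validators[typeof(User)] = new UserValidator();

// Throws a ValidationException with the mapped failures when the user is invalid.
objectValidator.Validate(user);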
I have some debugging functions that I would like to refactor, but seeing as they are debugging functions, it seems like they would be less likely to follow proper design. They pretty much reach into the depths of the app to mess with things.
The main form of my app has a menu containing the debug functions, and I catch the events in the form code. Currently, the methods ask the application for a particular object, check that it's not null, and then mess with it. I'm trying to refactor so that I can remove the reference to this object everywhere and use an interface for it instead (the interface is shared by many other objects which have no relation to the debugging features).
As a simplified example, imagine I have this logic code:
public class Logic
{
public SpecificState SpecificState { get; private set; }
public IGenericState GenericState { get; private set; }
}
And this form code:
private void DebugMethod_Click(object sender, EventArgs e)
{
if (myLogic.SpecificState != null)
{
myLogic.SpecificState.MessWithStuff();
}
}
So I'm trying to get rid of the SpecificState reference. It's been eradicated from everywhere else in the app, but I can't think of how to rewrite the debug functions. Should they move their implementation into the Logic class? If so, what then? It would be a complete waste to put the many MessWithStuff methods into IGenericState as the other classes would all have empty implementations.
edit
Over the course of the application's life, many IGenericState instances come and go. It's a DFA / strategy pattern kind of thing. But only one implementation has debug functionality.
Aside: Is there another term for "debug" in this context, referring to test-only features? "Debug" usually just refers to the process of fixing things, so it's hard to search for this stuff.
Create a separate interface to hold the debug functions, such as:
public interface IDebugState
{
void ToggleDebugMode(bool enabled); // Or whatever your debug can do
}
You then have two choices, you can either inject IDebugState the same way you inject IGenericState, as in:
public class Logic
{
public IGenericState GenericState { get; private set; }
public IDebugState DebugState { get; private set; }
}
Or, if you're looking for a quicker solution, you can simply do an interface test in your debug-sensitive methods:
private void DebugMethod_Click(object sender, EventArgs e)
{
var debugState = myLogic.GenericState as IDebugState;
if (debugState != null)
debugState.ToggleDebugMode(true);
}
This conforms just fine with DI principles because you're not actually creating any dependency here, just testing to see if you already have one - and you're still relying on abstractions over concretions.
Internally, of course, you still have your SpecificState implementing both IGenericState and IDebugState, so there's only ever one instance - but that's up to your IoC container, none of your dependent classes need know about it.
I'd highly recommend reading Ninject's walkthrough of dependency injection (be sure to read through the entire tutorial). I know this may seem like a strange recommendation given your question; however, I think this will save you a lot of time in the long run and keep your code cleaner.
Your debug code seems to depend on SpecificState; therefore, I would expect that your debug menu items would ask the DI container for their dependencies, or a provider that can return the dependency or null. If you're already working on refactoring to include DI, then providing your debug menu items with the proper internal bits of your application as dependencies (via the DI container) seems to be an appropriate way to achieve that without breaking solid design principles. So, for instance:
public sealed class DebugMenuItem : ToolStripMenuItem
{
private SpecificStateProvider _prov;
public DebugMenuItem(SpecificStateProvider prov) : base("Debug Item")
{
_prov = prov;
}
// other stuff here
protected override void OnClick(EventArgs e)
{
base.OnClick(e);
SpecificState state = _prov.GetState();
if(state != null)
state.MessWithStuff();
}
}
This assumes that an instance of SpecificState isn't always available, and needs to be accessed through a provider that may return null. By the way, this technique does have the added benefit of fewer event handlers in your form.
As an aside, I'd recommend against violating design principles for the sake of debugging, and have your debug "muck with stuff" methods interact with your internal classes the same way any other piece of code must - by its interface "contract". You'll save yourself a headache =)
I'd be inclined to look at dependency injection and decorators for relatively large apps, as FMM has suggested, but for smaller apps you could make a relatively easy extension to your existing code.
I assume that you push an instance of Logic down to the parts of your app somehow - either through static classes or fields, or by passing it into the constructor.
I would then extend Logic with this interface:
public interface ILogicDebugger
{
IDisposable PublishDebugger<T>(T debugger);
T GetFirstOrDefaultDebugger<T>();
IEnumerable<T> GetAllDebuggers<T>();
void CallDebuggers<T>(Action<T> call);
}
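Here is a minimal sketch of how Logic might implement that registry before we get to the usage; this is an assumed implementation (a plain list plus a disposable subscription), not the only way to do it:

using System;
using System.Collections.Generic;
using System.Linq;

public class Logic : ILogicDebugger
{
    private readonly List<object> debuggers = new List<object>();

    public IDisposable PublishDebugger<T>(T debugger)
    {
        debuggers.Add(debugger);
        // Disposing the subscription removes the debugger again.
        return new Subscription(() => debuggers.Remove(debugger));
    }

    public T GetFirstOrDefaultDebugger<T>()
    {
        return debuggers.OfType<T>().FirstOrDefault();
    }

    public IEnumerable<T> GetAllDebuggers<T>()
    {
        return debuggers.OfType<T>().ToList();
    }

    public void CallDebuggers<T>(Action<T> call)
    {
        foreach (var debugger in GetAllDebuggers<T>())
            call(debugger);
    }

    private sealed class Subscription : IDisposable
    {
        private readonly Action onDispose;
        public Subscription(Action onDispose) { this.onDispose = onDispose; }
        public void Dispose() { onDispose(); }
    }
}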
Then deep down inside your code some class that you want to debug would call this code:
var subscription =
logic.PublishDebugger(new MessWithStuffHere(/* with params */));
Now in your top-level code you can call something like this:
var debugger = logic.GetFirstOrDefaultDebugger<MessWithStuffHere>();
if (debugger != null)
{
debugger.Execute();
}
A shorter way to call methods on your debug class would be to use CallDebuggers like this:
logic.CallDebuggers<MessWithStuffHere>(x => x.Execute());
Back, deep down in your code, when your class that you're debugging is about to go out of scope, you would call this code to remove its debugger:
subscription.Dispose();
Does that work for you?
Right now I’m working on a very big banking solution developed in VB6. The application is massively form-based and lacks a layered architecture (all the code for data access, business logic and form manipulation is in the single form class). My job is now to refactor this code. I'm writing a proper business logic layer and data access layer in C# and the form will remain in VB.
Here are code snippets:
public class DistrictDAO
{
public string Id{get;set;}
public string DistrictName { get; set; }
public string CountryId { get; set; }
public DateTime SetDate { get; set; }
public string UserName { get; set; }
public char StatusFlag { get; set; }
}
This is the District entity class; I'm not clear why its suffix is DAO.
public class DistrictGateway
{
#region private variable
private DatabaseManager _databaseManager;
#endregion
#region Constructor
public DistrictGateway(DatabaseManager databaseManager) {
_databaseManager = databaseManager;
}
#endregion
#region private methods
private void SetDistrictToList(List<DistrictDAO> dataTable, int index, DistrictDAO district){
// here is some code for inserting
}
#endregion
#region public methods
public List<DistrictDAO> GetDistrict() {
try
{
/*
query and rest of the code
*/
}
catch (SqlException sqlException)
{
Console.WriteLine(sqlException.Message);
throw;
}
catch (FormatException formatException)
{
Console.WriteLine(formatException.Message);
throw;
}
finally {
_databaseManager.ConnectToDatabase();
}
}
public void InsertDistrict() {
// all query to insert object
}
public void UpdateDistrict() {
}
#endregion
}
The DistrictGateway class is responsible for database query handling.
Now the business layer.
public class District
{
public string Id { get; set; }
public string DistrictName { get; set; }
public string CountryId { get; set; }
}
public class DistrictManager
{
#region private variable
private DatabaseManager _databaseManager;
private DistrictGateway _districtGateway;
#endregion
#region Constructor
public DistrictManager() {
// Instantiate the private variables using utility classes
}
#endregion
#region private method
private District TransformDistrictBLLToDL(DistrictDAO districtDAO) {
// return converted district with lots of coding here
}
private DistrictDAO TransformDistrictDLToBLL(District district)
{
// return converted DistrictDAO with lots of coding here
}
private List<District> TransformDistrictBLLToDL(List<DistrictDAO> districtDAOList)
{
// return converted district with lots of coding here
}
private List<DistrictDAO> TransformDistrictDLToBLL(List<District> district)
{
// return converted DistrictDAO with lots of coding here
}
#endregion
#region public methods
public List<District> GetDistrict() {
try
{
_databaseManager.ConnectToDatabase();
return TransformDistrictBLLToDL( _districtGateway.GetDistrict());
}
catch (SqlException sqlException)
{
Console.WriteLine(sqlException.Message);
throw;
}
catch (FormatException formatException)
{
Console.WriteLine(formatException.Message);
throw;
}
}
finally {
_databaseManager.ConnectToDatabase();
}
}
#endregion
}
This is the code for the business layer.
My questions are:
Is it a perfect design?
If not, what are flaws here?
I think this code has duplicated try-catch blocks.
What would be a good design for this implementation?
Perfect? No such thing. If you have to ask here, it's probably wrong. And even if it's "perfect" right now, it won't be once time and entropy get ahold of it.
The measure of how well you did will come when it's time to extend it. If your changes slide right in, you did well. If you feel like you're fighting legacy code to add changes, figure out what you did wrong and refactor it.
Flaws? It's hard to tell. I don't have the energy, time, or motivation to dig very deeply right now.
Can't figure out what you mean by #3.
The typical layering would look like this, with the arrows showing dependencies:
view <- controller -> service +-> model <- persistence (service knows about persistence)
There are cross-cutting concerns for each layer:
view knows about presentation, styling, and localization. It does whatever validation is possible to improve user experience, but doesn't include business rules.
controller is intimately tied to view. It cares about binding and validation of requests from view, routing to the appropriate service, error handling, and routing to the next view. That's it. The business logic belongs in the service, because you want it to be the same for web, tablet, mobile, etc.
service is where the business logic lives. It worries about validation according to business rules and collaborating with model and persistence layers to fulfill use cases. It knows about use cases, units of work, and transactions.
model objects can be value objects if you prefer a more functional style or be given richer business logic if you're so inclined.
persistence isolates all database interactions.
You can consider cross-cutting concerns like security, transactions, monitoring, logging, etc. as aspects if you use a framework like Spring that includes aspect-oriented programming.
Though you aren't really asking a specific question here, it seems you may just need some general guidance to get going on the right path. Since we don't have the in-depth view of the application as a whole that you do, it's hard to suggest a single methodology for you.
n-tier architecture seems to be a popular topic recently, and it sparked me to write a blog series on it. Check these SO questions and blog posts; I think they will help you greatly.
Implement a Save method for my object
When Building an N-Tier application, how should I organize my names spaces?
Blog Series on N-Tier Architecture (with example code)
http://www.dcomproductions.com/blog/2011/09/n-tier-architecture-best-practices-part-1-overview/
For a big project I would recommend the MVVM pattern so you will be able to test your code fully, and later it will be much easier to extend it or change parts of it. You will even be able to change the UI without changing the code in the other layers.
If your job is to refactor the code, then first of all ask your boss whether you really, really should just refactor it, or whether you should add functionality to it. In both cases you need an automated test harness around that code. If you are lucky and you get to add functionality, then you at least have a starting point and a goal. Otherwise you will have to pick the starting point by yourself and have no goal. You can refactor code endlessly; that can be quite frustrating without a goal.
Refactoring code without tests is a recipe for disaster. Refactoring code means improving its structure without changing its behavior. If you do not have any tests, you cannot be sure that you did not break something. Since you need to test regularly and a lot, these tests must be automated. Otherwise you spend too much time on manual testing.
Legacy code is hard to press into some test harness. You will need to modify it in order to get it testable. Your effort to wrap tests around the code will implicitly lead to some layered structure of code.
Now there is a chicken-and-egg problem: you need to refactor the code in order to test it, but you have no tests right now. The answer is to start with "defensive" refactoring techniques and do manual testing. You can find more details about these techniques in Michael Feathers' book Working Effectively with Legacy Code. If you need to refactor a lot of legacy code, you should really read it. It is a real eye opener.
To your questions:
There is no perfect design. There are only potentially better ones.
If the application does not have any unit tests, then that is the biggest flaw. Introduce tests first. On the other hand, those code snippets are not that bad at all. It seems that DistrictDAO is something like the technical version of District; maybe there was some attempt to introduce a domain model. And at least DistrictGateway gets the DatabaseManager injected as a constructor parameter. I have seen worse.
Yes, the try-catch blocks can be seen as code duplication, but that is nothing unusual. You can try to reduce the catch clauses with a sensible choice of Exception classes, or you can use delegates (see the sketch after this list) or some AOP techniques, though that will make the code less readable. For more see this other question.
Fit the legacy code into some test harness. A better design will implicitly emerge.
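Regarding point 3, here is a small sketch of the delegate idea, reusing the names from the question's DistrictManager (illustration only; a finally block could still handle connection cleanup):

// The duplicated try/catch lives in one helper; each public method only
// supplies the actual work as a delegate.
private T Execute<T>(Func<T> action)
{
    try
    {
        _databaseManager.ConnectToDatabase();
        return action();
    }
    catch (SqlException sqlException)
    {
        Console.WriteLine(sqlException.Message);
        throw;
    }
    catch (FormatException formatException)
    {
        Console.WriteLine(formatException.Message);
        throw;
    }
}

public List<District> GetDistrict()
{
    return Execute(() => TransformDistrictBLLToDL(_districtGateway.GetDistrict()));
}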
Anyway: first of all, clarify what your boss means by refactoring the code. Just refactoring code without a goal is not productive and will not make the boss happy.
Assuming I have a list of financial transactions, I need to execute a list of validation rules against those transactions. For example, I have a transaction to purchase a product; however, first I need to validate that the account in the transaction has enough available funds, that the product is not sold out, etc. As a result of these rules the transaction will be marked as rejected, and an error code should be specified.
Naturally I am thinking towards fronting my rules with an interface, allowing the executing code to roll through the rules executing each one until the first one rejects the transaction.
Each rule will need to be configured with parameters (e.g. ValidateMinimumBalance will need to know that minimumBalance = 30). The result of a rule executing can be as simple as setting the rejection code and error code on the transaction object, or it can be as complicated as automatically modifying multiple other properties of the transaction.
My basic understanding of design patterns points to me either Strategy or Command patterns, but I am not entirely sure which one is better suited for this scenario.
Command Pattern
Each command will implement some sort of IValidate interface
The constructor of the command will take an instance of the transaction as the receiver in order to be able to read/validate the transaction as well as modify aspects of it. The constructor will also take an array of key/value pairs as parameters for the validation logic.
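For illustration, a rough sketch of one rule written as such a command; Transaction, its properties, and the error code below are hypothetical placeholders:

public interface IValidate
{
    // Runs the rule against the transaction supplied at construction time.
    void Execute();
}

public class ValidateMinimumBalance : IValidate
{
    private readonly Transaction transaction;
    private readonly decimal minimumBalance;

    public ValidateMinimumBalance(Transaction transaction, IDictionary<string, string> parameters)
    {
        this.transaction = transaction;
        // e.g. parameters["minimumBalance"] = "30"
        this.minimumBalance = decimal.Parse(parameters["minimumBalance"]);
    }

    public void Execute()
    {
        if (transaction.AvailableFunds < minimumBalance)
        {
            transaction.Rejected = true;
            transaction.ErrorCode = "InsufficientFunds";
        }
    }
}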
When I try to picture how the Strategy Pattern fits this scenario it looks very similar. In most examples the strategy is a simple object with a single method, however in my case the strategy will need a reference to the transaction as well as validation parameters.
Strategy is used more to swap out algorithms; it's not really used for chaining validations. If you have one validation per type, then you could use Strategy; but if you find yourself having to use multiple validators, or needing to reuse validators, I think you will have to either find another way to do it (namely Chain of Responsibility) or use CoR within your strategy.
I would actually answer "other": I think a combination of the Chain of Responsibility pattern and the Composite pattern, or a Decorator for the validators, is much better suited to your needs.
I'm typing up an example implementation now, but at a high level:
Chain of Responsibility
The design would revolve around something like:
// Minimal placeholder for the request/transaction being validated
class Request { }
abstract class Handler
{
protected Handler next;
public Handler(Handler h){
this.next = h;
}
public abstract bool Validate(Request request);
public abstract void Handle(Request request);
}
class CoreLogic: Handler
{
public CoreLogic(Handler handle) : base(handle){
}
public override bool Validate(Request request){
return true;
}
public override void Handle(Request request){
if(this.Validate(request)){
if(next!= null){
next.Handle(request);
}
}
}
}
class ValidateBalance : Handler
{
public ValidateBalance(Handler handle) : base(handle){
}
public override bool Validate(Request request){
return true;
}
public override void Handle(Request request){
if(this.Validate(request)){
if(next!= null){
next.Handle(request);
}
}
}
}
class MainApp
{
static void Main(){
Handler h = new ValidateBalance( new CoreLogic(null));
h.Handle(new Request());
}
}
Other useful links:
Chain of Responsibility (Wikipedia)
A Strategy would be something used to 'parameterize' a Command (telling it how parts of the operation should be executed).
When I try to picture how the Strategy Pattern fits this scenario it looks very similar.
Similar? It should look identical.
The distinction is one of how the context and delegation works. In principle a Command is the "active" agent. A Strategy is injected into some active agent. That distinction is pretty subtle.
It barely changes the design. What does change is the expectation.
Command objects (more-or-less) stand alone. They're built to do their work, and then they can vanish. No one cares about them any more. Perhaps they also use the Memento pattern, and have some future life, but perhaps not.
Strategy objects (more-or-less) live with the object into which they're injected. A Strategy would be part of some larger object, and could be replaced by a different implementation without breaking or changing anything else.
But the essential interface is largely the same.
In most examples the strategy is a simple object with a single method,
Those are poor examples.
however in my case the strategy will need a reference to the transaction as well as validation parameters.
Not unusual. Nothing wrong with it.
but I am not entirely sure which one is better suited for this scenario
Neither :)
I strongly recommend looking at the Interpreter pattern. Your validator rules are really just predicates formulated over your transactions, and it's quite possible that you will soon need to combine these rules with AND, OR, NOT, etc.
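As a minimal sketch of that idea (Transaction and AccountBalance are hypothetical placeholders), each rule is a predicate over a transaction and composite rules combine them:

public interface IRule
{
    bool IsSatisfiedBy(Transaction transaction);
}

public class MinimumBalanceRule : IRule
{
    private readonly decimal minimumBalance;
    public MinimumBalanceRule(decimal minimumBalance) { this.minimumBalance = minimumBalance; }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return transaction.AccountBalance >= minimumBalance;
    }
}

public class AndRule : IRule
{
    private readonly IRule left;
    private readonly IRule right;
    public AndRule(IRule left, IRule right) { this.left = left; this.right = right; }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return left.IsSatisfiedBy(transaction) && right.IsSatisfiedBy(transaction);
    }
}

public class NotRule : IRule
{
    private readonly IRule inner;
    public NotRule(IRule inner) { this.inner = inner; }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return !inner.IsSatisfiedBy(transaction);
    }
}

// An OrRule would look just like AndRule with || instead of &&.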