Right now I’m working on a very big banking solution developed in VB6. The application is massively form-based and lacks a layered architecture (all the code for data access, business logic and form manipulation is in the single form class). My job is now to refactor this code. I'm writing a proper business logic layer and data access layer in C# and the form will remain in VB.
Here are code snippets:
public class DistrictDAO
{
public string Id { get; set; }
public string DistrictName { get; set; }
public string CountryId { get; set; }
public DateTime SetDate { get; set; }
public string UserName { get; set; }
public char StatusFlag { get; set; }
}
This is the District entity class; I'm not clear why it's suffixed with DAO.
public class DistrictGateway
{
#region private variable
private DatabaseManager _databaseManager;
#endregion
#region Constructor
public DistrictGateway(DatabaseManager databaseManager) {
_databaseManager = databaseManager;
}
#endregion
#region private methods
private void SetDistrictToList(List<DistrictDAO> dataTable, int index, DistrictDAO district){
// here is some code for inserting
}
#endregion
#region public methods
public List<DistrictDAO> GetDistrict() {
try
{
/*
query and rest of the codes
*/
}
catch (SqlException sqlException)
{
Console.WriteLine(sqlException.Message);
throw;
}
catch (FormatException formatException)
{
Console.WriteLine(formatException.Message);
throw;
}
finally {
_databaseManager.ConnectToDatabase();
}
}
public void InsertDistrict() {
// all query to insert object
}
public void UpdateDistrict() {
}
#endregion
}
The DistrictGateway class is responsible for handling database queries.
Now the business layer.
public class District
{
public string Id { get; set; }
public string DistrictName { get; set; }
public string CountryId { get; set; }
}
public class DistrictManager
{
#region private variable
private DatabaseManager _databaseManager;
private DistrictGateway _districtGateway;
#endregion
#region Constructor
public DistrictManager() {
// Instantiate the private variables using utility classes
}
#endregion
#region private method
private District TransformDistrictBLLToDL(DistrictDAO districtDAO) {
// return converted district with lots of coding here
}
private DistrictDAO TransformDistrictDLToBLL(District district)
{
// return converted DistrictDAO with lots of coding here
}
private List<District> TransformDistrictBLLToDL(List<DistrictDAO> districtDAOList)
{
// return converted district with lots of coding here
}
private List<DistrictDAO> TransformDistrictDLToBLL(List<District> district)
{
// return converted DistrictDAO with lots of coding here
}
#endregion
#region public methods
public List<District> GetDistrict() {
try
{
_databaseManager.ConnectToDatabase();
return TransformDistrictBLLToDL( _districtGateway.GetDistrict());
}
catch (SqlException sqlException)
{
Console.WriteLine(sqlException.Message);
throw;
}
catch (FormatException formatException)
{
Console.WriteLine(formatException.Message);
throw;
}
finally {
_databaseManager.ConnectToDatabase();
}
}
#endregion
}
This is the code for the business layer.
My questions are:
1. Is it a perfect design?
2. If not, what are the flaws here?
3. I think this code has duplicated try-catch blocks.
4. What would be a good design for this implementation?
Perfect? No such thing. If you have to ask here, it's probably wrong. And even if it's "perfect" right now, it won't be once time and entropy get ahold of it.
The measure of how well you did will come when it's time to extend it. If your changes slide right in, you did well. If you feel like you're fighting legacy code to add changes, figure out what you did wrong and refactor it.
Flaws? It's hard to tell. I don't have the energy, time, or motivation to dig very deeply right now.
I can't quite figure out what you mean by #3.
The typical layering would look like this, with the arrows showing dependencies:
view <- controller -> service -> model <- persistence (and service knows about persistence)
There are cross-cutting concerns for each layer:
view knows about presentation, styling, and localization. It does whatever validation is possible to improve user experience, but doesn't include business rules.
controller is intimately tied to view. It cares about binding and validation of requests from view, routing to the appropriate service, error handling, and routing to the next view. That's it. The business logic belongs in the service, because you want it to be the same for web, tablet, mobile, etc.
service is where the business logic lives. It worries about validation according to business rules and collaborating with model and persistence layers to fulfill use cases. It knows about use cases, units of work, and transactions.
model objects can be value objects if you prefer a more functional style or be given richer business logic if you're so inclined.
persistence isolates all database interactions.
You can consider cross-cutting concerns like security, transactions, monitoring, logging, etc. as aspects if you use a framework like Spring that includes aspect-oriented programming.
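For illustration, the layering above might be sketched in C# like this (every type name here is invented for the example, not taken from the original post):

```csharp
using System;

public class Customer                   // model: a plain value object
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository    // persistence: isolates all database access
{
    Customer FindById(int id);
}

public class CustomerService            // service: business rules and use cases
{
    private readonly ICustomerRepository _repository;
    public CustomerService(ICustomerRepository repository) { _repository = repository; }

    public Customer GetCustomer(int id)
    {
        if (id <= 0) throw new ArgumentOutOfRangeException(nameof(id)); // business validation
        return _repository.FindById(id);
    }
}

public class CustomerController         // controller: routes requests, no business logic
{
    private readonly CustomerService _service;
    public CustomerController(CustomerService service) { _service = service; }

    public string Show(int id) => _service.GetCustomer(id)?.Name ?? "(not found)";
}
```

The view would bind to the controller's output; swapping the repository implementation (real database vs. in-memory fake) changes nothing above the persistence layer, which is what makes the service testable.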
You aren't really asking a specific question here, though; it seems you may just need some general guidance to get going on the right path. Since we don't have the in-depth view of the application as a whole that you do, it would be odd for us to suggest a single methodology.
n-tier architecture seems to be a popular topic recently, and it sparked me to write a blog series on it. Check these SO questions and blog posts; I think they will help you greatly.
Implement a Save method for my object
When Building an N-Tier application, how should I organize my names spaces?
Blog Series on N-Tier Architecture (with example code)
http://www.dcomproductions.com/blog/2011/09/n-tier-architecture-best-practices-part-1-overview/
For a big project I would recommend the MVVM pattern, so you will be able to test your code fully, and later it will be much easier to extend it or change parts of it. You will even be able to change the UI without changing the code in the other layers.
If your job is to refactor the code, then first of all ask your boss whether you really should just refactor it, or rather add functionality to it. In both cases you need an automated test harness around that code. If you are lucky and are supposed to add functionality, then you at least have a starting point and a goal. Otherwise you will have to pick the starting point yourself and will not have a goal. You can refactor code endlessly, and that can be quite frustrating without a goal.
Refactoring code without tests is a recipe for disaster. Refactoring code means improving its structure without changing its behavior; if you do not have any tests, you cannot be sure that you did not break something. Since you need to test regularly and a lot, these tests must be automated. Otherwise you spend too much time on manual testing.
Legacy code is hard to press into a test harness. You will need to modify it in order to make it testable, and your effort to wrap tests around the code will implicitly lead to a more layered structure.
Now there is a chicken-and-egg problem: you need to refactor the code in order to test it, but you have no tests right now. The answer is to start with "defensive" refactoring techniques and do manual testing. You can find more details about these techniques in Michael Feathers' book Working Effectively with Legacy Code. If you need to refactor a lot of legacy code, you should really read it; it is a real eye-opener.
To your questions:
There is no perfect design. There are only potentially better ones.
If the application does not have any unit tests, then that is the biggest flaw. Introduce tests first. On the other hand, those code snippets are not that bad at all. It seems that DistrictDAO is something like the technical version of District; maybe there was an attempt to introduce a domain model. And at least DistrictGateway gets the DatabaseManager injected as a constructor parameter. I have seen worse.
Yes, the try-catch blocks can be seen as code duplication, but that is nothing unusual. You can try to reduce the catch clauses with a sensible choice of exception classes, or you can use delegates or some AOP techniques, though those will make the code less readable. For more, see this other question.
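The delegate option can be sketched like this: one wrapper owns the try/catch/finally, and each gateway method passes in only its real work. This is a hedged sketch; GatewayExecutor and the cleanup delegate are invented names, and SqlException is left out so the snippet stays self-contained (it would be caught the same way as FormatException here).

```csharp
using System;

// One possible shape for factoring out the duplicated try/catch/finally blocks.
public class GatewayExecutor
{
    public T Execute<T>(Func<T> query, Action cleanup)
    {
        try
        {
            return query();                         // the caller's real work
        }
        catch (FormatException formatException)     // SqlException would get a sibling clause
        {
            Console.WriteLine(formatException.Message);
            throw;                                  // preserve the original behavior: log, then rethrow
        }
        finally
        {
            cleanup();                              // e.g. close/return the connection
        }
    }
}
```

A gateway method then shrinks to something like `return _executor.Execute(() => RunDistrictQuery(), () => _databaseManager.Disconnect());` (hypothetical method names). The trade-off the answer mentions is real: the control flow is now hidden behind a delegate, which some readers find harder to follow.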
Fit the legacy code into some test harness. A better design will implicitly emerge.
Anyway: first of all, clarify what your boss means by refactoring the code. Just refactoring without a goal is not productive and will not make the boss happy.
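As a concrete illustration of pinning down current behavior before refactoring, here is a minimal characterization-test sketch. LegacyTaxCalculator and its 15% rule are invented stand-ins for real legacy code, not anything from the original post:

```csharp
using System;

// Stand-in for some untested legacy code whose behavior we must not change.
public class LegacyTaxCalculator
{
    public decimal Calculate(decimal amount) => Math.Round(amount * 0.15m, 2);
}

public static class CharacterizationTest
{
    public static bool Run()
    {
        var calc = new LegacyTaxCalculator();
        // 15.00 was copied from what the code does TODAY, not from a spec.
        // Even if the current behavior turns out to be wrong, the test's job is
        // to fail loudly when a refactoring changes it.
        return calc.Calculate(100m) == 15.00m;
    }
}
```

The point of a characterization test is exactly this inversion: the expected value documents the status quo, so you can refactor the structure while proving the behavior stayed fixed.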
Related
I have a concrete idea of a structure, but I can't identify any pattern in it. So I guess I'm doing something that I should avoid and should do in a different way.
My application will control multiple devices (all of the same type) which have multiple communication interfaces and multiple sensors (it's a simplified example to demonstrate the concept!).
Now, below you can find the example code. Let's focus only on the "Device" class. It is a kind of man-in-the-middle that doesn't provide any functionality of its own but only composes other classes.
This sounds to me like a "Facade". But the difference is that a facade holds the other classes as private instances and provides functions, whereas in my example I declare the composed instances as public to let the user access them directly.
Achievement:
The (in my real case high) number of services provided by the "Device" gets split into specific topics (here e.g. "CommunicationServices" and "MeasurementServices"). This should help the user gain a better orientation in the code.
So, is there a pattern (which I simply can't identify) that the implementation below represents?
Or would this still be called a "Facade"?
class Application
{
List<IDevice> _listOfDevices = new List<IDevice>();
readonly Device.Factory _deviceFactory;
Application(Device.Factory df)
{
_deviceFactory = df;
}
void DoSomething()
{
// e.g. instantiate 2 devices
_listOfDevices.Add(_deviceFactory());
_listOfDevices.Add(_deviceFactory());
foreach(IDevice device in _listOfDevices)
{
int temperature = device.MeasurementServices.TemperatureSensor.ReadTemperature();
device.CommunicationServices.Wifi.SendMessage(temperature);
//... and so on
}
}
}
public class Device : IDevice
{
public delegate Device Factory();
public ICommunicationServices CommunicationServices { get; }
public IMeasurementServices MeasurementServices { get; }
public Device (ICommunicationServices comServices, IMeasurementServices measurementServices)
{
CommunicationServices = comServices;
MeasurementServices = measurementServices;
}
}
public class CommunicationServices : ICommunicationServices
{
public IBluetooth Bluetooth { get; }
public IWifi Wifi { get; }
public ISerial Serial { get; }
// ... more interfaces
public CommunicationServices(IBluetooth bt, IWifi wf, ISerial sr)
{
Bluetooth = bt;
Wifi = wf;
Serial = sr;
}
}
public class MeasurementServices : IMeasurementServices
{
public ITemperatureSensor TemperatureSensor { get; }
public IHumiditySensor HumiditySensor { get; }
// ... more sensors
public MeasurementServices (ITemperatureSensor ts, IHumiditySensor hs)
{
TemperatureSensor = ts;
HumiditySensor = hs;
}
}
Added after receiving the first input:
Mark wrote: "The hierarchy that makes sense to you may not fit the mental model that other people have".
Well, this is always going to be a problem; coming up with a data structure on a data server that everyone is satisfied with is simply impossible.
So is the alternative to define an accessor for each piece of the device's data in the device's interface?
That would be for example:
// Interface that is going to have a huge number of accessors...
public interface IDevice
{
string GetSsid();
void SetSsid(string ssid);
int GetLoggerInterval();
void SetLoggerInterval(int interval_ms);
// ...
}
Talking in terms of a hierarchy:
SSID is part of "CommunicationServices => Wifi => Settings"
Interval is part of "MeasurementServices => Logger => Settings"
The issue I'm concerned about is represented by this very example: the two pieces of data, "SSID" and "Interval", belong to very different topics but would appear right next to each other. That doesn't make the code easy to learn either.
Or what other approaches are out there to deal with this "train wreck" vs. "single huge interface" trade-off? Maybe a mix of both (which would be an inconsistent solution)?
Even if you have a clear vision of the structure of the code, it doesn't have to be a design pattern. Some code is just code, and some common code structures are rather antipatterns (or code smells) than patterns.
I agree that this doesn't look like a Facade. If anything, it looks more like a Train Wreck - a code smell. Train Wrecks violate the Law of Demeter. This 'law', however, is controversial:
"I'd prefer it to be called the Occasionally Useful Suggestion of Demeter."
― Martin Fowler
Over the years, I've come to increasingly agree with Martin Fowler that this 'law' may not be all that. The OP, however, asks whether the proposed design fits a particular design pattern. I don't think that it does, but I take the liberty to expand the topic slightly to also include various named design principles.
Whether or not you consider the Law of Demeter a proper design principle, I would challenge that the proposed design meets the stated objective:
This should help the user to gain a better orientation over the code.
I would argue that it does the opposite. The design makes it harder to learn and use the code.
This question is about C# code, and the way that most C# developers interact with an unfamiliar library is via IntelliSense. Given an object device of type Device, they'd typically start typing a dot (.) after device to see what options they have. IntelliSense will give them a GUI (an advanced drop-down control) that enumerates the instance members of Device. (Phil Trelford calls this dot-driven development).
So if you type device. (notice the trailing dot), you'll be presented with a list of other objects:
CommunicationServices
MeasurementServices
etc.
When you're 'dotting into' an object, you're typically looking for some behaviour - a method to invoke. None of the sub-objects are methods, so you're essentially guaranteed that the first dot never produces a useful member.
Users will have to 'dot into' one of the sub-objects to see if the behaviour they're looking for is there. They may, for example, 'dot into' CommunicationServices and type another dot to see if the behaviour they're looking for should hang off of it. If it doesn't, they have to delete the CommunicationServices property access that the IDE just 'helpfully' created for them, and try the next one.
I've worked with APIs like that, and I understand that they're supposed to be helpful, but they're not - they're exasperating.
You should be wary of introducing hierarchies to help people. There's rarely only one single way to model a given problem domain as a hierarchy, and the hierarchy that makes sense to you may not fit the mental model that other people have.
It'd be more helpful to users to present all members directly on Device so that programmers need only 'dot' once.
If you feel that you have too many members on Device this might be another code smell, but I can't tell from the OP.
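A sketch of what "presenting all members directly on Device" might look like; every member name here is invented for illustration:

```csharp
// Flattened alternative: behaviour hangs directly off IDevice, so the first
// 'dot' in IntelliSense already shows methods to invoke.
public interface IDevice
{
    int ReadTemperature();
    void SendWifiMessage(int payload);
    void SendBluetoothMessage(int payload);
}

// Internally, an implementation can still delegate to whatever sub-objects it
// likes; the hierarchy just stops being part of the public API.
public class FakeDevice : IDevice
{
    public int ReadTemperature() => 21;                       // canned reading for the sketch
    public void SendWifiMessage(int payload) { /* transmit via wifi */ }
    public void SendBluetoothMessage(int payload) { /* transmit via bluetooth */ }
}
```

The grouping into communication vs. measurement concerns becomes an implementation detail rather than something every caller has to navigate.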
I have a class which can perform many analyses on a given object and return sets of results:
public class AnalyserClass
{
private SomeObject _someObject;
public AnalyserClass(SomeObject someobject)
{
_someObject = someobject;
}
public IEnumerable<Result> DoA()
{
//checks A on someObject and returns some results
}
public IEnumerable<Result> DoB()
{
//checks B on someObject and returns some results
}
//etc
}
public class Result
{
//various properties with result information
}
public class SomeObject
{
//this is the object which is analysed
}
I would like to expose these actions (DoA, DoB etc) in a CheckedListBox in a WinForm. The user would then tick the actions s/he wants performed and would then click on a Run button.
I would ideally like exposing the actions to be dynamic - so, if I develop a new action within my AnalyserClass, it will automatically show up and be executable from the WinForm without any code changes anywhere else.
I am a fairly new C# programmer. I have been researching how best to structure this and I have become a little bit confused between various patterns and which one would be most appropriate to use.
First of all I read up on the MVVM pattern, but this seems to be more complicated than is required here and I don't understand what the Model would be.
Then I looked at the Command pattern. But from what I understand, I would have to create a class wrapper for every single action (there are lots of them), which would be quite time-consuming and a bit cumbersome (code changes in multiple places, so not 'dynamic'). I also don't understand how I could build the list of checkboxes from the command classes. This does seem to be the most appropriate pattern I could find, but I am uncertain about it because of my lack of experience.
Your guidance is much appreciated.
I would not choose reflection here, because it makes things unnecessarily complicated.
Furthermore, with your current approach you would need to extend your AnalyserClass with new functionality every time you need a new analyzer tool, and that:
breaks the "open-closed" principle of SOLID,
breaks the "single responsibility" principle of SOLID,
makes your class too large and pretty unmaintainable.
I would introduce in your AnalyserClass a collection of supported actions:
class AnalyserClass
{
public IEnumerable<IAnalyzer> Analyzers { get; private set; }
}
...where the IAnalyzer interface describes your actions:
interface IAnalyzer
{
string Description { get; } // this is what user will see as the action name
Result Perform(SomeObject input);
}
Then you can implement the IAnalyzer in various classes as needed, even in different modules etc.
The only open point is how to add all the IAnalyzer instances to your AnalyserClass.Analyzers collection.
Well:
you can use a DI framework (e.g. MEF) and let it discover all the things automatically,
you can inject them manually via DI,
you can use Reflection and scan the types manually,
you can add them manually, e.g. in the constructor of the AnalyserClass (simple but not recommended),
and so on...
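For illustration, the "use Reflection and scan the types manually" option could look like the sketch below. The IAnalyzer interface follows the one above; LengthAnalyzer is a hypothetical example analyzer, and SomeObject/Result are stubbed so the snippet is self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class SomeObject { }
public class Result { }

public interface IAnalyzer
{
    string Description { get; }          // what the user sees as the action name
    Result Perform(SomeObject input);
}

// Hypothetical analyzer; any new class like this is picked up automatically.
public class LengthAnalyzer : IAnalyzer
{
    public string Description => "Length check";
    public Result Perform(SomeObject input) => new Result();
}

public static class AnalyzerDiscovery
{
    // Find every concrete IAnalyzer in the current assembly and instantiate it.
    public static IEnumerable<IAnalyzer> DiscoverAnalyzers()
    {
        return Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => typeof(IAnalyzer).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IAnalyzer)Activator.CreateInstance(t));
    }
}
```

The form can then fill its CheckedListBox from the descriptions, e.g. `foreach (var a in AnalyzerDiscovery.DiscoverAnalyzers()) checkedListBox.Items.Add(a.Description);`, so a newly written analyzer class shows up with no other code changes. (This assumes each analyzer has a parameterless constructor; a DI container removes that restriction.)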
I have written a Windows Forms application and now I want to write some unit tests for it (not exactly test-driven development, seeing as I am writing the tests after development, but better late than never!). My question is: with such an application, how do you go about writing unit tests, given that nearly all of the methods and events are private? I have heard of NUnit Forms, but I hear good and bad things about it, and there has been no real development on that project for a while, so it looks abandoned. Also, is it generally accepted that a project has adequate unit testing if I wrote unit tests for all of the events that a user would trigger by clicking or pressing buttons, or would I have to write unit tests for all methods and figure out a way to test my private methods?
EDIT: My business logic is separated from my presentation logic; there are one or two public methods my business logic exposes so the form can access them. But what about all the private methods in the business logic?
The key to unit testing graphical applications is to make sure that almost all of the business logic is in a separate class and not in the code-behind.
Design patterns like Model View Presenter and Model View Controller can help when designing such a system.
To give an example:
public partial class Form1 : Form, IMyView
{
MyPresenter Presenter;
public Form1()
{
InitializeComponent();
Presenter = new MyPresenter(this);
}
public string SomeData
{
get
{
throw new NotImplementedException();
}
set
{
MyTextBox.Text = value;
}
}
private void button1_Click(object sender, EventArgs e)
{
Presenter.ChangeData();
}
}
public interface IMyView
{
string SomeData { get; set; }
}
public class MyPresenter
{
private IMyView View { get; set; }
public MyPresenter(IMyView view)
{
View = view;
View.SomeData = "test string";
}
public void ChangeData()
{
View.SomeData = "Some changed data";
}
}
As you can see, the form only has some infrastructure code to tie everything together. All your logic is inside your Presenter class, which only knows about a view interface.
If you want to unit test this you can use a Mocking tool like Rhino Mocks to mock the View interface and pass that to your presenter.
[TestMethod]
public void TestChangeData()
{
IMyView view = MockRepository.GenerateDynamicMock<IMyView>();
view.Stub(v => v.SomeData).PropertyBehavior();
MyPresenter presenter = new MyPresenter(view);
presenter.ChangeData();
Assert.AreEqual("Some changed data", view.SomeData);
}
The first thing I would do is to ensure that you have proper separation of your business logic from your form, basically using an MVC pattern. Then you can easily test everything outside the form, as if the form didn't even exist.
Now, this could still leave some untested form-specific functionality (i.e., is the form wired up to the service correctly?). For this, you could still consider something like NUnit Forms or another alternative.
Break out all business logic into a separate project and unit test that. Or at least move all logic from the forms into separate classes.
You have a few options.
Use a tool like Coded UI to test via your user interface. This isn't a great option, because it's slower than unit testing and the tests tend to be more brittle.
Separate your business logic from your presentation logic. If you have a lot of private methods performing business logic in your UI, you've tightly coupled your business logic to your presentation. Start identifying these and moving them out to separate classes with public interfaces that you can test. Read up on SOLID principles, which can help you keep your code loosely coupled and testable.
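As a minimal sketch of that second option, here is a hypothetical business rule moved out of a private form handler into a plain, testable class (IDiscountCalculator and its 10% rule are invented for illustration):

```csharp
// Before: this rule lived inside a private button1_Click handler, so the only
// way to exercise it was to click the button. After extraction, it is a plain
// class behind a public interface.
public interface IDiscountCalculator
{
    decimal Apply(decimal price, bool isMember);
}

public class DiscountCalculator : IDiscountCalculator
{
    // Hypothetical rule: members get 10% off.
    public decimal Apply(decimal price, bool isMember)
        => isMember ? price * 0.9m : price;
}
```

The form handler then shrinks to a one-liner like `totalLabel.Text = _calculator.Apply(price, memberCheckBox.Checked).ToString();`, and the rule itself can be unit tested with no form involved.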
Unit testing the view is simple enough using ApprovalTests (www.approvaltests.com, or via NuGet). There is a video here: http://www.youtube.com/watch?v=hKeKBjoSfJ8
However, it also seems like you are worried about making a function internal or public purely for the purposes of being able to test its functionality.
These are usually referred to as seams: ways to get into your code for testing.
And they are good. Sometimes people confuse private/public with security and are afraid to turn a private function public, but reflection will call either, so visibility is not really a security boundary. Other times people worry about the API surface of a class, but that only matters if you have a public API, and a WinForms app is probably the top level (no other consumers are calling it).
You are the programmer, and as such you can design your code to be easy to test. This usually means little more than making a few methods public and creating a few convenience methods that allow dependencies to be passed in.
For example:
buttonclick += (o,e)=> {/*somecode*/};
is very hard to test.
private void button1_Click(object sender, EventArgs e) {/*somecode*/}
still hard to test
public void button1_Click(object sender, EventArgs e) {/*somecode*/}
easier to test
private void button1_Click(object sender, EventArgs e) { DoSave();}
public void DoSave(){/*somecode*/}
Really easy to Test!
This goes double if you need some information from the event. For example:
public void ZoomInto(int x, int y)
is much easier to test than the corresponding mouse-click event, and the pass-through call can still be a single ignorable line.
You can employ the MVVM (Model-View-ViewModel) pattern with ReactiveUI to author testable WinForms code and get the separation of concerns you really need. See: https://reactiveui.net/ The main downside of using WinForms with MVVM and ReactiveUI is that there are not many examples of its use for WinForms. The upside is that it applies to just about all desktop frameworks and languages: you learn it once, and the principles carry over everywhere. When you have lots of private methods, that's OK. IMHO, try to use public methods to begin a business process you want to test; you can apply tell-don't-ask (https://martinfowler.com/bliki/TellDontAsk.html) and still keep all those methods private.
You can also test the code by driving the UI, but this is not highly recommended, because the resulting tests are (1) very fragile and (2) harder to get working; (3) in my opinion they can't be written at the same level of fine granularity as pure code tests; (4) if you use a database, you will need to consider populating it with test data; and (5) because the database must be in a clean, well-defined state before each test, your tests may run even slower than you expected as you reinitialize the data for each one.
Summary: author your code with good SoC (e.g. by applying MVVM), and it will have far better testability.
I have some debugging functions that I would like to refactor, but seeing as they are debugging functions, it seems like they would be less likely to follow proper design. They pretty much reach into the depths of the app to mess with things.
The main form of my app has a menu containing the debug functions, and I catch the events in the form code. Currently, the methods ask for a particular object in the application, check whether it's null, and then mess with it. I'm trying to refactor so that I can remove the reference to this object everywhere and use an interface for it instead (the interface is shared by many other objects which have no relation to the debugging features).
As a simplified example, imagine I have this logic code:
public class Logic
{
public SpecificState SpecificState { get; private set; }
public IGenericState GenericState { get; private set; }
}
And this form code:
private void DebugMethod_Click(object sender, EventArgs e)
{
if (myLogic.SpecificState != null)
{
myLogic.SpecificState.MessWithStuff();
}
}
So I'm trying to get rid of the SpecificState reference. It's been eradicated from everywhere else in the app, but I can't think of how to rewrite the debug functions. Should they move their implementation into the Logic class? If so, what then? It would be a complete waste to put the many MessWithStuff methods into IGenericState as the other classes would all have empty implementations.
edit
Over the course of the application's life, many IGenericState instances come and go. It's a DFA / strategy pattern kind of thing. But only one implementation has debug functionality.
Aside: Is there another term for "debug" in this context, referring to test-only features? "Debug" usually just refers to the process of fixing things, so it's hard to search for this stuff.
Create a separate interface to hold the debug functions, such as:
public interface IDebugState
{
void ToggleDebugMode(bool enabled); // Or whatever your debug can do
}
You then have two choices, you can either inject IDebugState the same way you inject IGenericState, as in:
public class Logic
{
public IGenericState GenericState { get; private set; }
public IDebugState DebugState { get; private set; }
}
Or, if you're looking for a quicker solution, you can simply do an interface test in your debug-sensitive methods:
private void DebugMethod_Click(object sender, EventArgs e)
{
var debugState = myLogic.GenericState as IDebugState;
if (debugState != null)
debugState.ToggleDebugMode(true);
}
This conforms just fine with DI principles because you're not actually creating any dependency here, just testing to see if you already have one - and you're still relying on abstractions over concretions.
Internally, of course, you still have your SpecificState implementing both IGenericState and IDebugState, so there's only ever one instance - but that's up to your IoC container, none of your dependent classes need know about it.
I'd highly recommend reading Ninject's walkthrough of dependency injection (be sure to read through the entire tutorial). I know this may seem like a strange recommendation given your question; however, I think this will save you a lot of time in the long run and keep your code cleaner.
Your debug code seems to depend on SpecificState; therefore, I would expect that your debug menu items would ask the DI container for their dependencies, or a provider that can return the dependency or null. If you're already working on refactoring to include DI, then providing your debug menu items with the proper internal bits of your application as dependencies (via the DI container) seems to be an appropriate way to achieve that without breaking solid design principles. So, for instance:
public sealed class DebugMenuItem : ToolStripMenuItem
{
private SpecificStateProvider _prov;
public DebugMenuItem(SpecificStateProvider prov) : base("Debug Item")
{
_prov = prov;
}
// other stuff here
protected override void OnClick(EventArgs e)
{
base.OnClick(e);
SpecificState state = _prov.GetState();
if(state != null)
state.MessWithStuff();
}
}
This assumes that an instance of SpecificState isn't always available, and needs to be accessed through a provider that may return null. By the way, this technique does have the added benefit of fewer event handlers in your form.
As an aside, I'd recommend against violating design principles for the sake of debugging, and have your debug "muck with stuff" methods interact with your internal classes the same way any other piece of code must - by its interface "contract". You'll save yourself a headache =)
I'd be inclined to look at dependency injection and decorators for relatively large apps, as FMM has suggested, but for smaller apps you could make a relatively easy extension to your existing code.
I assume that you push an instance of Logic down to the parts of your app somehow, either through static classes or fields or by passing it into the constructor.
I would then extend Logic with this interface:
public interface ILogicDebugger
{
IDisposable PublishDebugger<T>(T debugger);
T GetFirstOrDefaultDebugger<T>();
IEnumerable<T> GetAllDebuggers<T>();
void CallDebuggers<T>(Action<T> call);
}
Then deep down inside your code some class that you want to debug would call this code:
var subscription =
logic.PublishDebugger(new MessWithStuffHere(/* with params */));
Now in your top-level code you can call something like this:
var debugger = logic.GetFirstOrDefaultDebugger<MessWithStuffHere>();
if (debugger != null)
{
debugger.Execute();
}
A shorter way to call methods on your debug class would be to use CallDebuggers like this:
logic.CallDebuggers<MessWithStuffHere>(x => x.Execute());
Back, deep down in your code, when your class that you're debugging is about to go out of scope, you would call this code to remove its debugger:
subscription.Dispose();
Does that work for you?
I've come across a dilemma which I think is worth discussing here.
I have a set of domain objects (you can also call them entities, if you like), which get some data from a separate DAL which is resolved with an IoC.
I was thinking about making my system very extensible, and I'm wondering whether it is right to also resolve these entities with the IoC container.
Let me present a dumb example.
Let's say I have a web site for which I have the following interface:
public interface IArticleData
{
int ID { get; }
string Text { get; set; }
}
The concept is that the DAL implements such interfaces, along with a generic IDataProvider<TData> interface, after which the DAL becomes easily replaceable. And there is the following class, which uses it:
public class Article
{
private IArticleData Data { get; set; }
public int ID
{
get { return Data.ID; }
}
public string Text
{
get { return Data.Text; }
set { Data.Text = value; }
}
private Article(IArticleData data)
{
Data = data;
}
public static Article FindByID(int id)
{
IDataProvider<IArticleData> provider = IoC.Resolve<IDataProvider<IArticleData>>();
return new Article(provider.FindByID(id));
}
}
This makes the entire system independent of the actual DAL implementation (which would be in the example, IDataProvider<IArticleData>).
Then imagine a situation in which this functionality is not really enough, and I'd like to extend it. In the above example, I don't have any options to do it, but if I make it implement an interface:
public interface IArticle
{
int ID { get; }
string Text { get; set; }
}
public class Article : IArticle
{
...
}
And then, I remove all dependencies to the Article class and start resolving it as a transient IArticle component with an IoC.
For example, in Castle: <component id="ArticleEntity" service="IArticle" type="Article" lifestyle="transient" />
After this, if I have to extend it, that would be this simple:
public class MyArticle : Article
{
public string MyProperty { ..... }
}
And all I have to do is change the configuration to this: <component id="ArticleEntity" service="IArticle" type="MyArticle" lifestyle="transient" />
So anyone using the system in question would be able to replace any class simply by rewriting a line in the configuration, and all the other entities would keep working correctly, because the new one implements the same functionality as the old one.
By the way, this seems to be a good solution for the "separation of concerns" philosophy.
My question is, is this the right thing to do?
After some serious thinking, I couldn't figure out any better way to do this. I also considered MEF, but it seems to be oriented toward making plugins, not toward replacing or extending already-complete parts of a system like this.
I read many SO questions (and also other sources) about the topic, the most notable are these:
How should I handle my Entity/Domain Objects using IoC/Dependency Injection? and
IoC, Where do you put the container?
And I'm also afraid that I'm falling to the problems described on the following pages:
http://martinfowler.com/bliki/AnemicDomainModel.html and
http://hendryluk.wordpress.com/2008/05/10/should-domain-entity-be-managed-by-ioc/
And one more thing: this would increase the testability of the entire system, wouldn't it?
What do you think?
EDIT:
Another option would be to create a Factory pattern for these entities, but IoC.Resolve<IArticle> is way simpler than IoC.Resolve<IArticleFactory>().CreateInstance()
I think you may be overcomplicating things. Would you ever have a need to replace Article with another type that implemented IArticle?
IoC containers are best used when you have a higher-level component that depends on a lower-level component, and you want the higher-level component to depend on an abstraction of that component because the lower-level component performs some operations internally that make it difficult to test the higher-level component e.g. database access. Or the lower-level component might represent a particular strategy in your application that can be interchangeable with other strategies e.g. a database gateway that abstracts out the details of working with vendor-specific database APIs.
As Article is a simple, POCO-style class, it's unlikely that you would gain any benefits creating instances of it though an IoC container.
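To make that contrast concrete, here is a sketch of keeping Article a plain class that only receives the IArticleData abstraction in its constructor; only the data provider would come from the container. The interface follows the question, and FakeArticleData is an invented stand-in for a DAL implementation:

```csharp
// Interface from the question: the DAL-facing data abstraction.
public interface IArticleData
{
    int ID { get; }
    string Text { get; set; }
}

// Plain, POCO-style entity: no container involvement, just constructor injection
// of the data it wraps.
public class Article
{
    private readonly IArticleData _data;
    public Article(IArticleData data) { _data = data; }

    public int ID => _data.ID;
    public string Text
    {
        get { return _data.Text; }
        set { _data.Text = value; }
    }
}

// Hypothetical in-memory stand-in for a real DAL implementation.
public class FakeArticleData : IArticleData
{
    public int ID { get; set; }
    public string Text { get; set; }
}
```

This keeps the entity trivially testable (hand it a fake) without registering it in the container, which is the benefit the answer is pointing at.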