I'm currently using an IoC container, Unity, for my program.
I have multiple chained factories, each calling the next to create an object it needs to populate a property. All the factories use the same raw data object to build their respective objects; the raw data object describes how to create all the various objects. Currently each factory has a Create method that takes a couple of parameters stating which location the object represents.
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
Injecting the object into the Create() methods seems more procedural than object oriented. However, if I inject the object into each factory's constructor, then how would I resolve each factory correctly? Not to mention that these factories need to be able to work on different raw data objects. Maybe there is a better architecture overall?
Below represents the type of structure I have, minus passing the raw object anywhere.
class PhysicalObjectFactory
{
    private readonly StructureAFactory _structureAFactory;
    private readonly Parser _parser;

    public PhysicalObjectFactory(StructureAFactory structureAFactory, Parser parser)
    {
        _structureAFactory = structureAFactory;
        _parser = parser;
    }

    public PhysicalObject CreatePhysicalObject()
    {
        RawDataObject rawDataObject = _parser.GetFromFile("foo.txt");
        // do stuff
        PhysicalObject physicalObject = new PhysicalObject();
        physicalObject.StructureA = _structureAFactory.Create(num1, num2); // num1, num2 come from the "do stuff" elided above
        // do more stuff
        return physicalObject;
    }
}
class StructureAFactory
{
    private readonly StructureBFactory _structureBFactory;

    public StructureAFactory(StructureBFactory structureBFactory)
    {
        _structureBFactory = structureBFactory;
    }

    public StructureA Create(int a, int b)
    {
        // do stuff
        StructureA structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(num76, num33);
        // do more stuff
        return structureA;
    }
}

class StructureBFactory
{
    public StructureBFactory() { }

    public StructureB Create(int a, int b)
    {
        StructureB structureB = new StructureB();
        // do stuff
        return structureB;
    }
}
My problem is how/where do I pass in the raw data object to each
factory in order for them to do their jobs?
In general you should pass in runtime data through methods and compile-time/design-time/configuration data through constructor injection.
Your services are composed at a different moment in time than when they are used. Those services can live for a long time, which means they can be used many times with different runtime values. If you make this distinction between runtime data and data that doesn't change throughout the lifetime of the service, your options become much clearer.
So the question is whether this raw data you're passing in changes on each call or is fixed. Perhaps it is partially fixed; in that case you should separate the data and only pass the runtime part through the Create methods. Since the factories are chained, it seems natural that the data each factory needs to create its part of the object is handed to it through its Create method.
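Applied to the factories in the question, that split might look like the following sketch, assuming the RawDataObject itself is pure runtime data (the location parameters travel with it):

public StructureA Create(RawDataObject rawData, int a, int b)
{
    // a and b select the location; rawData describes how to build the object.
    // Only the constructor-injected StructureBFactory is a fixed dependency.
    StructureA structureA = new StructureA();
    structureA.StructureB = _structureBFactory.Create(rawData, a, b);
    return structureA;
}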
Sometimes, however, you've got data that's in between: data that will change during the lifetime of the application, but that you don't want to pass on through method calls, because it's not up to the caller to determine those values. This is contextual information. A clear example is information about the logged-in user executing the request. You don't want the caller (for instance, your presentation layer) to pass that information on, since this is extra work and a potential security risk if the presentation layer forgets to pass it on or accidentally passes on an invalid value.
In that case the most common solution is to inject a service that provides consumers with this information. For user information you would inject an IUserContext service that exposes a UserName or UserId property, perhaps an IsInRole(string) method, or something similar. The trick is that it is not the user information itself that is injected into a consumer, but a service that gives access to it. In other words, retrieval of the user information is deferred. This keeps the composed object graph independent of that contextual information, which makes the graph easier to compose and validate.
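A minimal sketch of such a context service (the ASP.NET-backed implementation below is an assumption for illustration):

public interface IUserContext
{
    string UserName { get; }
    bool IsInRole(string role);
}

// One possible web implementation. Retrieval is deferred until the
// member is actually read, so the composed object graph stays fixed
// while each call picks up the current request's user.
public class AspNetUserContext : IUserContext
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }

    public bool IsInRole(string role)
    {
        return HttpContext.Current.User.IsInRole(role);
    }
}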
Related
I'm working on a multi-tenant web application. At the beginning of each request, the web application will use either the URL or a cookie to determine the tenant ID of the current request. I want to make this available to my data layer so that it can automatically include the ID in SQL queries without having to explicitly pass it down through the layers for each query. However, I don't want my data layer to be dependent on System.Web or to only be usable by a web application.
So I've created an interface in the data layer ITenantIDProvider
public interface ITenantIDProvider
{
    Guid GetTenantID();
}
And it could be implemented something like this:
public class TenantIDProvider : ITenantIDProvider
{
    public Guid GetTenantID()
    {
        return (Guid)HttpContext.Current.Items["TenantID"];
    }
}
In reality it will be something more complex than just retrieving from Items but the key is that it will be using the current request. There is also a business logic layer in between but it just forwards the ID along so I left that code out for simplicity.
When the web application starts it passes a new instance of TenantIDProvider to the data layer which passes it into an instance of a custom EntityFramework IDbContextInterceptor:
//Static method in data layer called at application start-up
public static void SetTenantIDProvider(ITenantIDProvider tenantIDProvider)
{
    System.Data.Entity.Infrastructure.Interception.DbInterception.Add(new MyContextInterceptor(tenantIDProvider));
}
DbInterception.Add only gets called once at application startup, so I believe it stores the given interceptor in a static collection, which means my interceptor instance is common to all requests. However, since my tenantIDProvider just wraps access to the current request (and, importantly, sets no instance variables), I shouldn't need to worry about a race condition, right? I've attempted to debug through two threads, freezing them at different times to test it, and it seems to work as intended. I just feel a little uneasy about an instance of an object shared by all threads/requests accessing request-specific data.
I need to access my business layer object 4 times with different constructor arguments.
Specifically, I need to access 4 different back-end systems through my separate Data Access Layer.
What should I do:
1) Instantiate 4 separate objects with different constructors?
2) Instantiate one object and change the public property every time?
As it is now, in my HomeController I have the following:
var obj = new BarcodeBLL(new ERPConfig
{
    AS400ControlLibrary = ConfigurationManager.AppSettings["ControlLibrary"],
    AS400Library = ConfigurationManager.AppSettings["DataLibrary"],
    ConnectionString = ConfigurationManager.ConnectionStrings["AS400"].ConnectionString
});
To me it would seem obvious to follow #2, but I would like to know if I am correct and why.
If you have 4 identical systems, it would seem logical to have a single class representing such systems. When you need access to one of these systems, you instantiate this type, passing the correct connection string to the constructor.
You may want to hide the details of which connection string is actually being used behind a factory or in the configuration of a DI container.
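As a sketch, such a factory might look like the following (the per-system configuration key naming is an assumption, not from the original code):

public class BarcodeBLLFactory
{
    // Centralises which configuration belongs to which back-end system,
    // so callers only name the system they want.
    public BarcodeBLL Create(string systemName)
    {
        return new BarcodeBLL(new ERPConfig
        {
            AS400ControlLibrary = ConfigurationManager.AppSettings[systemName + ".ControlLibrary"],
            AS400Library = ConfigurationManager.AppSettings[systemName + ".DataLibrary"],
            ConnectionString = ConfigurationManager.ConnectionStrings[systemName].ConnectionString
        });
    }
}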
I have multiple business objects in my application (C#, WinForms, WinXP). When the user executes some action on the UI, each of these objects is modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive record of activity going on in the application.
Many among these objects contain lists of other objects, and this nesting can be several levels deep. The two main requirements for any solution are:
capture changes as accurately as possible
keep the performance cost to a minimum.
An example of a business object:
public class MainClass1
{
    public MainClass1()
    {
        detailCollection1 = new ClassDetailCollection1();
        detailCollection2 = new ClassDetailCollection2();
    }

    private Int64 id;
    public Int64 ID
    {
        get { return id; }
        set { id = value; }
    }

    private DateTime timeStamp;
    public DateTime TimeStamp
    {
        get { return timeStamp; }
        set { timeStamp = value; }
    }

    private string category = string.Empty;
    public string Category
    {
        get { return category; }
        set { category = value; }
    }

    private string action = string.Empty;
    public string Action
    {
        get { return action; }
        set { action = value; }
    }

    private ClassDetailCollection1 detailCollection1;
    public ClassDetailCollection1 DetailCollection1
    {
        get { return detailCollection1; }
    }

    private ClassDetailCollection2 detailCollection2;
    public ClassDetailCollection2 DetailCollection2
    {
        get { return detailCollection2; }
    }

    //more collections here
}

public class ClassDetailCollection1
{
    private List<DetailType1> detailType1Collection;
    public List<DetailType1> DetailType1Collection
    {
        get { return detailType1Collection; }
    }

    private List<DetailType2> detailType2Collection;
    public List<DetailType2> DetailType2Collection
    {
        get { return detailType2Collection; }
    }
}

public class ClassDetailCollection2
{
    private List<DetailType3> detailType3Collection;
    public List<DetailType3> DetailType3Collection
    {
        get { return detailType3Collection; }
    }

    private List<DetailType4> detailType4Collection;
    public List<DetailType4> DetailType4Collection
    {
        get { return detailType4Collection; }
    }
}

//more other Types like MainClass1 above...
I can assume that I will have access to the old values and new values of the object.
In that case, I can think of two ways to try to do this without being told explicitly what has changed:
1) Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed. This approach seems more flexible, in that I would not have to worry about new properties being added to any of the objects, but it also seems performance-heavy.
2) Log changes in the setters of all properties of all the objects. Other than the fact that this would require changing a lot of code, it seems more brute-force. It would be maintenance-heavy and inflexible if someone updates any of the object types, but it may also be performance-light, since I would not need to check what changed and would log exactly the properties that change.
Suggestions for better approaches and/or improvements to the above approaches are welcome.
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually into all method executions; getters and setters are just special methods), allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
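If a full AOP framework is overkill, the same setter-interception idea can be approximated by hand with a small base class; a minimal sketch (all types and names here are illustrative, not from the question):

public abstract class ChangeTrackingBase
{
    private readonly List<string> _changeLog = new List<string>();

    public IList<string> ChangeLog
    {
        get { return _changeLog; }
    }

    // Called from every setter; records old and new values when they differ.
    protected void SetAndLog<T>(ref T field, T value, string propertyName)
    {
        if (!EqualityComparer<T>.Default.Equals(field, value))
        {
            _changeLog.Add(string.Format("{0}: '{1}' -> '{2}'", propertyName, field, value));
            field = value;
        }
    }
}

public class TrackedItem : ChangeTrackingBase
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { SetAndLog(ref _name, value, "Name"); }
    }
}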
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of the objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge to the domain about the business logic.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it has processed and check the current object against that list before processing it (making anti-backtracking an N^2 operation). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph-walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern and include the creation of audit logs, and any other pre-persistence logic, in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm-selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
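A rough sketch of that shape, reusing the Borrower/Loan example above (all names are illustrative): each change type states explicitly what happened, and a matching logger is picked, instead of the framework inferring the change from a diff of the object graph.

public interface IAuditLogger<TChange>
{
    void Log(TChange change);
}

// The unit of work names the operation it performed.
public class LoanApprovedChange
{
    public long BorrowerId { get; set; }
    public decimal Amount { get; set; }
}

public class LoanApprovedLogger : IAuditLogger<LoanApprovedChange>
{
    public void Log(LoanApprovedChange change)
    {
        // Produce exactly the message the client wants for this operation.
        Console.WriteLine("Loan of {0:C} approved for borrower {1}.",
            change.Amount, change.BorrowerId);
    }
}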
See here how adempiere did the changelog: http://wiki.adempiere.net/Change_Log
I'm currently refactoring some code on a project that is wrapping up, and I ended up putting a lot of business logic into service classes rather than into the domain objects. At this point most of the domain objects are data containers only. I had decided to write most of the business logic in service objects and to refactor everything afterwards into better, more reusable, and more readable shapes. That way I could decide which code should be placed into domain objects, which should be spun off into new objects of its own, and which should be left in a service class. So I have some code:
public decimal CalculateBatchTotal(VendorApplicationBatch batch)
{
    IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;
    return total;
}
This code seems like it would make a good addition to a domain object, because its only input parameter is the domain object itself. It seems like a perfect candidate for some refactoring. The only problem is that this method calls another object's repository, which makes me want to leave it in the service class.
My questions are thus:
Where would you put this code?
Would you break this function up?
Where would someone following strict Domain-Driven Design put it?
Why?
Thanks for your time.
Edit Note: Can't use an ORM on this one, so I can't use a lazy loading solution.
Edit Note2: I can't alter the constructor to take parameters, because of how the would-be data layer instantiates the domain objects using reflection (not my idea).
Edit Note3: I don't believe a batch object should be able to total just any list of applications; it should only be able to total the applications in that particular batch. Otherwise, it makes more sense to me to leave the function in the service class.
You shouldn't even have access to the repositories from the domain object.
What you can do is either let the service give the domain object the appropriate info or have a delegate in the domain object which is set by a service or in the constructor.
// Sketch: a delegate supplies the applications, so the domain object
// never touches the repository directly.
public DomainObject(Func<Guid, IList<VendorApplication>> getApplicationsByBatchId)
{
    ...
}
I'm no expert on DDD, but I remember an article from the great Jeremy Miller that answered this very question for me. You typically want logic related to your domain objects inside those objects, while your service class executes the methods that contain this logic. This helped me push domain-specific logic into the entity classes and keep my service classes less bulky (I found myself putting too much logic inside the service classes, like you mentioned).
Edit: Example
I use the enterprise library for simple validation, so in the entity class I will set an attribute like so:
[StringLengthValidator(1, 100)]
public string Username {
    get { return mUsername; }
    set { mUsername = value; }
}
The entity inherits from a base class with the following IsValid method, which ensures each object meets its validation criteria:
public bool IsValid()
{
    mResults = new ValidationResults();
    Validate(mResults);
    return mResults.IsValid();
}

[SelfValidation()]
public virtual void Validate(ValidationResults results)
{
    if (!object.ReferenceEquals(this.GetType(), typeof(BusinessBase<T>))) {
        Validator validator = ValidationFactory.CreateValidator(this.GetType());
        results.AddAllResults(validator.Validate(this));
    }
    //before we return, if we have any validation results map them into the
    //broken rules property so the parent class can display them to the end user
    if (!results.IsValid()) {
        mBrokenRules = new List<BrokenRule>();
        foreach (Microsoft.Practices.EnterpriseLibrary.Validation.ValidationResult result in results) {
            mRule = new BrokenRule();
            mRule.Message = result.Message;
            mRule.PropertyName = result.Key.ToString();
            mBrokenRules.Add(mRule);
        }
    }
}
Next we need to execute this "IsValid" method in the service class save method, like so:
public void SaveUser(User UserObject)
{
    if (UserObject.IsValid()) {
        mRepository.SaveUser(UserObject);
    }
}
A more complex example might be a bank account: the deposit logic lives inside the account object, but the service class calls this method.
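A minimal sketch of that bank account shape (all types here are illustrative):

public interface IAccountRepository
{
    Account Get(Guid accountId);
    void Save(Account account);
}

public class Account
{
    public decimal Balance { get; private set; }

    // The domain rule lives next to the data it guards.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit must be positive.", "amount");
        Balance += amount;
    }
}

public class AccountService
{
    private readonly IAccountRepository _repository;

    public AccountService(IAccountRepository repository)
    {
        _repository = repository;
    }

    // The service orchestrates: load, invoke the domain logic, persist.
    public void Deposit(Guid accountId, decimal amount)
    {
        Account account = _repository.Get(accountId);
        account.Deposit(amount);
        _repository.Save(account);
    }
}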
Why not pass in an IList<VendorApplication> as the parameter instead of a VendorApplicationBatch? The calling code would presumably come from a service that has access to the AppRepo. That way your repository access stays up where it belongs, while your domain function remains blissfully ignorant of where the data came from.
As I understand it (there is not enough info to know if this is the right design), VendorApplicationBatch should contain a lazily loaded IList inside the domain object, and the logic should stay in the domain.
For Example (air code):
public class VendorApplicationBatch
{
    private IList<VendorApplication> Applications { get; set; }

    public decimal CalculateBatchTotal()
    {
        if (Applications == null || Applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in Applications)
            total += app.Amount;
        return total;
    }
}
This is easily done with an ORM like NHibernate and I think it would be the best solution.
It seems to me that your CalculateTotal is a service for collections of VendorApplication objects, and that returning the collection of VendorApplications for a batch fits naturally as a property of the Batch class. So some other service/controller/whatever would retrieve the appropriate collection of VendorApplications from a batch and pass them to a VendorApplicationTotalCalculator service (or something similar). But that may break some DDD aggregate-root rules or some such thing I'm ignorant of (DDD novice).
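As a sketch, that calculator service could be as small as:

public class VendorApplicationTotalCalculator
{
    // Totals any collection of applications; it knows nothing about
    // batches or repositories.
    public decimal CalculateTotal(IEnumerable<VendorApplication> applications)
    {
        decimal total = 0m;
        foreach (VendorApplication app in applications)
            total += app.Amount;
        return total;
    }
}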
The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object-Relational Mapping) between the domain objects and the database. This is all well and good, and most of the time it actually works well!
The problem comes about with Castle ActiveRecord's support for asynchronous execution, or more specifically the SessionScope that manages the session that objects belong to. Long story short: bad stuff happens!
We are therefore looking for a way to easily convert (think automagically) from the Domain objects (who know that a DB exists and care) to the DTO object (who know nothing about the DB and care not for sessions, mapping attributes or all things ORM).
Does anyone have suggestions on how to do this? To start, I am looking for a basic one-to-one mapping of objects: domain object Person would be mapped to, say, PersonDTO. I do not want to do this manually, since that is a waste.
Obviously reflection comes to mind, but I am hoping, with some of the better IT knowledge floating around this site, that something "cooler" will be suggested.
Oh, I am working in C#, and the ORM objects, as said before, are mapped with Castle ActiveRecord.
Example code:
By @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, a capture form controller, domain objects, an ActiveRecord repository, and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file.
Basic details of example:
The Capture Form
The capture form has a reference to the controller:
private CompanyCaptureController MyController { get; set; }
On initialisation, the form calls MyController.Load():
private void InitForm()
{
    MyController = new CompanyCaptureController(this);
    MyController.Load();
}
This will return to a method called LoadCompleted():
public void LoadCompleted(Company loadCompany)
{
    _context.Post(delegate
    {
        CurrentItem = loadCompany;
        bindingSource.DataSource = CurrentItem;
        bindingSource.ResetCurrentItem();
        //TODO: This line will throw the exception since the session scope used to fetch loadCompany is now gone.
        grdEmployees.DataSource = loadCompany.Employees;
    }, null);
}
This is where the "bad stuff" occurs, since we are using the child list of Company, which is set to lazy load.
The Controller
The controller has a Load method that is called from the form; it then uses the async helper to call the LoadCompany method asynchronously and return to the capture form's LoadCompleted method.
public void Load()
{
    new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted);
}
The LoadCompany() method simply uses the repository to find a known company:
public Company LoadCompany()
{
    return ActiveRecordRepository<Company>.Find(Setup.company.Identifier);
}
The rest of the example is rather generic: it has two domain classes that inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
I solved a problem very similar to this where I copied the data out of a lot of older web service contracts into WCF data contracts. I created a number of methods that had signatures like this:
public static T ChangeType<S, T>(this S source) where T : class, new()
The first time this method (or any of the other overloads) executes for a pair of types, it looks at the properties of each type and decides which ones exist in both, based on name and type. It takes this "member intersection" and uses the DynamicMethod class to emit the IL to copy the source type to the target type, then it caches the resulting delegate in a thread-safe static dictionary.
Once the delegate is created, it's obscenely fast and I have provided other overloads to pass in a delegate to copy over properties that don't match the intersection criteria:
public static T ChangeType<S, T>(this S source, Action<S, T> additionalOperations) where T : class, new()
... so you could do this for your Person to PersonDTO example:
Person p = new Person( /* set whatever */ );
PersonDTO dto = p.ChangeType<Person, PersonDTO>();
And any properties on both Person and PersonDTO (again, with the same name and type) would be copied by a runtime-emitted method, and any subsequent calls would not have to emit anything, but would reuse the same emitted code for those types in that order (i.e., copying PersonDTO to Person would incur a separate hit to emit that code).
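For a rough idea, here is a simplified reflection-based sketch of the member-intersection step (the real implementation instead emits IL with DynamicMethod and caches the delegate, which is what makes it fast):

public static class TypeChanger
{
    public static T ChangeType<S, T>(this S source) where T : class, new()
    {
        T target = new T();
        PropertyInfo[] targetProps = typeof(T).GetProperties();

        foreach (PropertyInfo sourceProp in typeof(S).GetProperties())
        {
            // Member intersection: same name and type, readable on the
            // source and writable on the target.
            PropertyInfo targetProp = Array.Find(targetProps,
                p => p.Name == sourceProp.Name
                  && p.PropertyType == sourceProp.PropertyType
                  && p.CanWrite);

            if (targetProp != null && sourceProp.CanRead)
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
        }

        return target;
    }
}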
It's too much code to post, but if you are interested I will make the effort to upload a sample to SkyDrive and post the link here.
Richard
Use ValueInjecter; with it you can map anything to anything, e.g.
object <-> object
object <-> Form/WebForm
DataReader -> object
It also has cool features like flattening and unflattening, and the download contains lots of samples.
You should try AutoMapper, which I've blogged about here:
http://januszstabik.blogspot.com/2010/04/automatically-map-your-heavyweight-orm.html#links
As long as the properties are named the same on both your objects, AutoMapper will handle it.
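If memory serves, with AutoMapper's classic static API the usage is roughly (Person/PersonDTO taken from the question):

// Configure the mapping once at application start-up.
Mapper.CreateMap<Person, PersonDTO>();

// Then wherever a DTO is needed:
PersonDTO dto = Mapper.Map<Person, PersonDTO>(person);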
My apologies for not really putting the details in here, but a basic OO approach would be to make the DTO a member of the ActiveRecord class and have the ActiveRecord class delegate its accessors and mutators to the DTO. You could use code generation or refactoring tools to build the DTO classes pretty quickly from the ActiveRecord classes.
Actually, I am totally confused now, because you are saying: "We are therefore looking for a way to easily convert (think automagically) from the Domain objects (who know that a DB exists and care) to the DTO object (who know nothing about the DB and care not for sessions, mapping attributes or all things ORM)."
Domain objects know and care about the DB? Isn't the whole point of domain objects to contain business logic ONLY and be totally unaware of the DB and ORM? You HAVE to have these objects? You just need to FIX them if they contain all that stuff. That's why I am a bit confused about how DTOs come into the picture.
Could you provide more details on what problems you're facing with lazy loading?