I have multiple business objects in my application (C#, Winforms, WinXP). When the user executes some action on the UI, each of these objects are modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive tracking of activity going on in the application.
Many of these objects contain lists of other objects, and this nesting can be several levels deep. The two main requirements for any solution are:
capture changes as accurately as possible
keep the performance cost to a minimum.
Example of a business object:
public class MainClass1
{
    public MainClass1()
    {
        detailCollection1 = new ClassDetailCollection1();
        detailCollection2 = new ClassDetailCollection2();
    }

    private Int64 id;
    public Int64 ID
    {
        get { return id; }
        set { id = value; }
    }

    private DateTime timeStamp;
    public DateTime TimeStamp
    {
        get { return timeStamp; }
        set { timeStamp = value; }
    }

    private string category = string.Empty;
    public string Category
    {
        get { return category; }
        set { category = value; }
    }

    private string action = string.Empty;
    public string Action
    {
        get { return action; }
        set { action = value; }
    }

    private ClassDetailCollection1 detailCollection1;
    public ClassDetailCollection1 DetailCollection1
    {
        get { return detailCollection1; }
    }

    private ClassDetailCollection2 detailCollection2;
    public ClassDetailCollection2 DetailCollection2
    {
        get { return detailCollection2; }
    }

    //more collections here
}

public class ClassDetailCollection1
{
    private List<DetailType1> detailType1Collection;
    public List<DetailType1> DetailType1Collection
    {
        get { return detailType1Collection; }
    }

    private List<DetailType2> detailType2Collection;
    public List<DetailType2> DetailType2Collection
    {
        get { return detailType2Collection; }
    }
}

public class ClassDetailCollection2
{
    private List<DetailType3> detailType3Collection;
    public List<DetailType3> DetailType3Collection
    {
        get { return detailType3Collection; }
    }

    private List<DetailType4> detailType4Collection;
    public List<DetailType4> DetailType4Collection
    {
        get { return detailType4Collection; }
    }
}
//more other Types like MainClass1 above...
I can assume that I will have access to the old values and new values of the object.
In that case I can think of two ways to do this without being told explicitly what has changed.
Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed. This approach seems more flexible, in that I would not have to worry if new properties are added to any of the objects, but it also seems performance-heavy (a rough sketch of this approach appears after these two options).
Log changes in the setters of all the properties of all the objects. Other than the fact that this would require changing a lot of code, it seems more brute-force: it will be maintenance-heavy and inflexible if someone updates any of the object types. But it may also be performance-light, since I would not need to check what changed; I would log exactly the properties that changed.
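For illustration, here is a minimal sketch of the reflection-based comparison from the first option, assuming a helper named ChangeTracker (the name and the logging callback are invented; nested collections would need recursive handling, which is omitted here):

using System;
using System.Reflection;

// Minimal sketch of approach 1: compare top-level readable properties
// of two snapshots of the same type and report any differences.
public static class ChangeTracker
{
    public static void LogDifferences<T>(T oldObj, T newObj, Action<string> logChange)
    {
        foreach (PropertyInfo prop in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            // Skip indexers and write-only properties.
            if (!prop.CanRead || prop.GetIndexParameters().Length > 0)
                continue;

            object oldValue = prop.GetValue(oldObj, null);
            object newValue = prop.GetValue(newObj, null);

            if (!object.Equals(oldValue, newValue))
                logChange(string.Format("{0}: '{1}' -> '{2}'", prop.Name, oldValue, newValue));
        }
    }
}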
Suggestions for any better approaches and/or improvements to above approaches are welcome
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually all method execution, getters and setters are just special methods) allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
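To give a feel for the aspect style, here is a rough sketch of a PostSharp property-interception aspect (based on PostSharp's LocationInterceptionAspect; treat the details as an approximation and check the PostSharp documentation for the exact API):

using System;
using PostSharp.Aspects;

// Sketch: intercepts property writes and logs value changes.
// Apply as [LogSetter] on a property, or multicast it across a type.
[Serializable]
public class LogSetterAttribute : LocationInterceptionAspect
{
    public override void OnSetValue(LocationInterceptionArgs args)
    {
        object oldValue = args.GetCurrentValue();
        args.ProceedSetValue(); // perform the actual assignment

        if (!object.Equals(oldValue, args.Value))
            Console.WriteLine("{0} changed from '{1}' to '{2}'",
                args.LocationName, oldValue, args.Value);
    }
}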
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; it seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge about the business logic to the domain.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it's processed and check the current object against those it's handled before processing it (making anti-backtracking an N^2 operation; a sketch of this bookkeeping follows this list). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph-walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
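As an illustration of the anti-backtracking bookkeeping mentioned above, here is a sketch of visited-object tracking keyed on reference identity (the comparer is hand-rolled; note that a hash-based set also avoids the N^2 cost of checking a plain list):

using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Compare by reference identity so that overridden Equals/GetHashCode
// on domain types cannot confuse the walker.
sealed class ReferenceComparer : IEqualityComparer<object>
{
    public bool Equals(object x, object y)
    {
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(object obj)
    {
        return RuntimeHelpers.GetHashCode(obj);
    }
}

// Inside the walker:
// var visited = new HashSet<object>(new ReferenceComparer());
// if (!visited.Add(current)) return; // already processed; stop here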
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern, and include the creation of audit logs or any pre-persistence logic in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
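To make the "audit loggers" idea concrete, here is a hedged sketch (every name is invented for illustration; the point is that the unit of work names the kind of change, and the framework picks the matching logger):

using System.Collections.Generic;

// Illustrative only: each logger knows how to describe one kind of change.
public interface IAuditLogger
{
    bool CanHandle(string changeType);
    void Log(object entity);
}

public class AuditLog
{
    private readonly List<IAuditLogger> loggers = new List<IAuditLogger>();

    public void Register(IAuditLogger logger)
    {
        loggers.Add(logger);
    }

    // The business-layer unit of work calls this with an explicit change
    // type ("LoanApproval", "Consolidation", ...), so no graph-walking or
    // reflection is needed to figure out what happened.
    public void Record(string changeType, object entity)
    {
        foreach (IAuditLogger logger in loggers)
            if (logger.CanHandle(changeType))
                logger.Log(entity);
    }
}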
See here how adempiere did the changelog: http://wiki.adempiere.net/Change_Log
I will use Airbnb as an example.
When you sign up for an Airbnb account, you can become a host by creating a listing. To create a listing, the Airbnb UI guides you through the process in multiple steps. It will also remember the furthest step you've reached, so the next time you want to resume the process, it will redirect you to where you left off.
I've been struggling to decide whether I should make the listing the aggregate root and define the available steps as methods on it, or treat each step as its own aggregate root so that the aggregates stay small.
Listing as Aggregate Root
public sealed class Listing : AggregateRoot
{
    private List<Photo> _photos;

    public Host Host { get; private set; }
    public PropertyAddress PropertyAddress { get; private set; }
    public Geolocation Geolocation { get; private set; }
    public Pricing Pricing { get; private set; }
    public IReadOnlyList<Photo> Photos => _photos.AsReadOnly();
    public ListingStep LastStep { get; private set; }
    public ListingStatus Status { get; private set; }

    private Listing(Host host, PropertyAddress propertyAddress)
    {
        this.Host = host;
        this.PropertyAddress = propertyAddress;
        this.LastStep = ListingStep.GeolocationAdjustment;
        this.Status = ListingStatus.Draft;
        _photos = new List<Photo>();
    }

    public static Listing Create(Host host, PropertyAddress propertyAddress)
    {
        // validations
        // ...
        return new Listing(host, propertyAddress);
    }

    public void AdjustLocation(Geolocation newGeolocation)
    {
        // validations
        // ...
        if (this.Status != ListingStatus.Draft || this.LastStep < ListingStep.GeolocationAdjustment)
        {
            throw new InvalidOperationException();
        }

        this.Geolocation = newGeolocation;
    }

    ...
}
Most of the complex classes in the aggregate root are just value objects, and ListingStatus is just a simple enum:
public enum ListingStatus : int
{
    Draft = 1,
    Published = 2,
    Unlisted = 3,
    Deleted = 4
}
But ListingStep could be an enumeration class that stores the next step the current step can advance to:
using Ardalis.SmartEnum;

public abstract class ListingStep : SmartEnum<ListingStep>
{
    public static readonly ListingStep GeolocationAdjustment = new GeolocationAdjustmentStep();
    public static readonly ListingStep Amenities = new AmenitiesStep();
    ...

    private ListingStep(string name, int value) : base(name, value) { }

    public abstract ListingStep Next();

    private sealed class GeolocationAdjustmentStep : ListingStep
    {
        public GeolocationAdjustmentStep() : base("Geolocation Adjustment", 1) { }

        public override ListingStep Next()
        {
            return ListingStep.Amenities;
        }
    }

    private sealed class AmenitiesStep : ListingStep
    {
        public AmenitiesStep() : base("Amenities", 2) { }

        public override ListingStep Next()
        {
            return ListingStep.Photos;
        }
    }

    ...
}
The benefit of having everything in the listing aggregate root is that transactional consistency is ensured for everything, and the steps are defined as a domain concern.
The drawback is that the aggregate root is huge. On each step, in order to call the listing actions, you have to load up the listing aggregate root, which contains everything.
To me, it sounds like, except for the geolocation adjustment (which might depend on the property address), the steps don't depend on each other. For example, the title and description of the listing don't care what photos you upload.
So I was wondering whether I could treat each step as its own aggregate root.
Each Step as Its Own Aggregate Root
public sealed class Listing : AggregateRoot
{
    public Host Host { get; private set; }
    public PropertyAddress PropertyAddress { get; private set; }

    private Listing(Host host, PropertyAddress propertyAddress)
    {
        this.Host = host;
        this.PropertyAddress = propertyAddress;
    }

    public static Listing Create(Host host, PropertyAddress propertyAddress)
    {
        // Validations
        // ...
        return new Listing(host, propertyAddress);
    }
}

public sealed class ListingGeolocation : AggregateRoot
{
    public Guid ListingId { get; private set; }
    public Geolocation Geolocation { get; private set; }

    private ListingGeolocation(Guid listingId, Geolocation geolocation)
    {
        this.ListingId = listingId;
        this.Geolocation = geolocation;
    }

    public static ListingGeolocation Create(Guid listingId, Geolocation geolocation)
    {
        // Validations
        // ...
        return new ListingGeolocation(listingId, geolocation);
    }
}
...
The benefit of having each step as its own aggregate root is that it keeps the aggregate roots small (to some extent I even feel like they're too small!), so persisting them back to data storage should be faster.
The drawback is that I lose the transactional consistency of the listing aggregate. For example, the listing geolocation aggregate only references the listing by its Id. I don't know if I should put a listing value object there instead so that I can keep more information useful in the context, like the last step, listing status, etc.
Close as Opinion-based?
I can't find any example online that shows how to model this wizard-like flow in DDD. Also, most examples I've found about splitting a huge aggregate root into multiple smaller ones are about one-to-many relationships, but my example here is mostly about one-to-one relationships (except photos, probably).
I think my question would not be opinion-based, because
There are only finite ways to go about modeling aggregates in DDD
I've introduced a concrete business model, Airbnb, as an example.
I've listed 2 approaches I've been thinking.
You can suggest me which approach you would take and why, or other approaches different from the two I listed and the reasons.
Let's discuss a couple of reasons to split up a large-cluster aggregate:
Transactional issues in multi-user environments.
In our case, there's only one Host managing the Listing. Only reviews could be posted by other users. Modelling Review as a separate aggregate allows transactional consistency on the root Listing.
Performance and scalability.
As always, it depends on your specific use case and needs. Although, once the Listing has been created, you would usually query the entire listing in order to present it to the user (apart from perhaps a collapsed reviews section).
Now let's have a look at the candidates for value objects (requiring no identity):
Location
Amenities
Description and title
Settings
Availability
Price
Remember there are advantages to limiting internal parts as value objects. For one, it greatly reduces overall complexity.
As for the wizard part, the key takeaway is that the current step needs to be remembered:
..., so the next time you want to resume the process, it will redirect you to where you left off.
As aggregates are conceptually a unit of persistence, resuming where you left off will require us to persist partially hydrated aggregates. You could indeed store a ListingStep on the aggregate, but does that really make sense from a domain perspective? Do the Amenities need to be specified before the Description and Title? Is this really a concern for the Listing aggregate or can this perhaps be moved to a Service? When all Listings are created through the use of the same Service, this Service could easily determine where it left off last time.
Pulling this wizard approach into the domain model feels like a violation of the Separation of Concerns principle. The B&B domain experts might very well be indifferent concerning the wizard flow.
Taking all of the above into account, the Listing as aggregate root seems like a good place to start.
UPDATE
I thought about the wizard being a concept of the UI rather than of the domain because, in theory, since each step doesn't depend on the others, you can finish the steps in any order.
Indeed, the steps being independent is a clear indication that there's no real invariant, posed by the aggregate, on the order the data is entered. In this case, it's not even a domain concern.
I have no problem modeling those steps as their own aggregate roots, and have the UI determine where it left off last time.
The wizard steps (pages) shouldn't map to aggregates of their own. Following DDD, user actions will typically be forwarded to an Application API/Service, which in turn can delegate work to domain objects and services. The Application Service is only concerned with technical/infrastructure stuff (e.g. persistence), whereas the domain objects and services hold the rich domain logic and knowledge. This is often referred to as the Onion or Hexagonal architecture. Note that the dependencies point inward, so the domain model depends on nothing else, and knows about nothing else.
Another way to think about wizards is that these are basically data collectors. Often at the last step some sort of processing is done, but all steps before that usually just collect data. You could use this feature to wrap all data when the user closes the wizard (prematurely), send it to the Application API and then hydrate the aggregate and persist it until next time the user comes round. That way you only need to perform basic validation on the pages, but no real domain logic is involved.
My only concern of that approach is that, when all the steps are filled in and the listing is ready to be reviewed and published, who's responsible for it? I thought about the listing aggregate, but it doesn't have all the information.
This is where the Application Service, as a delegator of work, comes into play. By itself it holds no real domain knowledge, but it "knows" all the players involved and can delegate work to them. It's not an unbound context (no pun intended), as you want to keep the transactional scope limited to one aggregate at a time. If not, you'll have to resort to two stage commits, but that's another story.
To wrap it up, you could store the ListingStatus on Listing and make the invariant behind it a responsibility of the root aggregate. As such, it should have all the information, or be provided with it, to update the ListingStatus accordingly. In other words, it's not about the wizard steps; it's about the nouns and verbs that describe the processes behind the aggregate. In this case, the invariant guards that all data has been entered and that the listing is currently in a correct state to be published. From then on, it's illegal to return to, and persist, the aggregate with only partial state or in an incoherent manner.
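As a rough sketch of such an invariant on the single-aggregate Listing from the question (the specific checks are placeholders):

// Illustrative: publishing is guarded by the aggregate itself, not by
// the wizard. Which fields count as "all data" is a placeholder here.
public void Publish()
{
    if (this.Status != ListingStatus.Draft)
        throw new InvalidOperationException("Only a draft listing can be published.");

    bool allDataEntered =
        this.Geolocation != null &&
        this.Pricing != null &&
        _photos.Count > 0;

    if (!allDataEntered)
        throw new InvalidOperationException("The listing is incomplete.");

    this.Status = ListingStatus.Published;
}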
Like any other aggregate. It shouldn't care whether you collect the needed data in a multi-step wizard or in just one screen. It's a UI issue: gathering the data and passing it to the domain at the end of the wizard.
You're trying to design your system based on the UI (the wizard steps)!
In Domain-Driven Design you shouldn't really care about the UI (which is a technical detail); you should look for the bounded contexts, invariants, etc.
For Example:
Listing bounded-context: property and guests, location, amenities, description and title
Booking bounded-context: booking settings, calendar and availability, pricing
Review bounded-context:
The listing doesn't have to be a global one; you can display the listings for which you have all required information from the 'Listing' context and which are available for the search period, etc.
In my experience, DDD was a design methodology that came from a culture of what we'd now call Java backend data modeling. Modern web development has matured and evolved quite a bit since then with Angular/React/Vue frameworks that have their own paradigms about data modeling. Coming from a UX background, I'll elaborate on how to structure UI components that integrate with DDD models.
Separate data from presentation
MVC design works here. Naively, the end result of this workflow is the construction of a Listing domain model. But, I'm sure AirBnB's domain model for a listing is much more complex. Let's approximate that by considering each "step" as a form that constructs independent models. To simplify, let's only consider models for Photo and Location.
Class Photo:
    id
    src

Class Location:
    guid
    geolocation
Provide a view for each model
Think of these UI components as "form" models that should work outside the context of a wizard. All of their fields are nullable; null fields represent incomplete steps. As an invariant, a view is valid iff it can construct a valid instance of the associated model.
Class PhotoView:
    id
    src
    valid { get }

Class LocationView:
    guid
    geolocation
    valid { get }
Define the Controller
Now, consider a View-Model WizardView to help orchestrate the independent views into "Wizard" behavior. We already have the independent views taking care of "valid/invalid" state. Now we just need an idea of "current" step. In the AirBnb UX, it seems like the "current" step is more of a "selected" state where the list item is expanded and all others are collapsed. Either way, a full page transition or "selected" represents the same state of "this step is active <-> all others are inactive." If _selected is null, traverse steps[] for the first invalid step, otherwise, null <--> all valid.
A StepView could display a whole page or, in the case of AirBnb, a single list item, where status == view.valid.
Class WizardView:
    steps[]
    _selected
    selected { get set }
    addStep(StepView)
    submit()

Class StepView:
    title
    view
    status { get }
The submit() method represents whatever handling you want to trigger when all steps are valid and the domain models can be constructed. Notice how I've deferred the actual creation of any real domain model and only maintained "form" or "draft" data structures in the views. Only at the time of submit(), either on button press or as a callback to when the "all valid" event occurs, do these views bubble up data, most likely to make a server request. You can construct a higher-level Listing model here and make that your request payload. However, it is not the Wizard's job to communicate with the backend. It simply pools all the data together for a proper handler to construct a valid request.
Why? Ideally, the frontend should speak the same domain model that the backend does. At the very least your UX models should match one-to-one to high-level aggregates. The idea is for the frontend to interface with a high-level layer of abstraction that the backend is not likely to change, while giving the backend the freedom to decompose and restructure that data into whatever internal domain it needs. In practice, the frontend and backend domains get out of sync, so it's better to leave a layer for data-munging at the request level so that the UX is internally consistent and coherent.
Consider the class below being updated in a database:
public class ProductionLineItem
{
    public int Id { get; set; }
    public DateTime ProductionDate { get; set; }
    public string HandledBy { get; set; }
    public DateTime DateToMarket { get; set; }
}

void UpdateProductionRecord(ProductionLineItem existingRecord, ProductionLineItem modifiedRecord)
{
    existingRecord.Id = modifiedRecord.Id;
    existingRecord.ProductionDate = modifiedRecord.ProductionDate;
    existingRecord.HandledBy = modifiedRecord.HandledBy;
    existingRecord.DateToMarket = modifiedRecord.DateToMarket;
}
The customer wants to keep a log of all changed properties in a dedicated table.
I should be doing something like this:
void UpdateProductionRecordWithLog(ProductionLineItem existingRecord, ProductionLineItem modifiedRecord)
{
    existingRecord.Id = modifiedRecord.Id;

    if (existingRecord.ProductionDate != modifiedRecord.ProductionDate)
    {
        existingRecord.ProductionDate = modifiedRecord.ProductionDate;
        //Log: ProductionDate updated from xyz to abc
    }
    if (existingRecord.HandledBy != modifiedRecord.HandledBy)
    {
        existingRecord.HandledBy = modifiedRecord.HandledBy;
        //Log: HandledBy updated from Mr. John to Mr. Smith
    }
    if (existingRecord.DateToMarket != modifiedRecord.DateToMarket)
    {
        existingRecord.DateToMarket = modifiedRecord.DateToMarket;
        //Log: DateToMarket updated from 2013 to 2014
    }
}
For a small number of properties this should be fine, but once the property count goes beyond 15-20, I believe this would not be the best way to do it.
Can I make my code cleaner? I am open to using a framework like AutoMapper if needed.
There are multiple elegant solutions to your problem; some of them include:
You could use Aspect-Oriented Programming (AOP; for frameworks see this answer) to capture every modification to a property. You could save those changes for later retrieval or invoke events which are then logged.
You could put reflection (e.g. PropertyInfo) to good use here and iterate over all properties, comparing the current values. This will spare you from writing out all the properties by hand.
Reflection and attributes, in conjunction with the properties which need to be logged, will work too: use attributes as a kind of post-it note on those properties which are important enough to be logged. (A sketch of this combination follows below.)
Be aware that reflection might impose some performance penalties.
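A rough sketch of the reflection-plus-attributes combination (the LoggableAttribute and the helper below are invented for illustration):

using System;
using System.Collections.Generic;
using System.Reflection;

// Invented for illustration: mark the properties worth auditing, then
// diff two instances over just those properties via reflection.
[AttributeUsage(AttributeTargets.Property)]
public class LoggableAttribute : Attribute { }

public static class AuditHelper
{
    public static IEnumerable<string> GetChanges<T>(T existing, T modified)
    {
        foreach (PropertyInfo prop in typeof(T).GetProperties())
        {
            // Only consider properties carrying the marker attribute.
            if (!Attribute.IsDefined(prop, typeof(LoggableAttribute)))
                continue;

            object oldValue = prop.GetValue(existing, null);
            object newValue = prop.GetValue(modified, null);

            if (!object.Equals(oldValue, newValue))
                yield return string.Format("{0} updated from '{1}' to '{2}'",
                    prop.Name, oldValue, newValue);
        }
    }
}

Marking a property is then a one-liner, e.g. [Loggable] public DateTime DateToMarket { get; set; }.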
Do you use Entity Framework? It supports INotifyPropertyChanged, which could be used:
How to raise an event on Property Change?
If not, your classes could implement INotifyPropertyChanged themselves. While not great (you have to write getters/setters explicitly), it provides better decoupling than invoking a logging facility in the properties directly (what if your logging is not available?). A minimal sketch follows.
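A minimal sketch of that route on the question's class (note that the standard PropertyChangedEventArgs carries only the property name, so old values would have to be captured separately):

using System.ComponentModel;

// Sketch: each setter raises PropertyChanged; a subscriber can then
// record which properties changed and write the log entries in batches.
public class ProductionLineItem : INotifyPropertyChanged
{
    private string handledBy;

    public event PropertyChangedEventHandler PropertyChanged;

    public string HandledBy
    {
        get { return handledBy; }
        set
        {
            if (handledBy == value) return;
            handledBy = value;
            OnPropertyChanged("HandledBy");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// Subscribing:
// item.PropertyChanged += (s, e) => Console.WriteLine(e.PropertyName + " changed");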
I would be worried about performance issues, so I might store logs and only write once in a while...
Well, first, you've done more than the requirement, in that you are only changing the existing item's properties if they are different.
Adding a new method to your class, e.g. LogDifferences(ProductionLineItem oldItem, ProductionLineItem newItem), and calling it from UpdateProductionRecordWithLog would be good.
Personally I'd be going back to the customer and asking what they are really trying to do and why; what they asked for smacks more of solution than requirement.
E.g. just log the old record and the new record, like a DB transaction log, and do the analysis of what changed when it's required.
One last possibility, which admittedly might cause more problems than it solves, is storing the values of the properties in, say, a Dictionary<string, dynamic> instead of discrete members.
Then logging changes based on a comparison like Existing["DateToMarket"] != Modified["DateToMarket"] is fairly trivial (see the sketch below).
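A quick sketch of that dictionary-backed idea (snapshots named existing and modified; values typed as object rather than dynamic for simplicity):

using System;
using System.Collections.Generic;

// Sketch: with property values keyed by name, the diff is one loop.
Dictionary<string, object> existing = new Dictionary<string, object>
{
    { "HandledBy", "Mr. John" },
    { "DateToMarket", new DateTime(2013, 1, 1) }
};

Dictionary<string, object> modified = new Dictionary<string, object>
{
    { "HandledBy", "Mr. Smith" },
    { "DateToMarket", new DateTime(2014, 1, 1) }
};

foreach (KeyValuePair<string, object> pair in existing)
{
    object newValue = modified[pair.Key];
    if (!object.Equals(pair.Value, newValue))
        Console.WriteLine("{0} updated from '{1}' to '{2}'",
            pair.Key, pair.Value, newValue);
}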
In my experience, many validation frameworks in .NET allow you to validate a single field at a time, for things like ensuring a field is a postal code or an email address. I usually call these within-field edits.
In my project we often have to do between-field edits, though. For instance, if you have a class like this:
public class Range
{
    public int Min { get; set; }
    public int Max { get; set; }
}
you might want to ensure that Max is greater than Min. You might also want to do some validation against an external object. For instance given you have a class like this:
public class Person
{
    public string PostalCode { get; set; }
}
and for whatever reason you want to ensure that the postal code exists in a database or in a file provided to you. I have more complex examples, such as one where a user provides a data dictionary and you want to validate your object against that data dictionary.
My question is: can we use any of the existing validation frameworks for .NET (TNValidate, NHibernate Validator), or do we need a rules engine, or what? How do you people in the real world deal with this situation? :-)
There's only one validation framework that I know well and that is Enterprise Library Validation Application Block, or VAB for short. I will answer your questions from the context of the VAB.
First question: Can you do state (between-field) validation in VAB?
Yes you can. There are multiple ways to do this. You can opt for the self-validation mechanism, as follows:
[HasSelfValidation]
public class Range
{
    public int Min { get; set; }
    public int Max { get; set; }

    [SelfValidation]
    public void ValidateRange(ValidationResults results)
    {
        if (this.Max < this.Min)
        {
            results.AddResult(
                new ValidationResult("Max less than min", this, "", "", null));
        }
    }
}
I must say I personally don't like this type of validation, especially when validating my domain entities, because I like to keep my validation logic separate from my domain logic (and keep my domain classes free from references to any validation framework). However, it needs considerably less code than the alternative, which is writing a custom validator class. Here's an example:
[ConfigurationElementType(typeof(CustomValidatorData))]
public sealed class RangeValidator : Validator
{
    public RangeValidator(NameValueCollection attributes)
        : base(string.Empty, string.Empty) { }

    protected override string DefaultMessageTemplate
    {
        get { throw new NotImplementedException(); }
    }

    protected override void DoValidate(object objectToValidate,
        object currentTarget, string key, ValidationResults results)
    {
        Range range = (Range)currentTarget;

        if (range.Max < range.Min)
        {
            this.LogValidationResult(results,
                "Max less than min", currentTarget, key);
        }
    }
}
After writing this class you can hook this class up in your validation configuration file like this:
<validation>
  <type name="Range" defaultRuleset="Default" assemblyName="[Range Assembly]">
    <ruleset name="Default">
      <validator type="[Namespace].RangeValidator, [Validator Assembly]"
          name="Range Validator" />
    </ruleset>
  </type>
</validation>
Second question: How to do complex validations, with possible interaction with a database (with VAB)?
The examples I gave for the first question are also usable here. You can use the same techniques: self-validation and a custom validator. Your scenario where you want to check a value in a database is actually a simple one, because the validity of your object is not based on its context. You can simply check the state of the object against the database. It gets more complicated when the context in which an object lives becomes important (but it is possible with VAB). Imagine, for instance, that you want to write a validation that ensures that every customer, at a given moment in time, has no more than two unshipped orders. This not only means that you have to check the database, but perhaps also account for new orders that are added or orders that are deleted within that same context. This problem is not VAB-specific; you will have the same problem with every framework you choose. I've written an article that describes the complexities we're facing in these situations (read and shiver).
Third question: How do you people in the real world deal with this situation?
I do these types of validation with the VAB in production code. It works great, but VAB is not very easy to learn. Still, I love what we can do with VAB, and it will only get better when v5.0 comes out. When you want to learn it, start by reading the ValidationHOL.pdf document that you can find in the Hands-On Labs download.
I hope this helps.
I build custom validation controls when I need anything that's not included out of the box. The nice thing here is that these custom validators are re-usable and they can act on multiple fields. Here's an example I posted to CodeProject of an AtLeastOneOf validator that lets you require that at least one field in a group has a value:
http://www.codeproject.com/KB/validation/AtLeastOneOfValidator.aspx
The code included in the download should work as an easy-to-follow sample of how you could go about it. The downside here is that the validation controls included with ASP.NET often don't work well with ASP.NET AJAX.
I'm part of a team tasked with revamping our old VB6 UI/COBOL database application for modern times. Before I was hired, the decision was made (largely on sales, I'm sure) to redo the UI before the database. So now we're using WPF and MVVM to great effect; it's been amazing so far, especially using CSLA as our Model layer.
However, because our development is side-by-side with the next version of the old product, we're constrained a bit. We can't make any changes (or minimal changes) to the calls made to the COBOL database. This has been fine so far, albeit pining back to the glory days of SQL Server if you can believe it.
Where I've hit a particularly nasty roadblock regarding our BO design is in dealing with "light" business objects returned in lists and their "full" counterparts. Let me try and construct an example:
Let's say we have a person object in the DB with a bunch of fields. When we do a search on that table, we don't return all the fields, so we populate our lite object with these. These fields may or may not be a subset of the full person. We may have done a join or two to retrieve some other information specific to the search. But if we want to edit our person object, we have to make another call to get the full version to populate the UI.
This leaves us with two objects, and we're attempting to juggle their state in one VM, all the while trying to keep the person list in sync on whatever parent object it sits on after delete, edit, and add.
Originally, I made our lite person object derive from ReadOnlyBase<>. But now that I'm dealing with the same list behavior you'd have with a list of full BOs, except with half full and half lite, I'm thinking I should've just made both the lite and full versions derive from BusinessBase<> and simply made the lite version's setter properties private.
Has anyone else out there come across and found a solution for this? After sleeping on it, I've come up with this potential solution. What if we wrap the full and lite version of our BO in another BO, like this:
public class PersonFull : BusinessBase<PersonFull>
{
    ...
}

public class PersonLite : BusinessBase<PersonLite>
{
    ...
}

public class Person : BusinessBase<Person>
{
    public PersonFull PersonFull;
    public PersonLite PersonLite;
}

public class PersonList : BusinessListBase<PersonList, Person>
{
}
Obviously everything would be CSLA registered properties and such, but for the sake of brevity they're fields there. In this case Person and PersonList would hold all the factory methods. After a search operation PersonList would be populated by Person objects whose PersonLite members were all populated and the PersonFull objects were all null. If we needed to get the full version, we simply tell the Person object to do so, and now we have our PersonFull object so we can populate the edit UI. If the Person object is to be deleted, we can easily do this with the CSLA delete procedures in place, while still maintaining the integrity of our lists across all the VMs that are listening to it.
So, I hope this made sense to everyone, and if anyone has a different solution they've successfully employed or criticism of this one, by all means!
Thanks!
(Reposted from: http://forums.lhotka.net/forums/thread/35576.aspx)
public class PersonLite : ReadOnlyBase<PersonLite>
{
    public void Update(PersonFull person) { }
}

public class PersonFull : BusinessBase<PersonFull>
{
    // blah blah
}
I would update the "lite" object with the changes made to the "full" object, and leave it as ReadOnlyBase. It's important to remember that the "ReadOnly" in ReadOnlyBase means an object that is only read from the database, and never saved to it. A less elegant, but more accurate name would be NotSavableBase, because such objects lack the DataPortal_XYZ machinery for anything but fetches. For obvious reasons, such objects usually have immutable properties, but they don't have to. ReadOnlyBase derives from Core.BindableBase and implements INotifyPropertyChanged, so changing the values of its properties will work just fine with binding.
When you save your "full" object, you pass the newly saved instance to the Update(PersonFull) method of the instance that sits in your list, and update the properties of the "lite" object from the "full" object.
I've used this technique many times and it works just fine.
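For completeness, a sketch of what such an Update method might contain (property names are placeholders; since ReadOnlyBase derives from Core.BindableBase and implements INotifyPropertyChanged, raising change notifications keeps bound lists fresh):

// Illustrative: copy the fields the "lite" object exposes from the
// freshly saved "full" object, then notify any bound UI.
public void Update(PersonFull person)
{
    this.name = person.Name;
    this.city = person.City;

    OnPropertyChanged("Name");
    OnPropertyChanged("City");
}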
If you look over Rocky's examples that come with the CSLA framework, you'll notice that he always separates the read-only objects from the read/write objects. I think this is done for good reason, because the behaviors are going to be drastically different. Read-only objects will be more performance-oriented, their validation will be very different, and they usually carry less information altogether. The read/write objects will not be as performance-oriented and rely heavily on validation, authorization, etc.
However, that leaves you with the dilemma you currently find yourself in. What I would do is overload the constructor of each class so you can pass them between each other and "copy" what you need out of each other.
Something like this:
public class PersonLite : BusinessBase<PersonLite>
{
    public PersonLite(PersonFull fullPerson)
    {
        //copy from fullPerson's properties or whatever
    }
}

public class PersonFull : BusinessBase<PersonFull>
{
    public PersonFull(PersonLite litePerson)
    {
        //copy from litePerson's properties or whatever
    }
}
You could do this with a factory pattern as well, which is Rocky's preference I believe.
I'm currently refactoring some code on a project that is wrapping up, and I ended up putting a lot of business logic into service classes rather than into the domain objects. At this point most of the domain objects are data containers only. I had decided to write most of the business logic in service objects and refactor everything afterwards into better, more reusable, and more readable shapes. That way I could decide which code should be placed into domain objects, which code should be spun off into new objects of its own, and which code should be left in a service class. So I have some code:
public decimal CaculateBatchTotal(VendorApplicationBatch batch)
{
    IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);

    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;

    return total;
}
This code seems like it would make a good addition to a domain object, because its only input parameter is the domain object itself. It seems like a perfect candidate for some refactoring. The only problem is that this object calls another object's repository, which makes me want to leave it in the service class.
My questions are thus:
Where would you put this code?
Would you break this function up?
Where would someone who's following strict Domain-Driven design put it?
Why?
Thanks for your time.
Edit Note: Can't use an ORM on this one, so I can't use a lazy loading solution.
Edit Note2: I can't alter the constructor to take in parameters, because of how the would-be data layer instantiates the domain objects using reflection (not my idea).
Edit Note3: I don't believe that a batch object should be able to total just any list of applications, it seems like it should only be able to total applications that are in that particular batch. Otherwise, it makes more sense to me to leave the function in the service class.
You shouldn't even have access to the repositories from the domain object.
What you can do is either let the service give the domain object the appropriate info or have a delegate in the domain object which is set by a service or in the constructor.
// Pseudocode made concrete: inject a delegate that the service wires
// up to the repository, e.g. batchId => AppRepo.GetByBatchId(batchId).
public DomainObject(Func<int, IList<VendorApplication>> getApplicationsByBatchId)
{
    ...
}
I'm no expert on DDD, but I remember an article from the great Jeremy Miller that answered this very question for me. You would typically want logic related to your domain objects inside those objects, but your service class would execute the methods that contain this logic. This helped me push domain-specific logic into the entity classes and keep my service classes less bulky (as I found myself putting too much logic inside the service classes, like you mentioned).
Edit: Example
I use the enterprise library for simple validation, so in the entity class I will set an attribute like so:
[StringLengthValidator(1, 100)]
public string Username {
    get { return mUsername; }
    set { mUsername = value; }
}
The entity inherits from a base class that has the following "IsValid" method that will ensure each object meets the validation criteria
public bool IsValid()
{
    mResults = new ValidationResults();
    Validate(mResults);
    return mResults.IsValid();
}

[SelfValidation()]
public virtual void Validate(ValidationResults results)
{
    if (!object.ReferenceEquals(this.GetType(), typeof(BusinessBase<T>))) {
        Validator validator = ValidationFactory.CreateValidator(this.GetType());
        results.AddAllResults(validator.Validate(this));
    }

    //before we return the bool value, if we have any validation results map them into the
    //broken rules property so the parent class can display them to the end user
    if (!results.IsValid()) {
        mBrokenRules = new List<BrokenRule>();
        foreach (Microsoft.Practices.EnterpriseLibrary.Validation.ValidationResult result in results) {
            mRule = new BrokenRule();
            mRule.Message = result.Message;
            mRule.PropertyName = result.Key.ToString();
            mBrokenRules.Add(mRule);
        }
    }
}
Next we need to execute this "IsValid" method in the service class save method, like so:
public void SaveUser(User UserObject)
{
    if (UserObject.IsValid()) {
        mRepository.SaveUser(UserObject);
    }
}
A more complex example might be a bank account. The deposit logic will live inside the account object, but the service class will call this method.
Why not pass in an IList<VendorApplication> as the parameter instead of a VendorApplicationBatch? The calling code for this would presumably come from a service which has access to the AppRepo. That way your repository access will be up where it belongs, while your domain function can remain blissfully ignorant of where that data came from. (A sketch follows.)
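A sketch of that refactoring (keeping the question's method name; the service does the fetching):

// Domain side: pure calculation, no repository access.
public decimal CaculateBatchTotal(IList<VendorApplication> applications)
{
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;

    return total;
}

// Service side:
// IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);
// decimal total = CaculateBatchTotal(applications);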
As I understand it (not enough info to know if this is the right design) VendorApplicationBatch should contain a lazy loaded IList inside the domain object, and the logic should stay in the domain.
For Example (air code):
public class VendorApplicationBatch {
    private IList<VendorApplication> Applications { get; set; }

    public decimal CaculateBatchTotal()
    {
        if (Applications == null || Applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in Applications)
            total += app.Amount;

        return total;
    }
}
This is easily done with an ORM like NHibernate and I think it would be the best solution.
It seems to me that your CalculateTotal is a service for collections of VendorApplications, and that returning the collection of VendorApplications for a Batch fits naturally as a property of the Batch class. So some other service/controller/whatever would retrieve the appropriate collection of VendorApplications from a batch and pass it to the VendorApplicationTotalCalculator service (or something similar). But that may break some DDD aggregate-root service rules or some such thing I'm ignorant of (DDD novice).