I have debated this for a while now and still have not come to a conclusion.
While most examples I see put the factory code in the application layer, I tend to think it belongs in the domain layer.
Reasons for this:
I sometimes have initial validation done in my factory, and I want all object creation to go through it.
I want this code to run for every instantiation of my object.
Sometimes an operation requires parameter information that feels unnatural to pass to a constructor.
And a few more, less important reasons.
Are there reasons why this is a bad practice?
Does this break other patterns?
A factory in DDD is just an instance of the factory pattern, and as such it should be used where it makes the most sense. Another principle to consider is the information expert pattern, which essentially states that behavior should be assigned to the class closest to the information. Therefore, if you have domain-specific rules and logic you would like to enforce, place the factory in the domain layer - after all, the factory creates domain objects. Note, however, that you may have other types of factories in other layers.
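To make that concrete, here is a minimal sketch of a domain-layer factory that funnels all creation through one validated path. The Order/OrderLine names and the rule being enforced are hypothetical, not from the question:
using System;
using System.Collections.Generic;
using System.Linq;
public class OrderLine
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}
public class Order
{
    public Guid CustomerId { get; private set; }
    public IReadOnlyList<OrderLine> Lines { get; private set; }
    // Internal constructor: outside callers are funneled through the factory.
    internal Order(Guid customerId, IReadOnlyList<OrderLine> lines)
    {
        CustomerId = customerId;
        Lines = lines;
    }
}
public static class OrderFactory
{
    // The creation rule lives next to the domain objects it protects:
    // an order must start with at least one line.
    public static Order Create(Guid customerId, IEnumerable<OrderLine> lines)
    {
        var list = (lines ?? Enumerable.Empty<OrderLine>()).ToList();
        if (list.Count == 0)
            throw new ArgumentException("An order must contain at least one line.", "lines");
        return new Order(customerId, list);
    }
}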
From memory, Eric Evans' book has examples where object factories are very much part of the domain layer.
For me, it makes perfect sense to locate your factories here.
+1 for doing that. Accessibility would be a good reason: I would keep the creational code at least close to the domain model layer. Otherwise users of the domain model will simply get confused about how to instantiate it, especially when they find constructors with restricted access. One sound reason to separate it would be having several valid ways to create the same thing, which is usually the case when employing the Abstract Factory pattern.
If I had to separate it, I would put it in, e.g., a package (in the case of Java) at least at the same level as the domain model, and always ship it along with it, e.g.
upper
--> domain
--> domain_factory
I prefer Factories in the Application Layer.
If you keep the Factories in the Domain Layer, they will not help you when you need complex types as parameters (C# code example):
Application Layer:
//this factory resides in the Domain Layer and cannot reference anything outside it,
//so every piece of data has to arrive as a separate primitive parameter:
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(
    name, code, streetName,
    /* ...and lots of other parameters... */);
//these overloads reside in the Application Layer, so they can accept richer types
//and be much simpler and more readable:
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(createPersonCommand);
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(personDTO);
Domain Layer:
public class Person : Entity<Person>
{
public Address Address {get;private set;}
public Account Account {get;private set;}
public Contact Contact {get;private set;}
public string Name {get;private set;}
public Person(string name, Address address,Account account, Contact contact)
{
//some validations & assigning values...
this.Address = address;
//and so on...
}
}
public class Address:Entity<Address>{
public string Code {get;private set;}
public string StreetName {get;private set;}
public int Number {get;private set;}
public string Complement {get;private set;}
public Address(string code, string streetName, int number, string complement = null)
{
//some validations & assigning values...
this.Code = code;
}
}
public class Account:Entity<Account>{
public int Number {get;private set;}
public Account(int number)
{
//some validations & assigning values...
this.Number = number;
}
}
//you get the idea:
//public class Contact...
Also, there is no obligation to keep Factories inside the Domain Layer (from Domain Driven Design Quickly):
Therefore, shift the responsibility for creating instances of complex
objects and Aggregates to a separate object, which may itself have
no responsibility in the domain model but is still part of the
domain design. Provide an interface that encapsulates all complex
assembly and that does not require the client to reference the
concrete classes of the objects being instantiated. Create entire
Aggregates as a unit, enforcing their invariants.
As I don't use Factories to load persisted objects into memory, they don't have to be accessible from any layer other than the Application Layer. Here's why (from Domain Driven Design Quickly):
Another observation is that Factories need to create new objects
from scratch, or they are required to reconstitute objects which
previously existed, but have been probably persisted to a
database. Bringing Entities back into memory from their resting
place in a database involves a completely different process than
creating a new one. One obvious difference is that the new
object does not need a new identity. The object already has one.
Violations of the invariants are treated differently. When a new
object is created from scratch, any violation of invariants ends
up in an exception. We can’t do that with objects recreated from
a database. The objects need to be repaired somehow, so they
can be functional, otherwise there is data loss.
If builders/factories only have dependencies on domain classes and primitives, place them in the domain layer, otherwise place them outside the domain layer.
Be CAREFUL with placing the 'implementation' in the Domain Layer.
Your domain code has no dependencies, so you are in trouble if you need complex factories.
For example:
// DOMAIN LAYER
public interface IAggregateFactory<TAgg, in TInput>
{
Task<TAgg> CreateAsync(TInput input);
}
public class GamePredictorFactoryParameters
{
public string SomeInputParameter { get; set; }
public string ZipCode { get; set; }
}
// INFRASTRUCTURE/APPLICATION LAYER
public class GamePredictorFactory : IAggregateFactory<GamePredictorAggregate,
GamePredictorFactoryParameters>
{
private readonly HttpClient _httpClient;
public GamePredictorFactory(IHttpClientFactory factory)
{
_httpClient = factory.CreateClient("weatherApi");
}
public async Task<GamePredictorAggregate> CreateAsync(GamePredictorFactoryParameters input)
{
var weather = await _httpClient.GetFromJsonAsync<WeatherDto>($"/weather/{input.ZipCode}");
return new GamePredictorAggregate(weather.CurrentTemperature, input.SomeInputParameter);
}
}
public class WeatherDto
{
public double CurrentTemperature { get; set; }
}
As you can see, now you have a myriad of objects and dependencies available to enrich your factory experience.
So, when you use it in your Application Service, it is easy...
public class GamePredictionService : ApplicationService
{
private readonly IAggregateFactory<GamePredictorAggregate, GamePredictorFactoryParameters> _factory;
public GamePredictionService(IAggregateFactory<GamePredictorAggregate, GamePredictorFactoryParameters> factory)
{
_factory = factory;
}
public async Task CreateNewPredictor(string zipCode, string someOtherParameter)
{
var input = new GamePredictorFactoryParameters();
input.ZipCode = zipCode;
input.SomeInputParameter = someOtherParameter;
var aggregate = await _factory.CreateAsync(input);
// Do your biz operations
// Persist using repository
}
}
Now your application service doesn't need to worry about the internals, and your domain objects don't need to understand how the factory gives them 'birth.'
Summary: Having your factory implementation in the Domain Layer only makes sense if the factory needs nothing but primitive types and other domain objects. In cases where you need to gather data from external services or other application services' DTOs, you want to move the implementation outside.
The only drawback is that you need to inject the factory into your application service, but that's not a big deal.
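For contrast, a factory that needs nothing but primitives and domain types can stay in the domain layer. Here is a minimal sketch reusing the Person, Address, and Account types from the example above (the validation shown is illustrative):
// Domain Layer: no infrastructure dependencies, only primitives and domain types.
public static class PersonFactory
{
    public static Person Create(string name, string code, string streetName, int number, int accountNumber)
    {
        // Validation that belongs to the domain can live here.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.", "name");
        var address = new Address(code, streetName, number);
        var account = new Account(accountNumber);
        return new Person(name, address, account, null);
    }
}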
I hope this answer helps clarify where to place Factories.
Let's say I have an interface like this:
public interface IUser
{
int Id { get; }
string Name { get; }
List<IMonthlyBudget> MonthlyBudget { get; }
}
and then I have a model that implements this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<IMonthlyBudget> MonthlyBudget { get; set; }
}
and here I have the IMonthlyBudget:
public interface IMonthlyBudget
{
int Id { get; }
float MonthlyMax { get; }
float CurrentSpending { get; }
float MonthlyIncome { get; }
}
Now I have my models. But the issue comes with using SQLite. SQLite can't know what the real implementation of IMonthlyBudget is. I understand why, but I really don't want to remove the interface and expose the real implementation to all the clients that use these models. In my project structure I have a Core project that has all the model interfaces, and the model implementations are in a data access project.
Is there something wrong with how I'm approaching this problem? I assume I'm not the first one to run into an issue like this. Isn't it completely normal practice to keep model interfaces (which repositories etc. then use as their return types, parameters and so on) and implement the actual concrete models in a data access project?
And can someone explain why I can't do this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<MonthlyBudget> MonthlyBudget { get; set; }
}
MonthlyBudget implements IMonthlyBudget, so shouldn't it be completely fine to use the concrete model as the type instead of the interface, given that the concrete model actually implements the interface?
A few questions here, so I'll break it down into sections:
Use of Interfaces
It is definitely good practice to interface classes that perform operations. For example, you may have a data service (i.e. data access layer) interface that allows you to do operations to read and modify data in your persistent store. However, you may have several implementations of that data service. One implementation may save to the file system, another to a DBMS, another is a mock for unit testing, etc.
However, in many cases you do not need to interface your model classes. If you're using an anemic business object approach (as opposed to rich business objects), then model classes in general should just be containers for data, or Plain Old CLR Objects (POCO). Meaning these objects don't have any real functionality to speak of and they don't reference any special libraries or classes. The only "functionality" I would put in a POCO is one that is dependent only upon itself. For example, if you have a User object that has a FirstName and LastName property, you could create a read-only property called FullName that returns a concatenation of the two.
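A minimal sketch of such a POCO, where the only "functionality" depends purely on the object's own state:
public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // Safe in a POCO: it depends only on the object's own properties.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}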
POCOs are agnostic as to how they are populated and therefore can be utilized in any implementation of your data service.
This should be your default direction when using an anemic business object approach, but there is at least one exception I can think of where you may want to interface your models. You may want to support for example a SQLite data service, and a Realm (NoSQL) data service. Realm objects happen to require your models to derive from RealmObject. So, if you wanted to switch your data access layer between SQLite and Realm then you would have to interface your models as you are doing. I'm just using Realm as an example, but this would also hold true if you wanted to utilize your models across other platforms, like creating an observable base class in a UWP app for example.
The key litmus test to determining whether you should create interfaces for your models is to ask yourself this question:
"Will I need to consume these models in various consumers and will those consumers require me to define a specific base class for my models to work properly in those consumers?"
If the answer to this is "yes", then you should make interfaces for your models. If the answer is "no", then creating model interfaces is extraneous work and you can forego it and let your data service implementations deal with the specifics of their underlying data stores.
SQLite Issue
Whether you continue to use model interfaces or not, you should still have a data access implementation for SQLite which knows that it's dealing with SQLite-specific models and then you can do all your CRUD operations directly on those specific implementations of your model. Then since you're referring to a specific model implementation, SQLite should work as usual.
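As a rough sketch of what that data service could look like, assuming the sqlite-net library (SQLiteConnection, CreateTable, Table, Insert and the mapping attributes are from that library; the service shape itself is illustrative):
using System.Collections.Generic;
using System.Linq;
using SQLite; // sqlite-net, assumed here for illustration
// SQLite-specific concrete model, mapped with sqlite-net attributes.
public class MonthlyBudget : IMonthlyBudget
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public float MonthlyMax { get; set; }
    public float CurrentSpending { get; set; }
    public float MonthlyIncome { get; set; }
}
// The data service works against the concrete type internally,
// but can still expose the interface to its callers.
public class SqliteBudgetService
{
    private readonly SQLiteConnection _db;
    public SqliteBudgetService(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<MonthlyBudget>();
    }
    public void Add(MonthlyBudget budget)
    {
        _db.Insert(budget);
    }
    public List<IMonthlyBudget> GetAll()
    {
        return _db.Table<MonthlyBudget>().Cast<IMonthlyBudget>().ToList();
    }
}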
Type Compatibility
To answer your final question: the type system does not see this...
List<IMonthlyBudget> MonthlyBudget
as being type-compatible with this...
List<MonthlyBudget> MonthlyBudget
In our minds it seems like if I have a list of apples, then it should be type-compatible with a list of fruit. The compiler sees an apple as a type of fruit, but not a list of apples as a type of a list of fruit. So you can't cast between them like this...
List<IMonthlyBudget> myMonthlyBudget = (List<IMonthlyBudget>) new List<MonthlyBudget>();
but you CAN add a MonthlyBudget object to a list of IMonthlyBudget objects like this...
List<IMonthlyBudget> myMonthlyBudget = new List<IMonthlyBudget>();
myMonthlyBudget.Add(new MonthlyBudget());
Also you can use the LINQ .Cast() method if you want to cast an entire list at once.
The reason behind this has to do with type variance. There's a good article on it here that can shed some light as to why:
Covariance and Contravariance
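A short illustration of the variance rules, using the MonthlyBudget/IMonthlyBudget types from the question (IEnumerable<T> is covariant, List<T> is not):
using System.Collections.Generic;
using System.Linq;
public static class VarianceDemo
{
    public static void Run()
    {
        var budgets = new List<MonthlyBudget> { new MonthlyBudget() };
        // Compiles: IEnumerable<out T> is covariant, so a sequence of MonthlyBudget
        // can be viewed as a sequence of IMonthlyBudget.
        IEnumerable<IMonthlyBudget> asInterfaces = budgets;
        // Does not compile: List<T> is invariant.
        // List<IMonthlyBudget> broken = budgets;
        // Workaround: build a new list, casting each element.
        List<IMonthlyBudget> copy = budgets.Cast<IMonthlyBudget>().ToList();
    }
}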
I hope that helps! :-)
I'm learning about Domain-Driven Design and I'm a little confused about entities and injecting domain services into them. I have found this blog post, whose conclusion is that injecting services into entities is a bad idea. I partially agree with that, but what to do in this case:
I have a User entity, which is an aggregate root and holds a Password value object. It looks like this:
Password value object:
public class Password
{
public string Hash { get; private set; }
public string Salt { get; private set; }
private readonly IHashGeneratorService _hashGeneratorService;
public Password(IHashGeneratorService hashGeneratorService)
{
_hashGeneratorService = hashGeneratorService;
}
public void GenerateHash(string inputString)
{
//Some logic
Salt = _hashGeneratorService.GenerateSalt();
Hash = _hashGeneratorService.GenerateHash(inputString);
}
}
User entity:
public class User
{
public Password Password { get; private set; }
public User(IHashGeneratorService hashGeneratorService)
{
this.Password = new Password(hashGeneratorService);
}
}
In this case, if I create a User entity via a factory, I need to provide an IHashGeneratorService implementation to the factory constructor or its Create() method. After that, if my factory is used by, for example, SomeUserService, I have to provide the implementation to that service as well (e.g. via constructor injection). And so on...
Honestly it smells to me, as a lot of my classes become dependent on the hash generator service implementation even though only the Password class uses it. And I assume it also breaks the SRP for the Password class.
I've found some possible solutions:
Use a service locator. But that also smells, as it's an anti-pattern, and entities are hard to test and manage if we use it.
Implement the hashing algorithm directly inside Password methods.
Stay with what I have :) The cons are mentioned above; the pros are that my classes are easier to test, as I can provide a mocked service instead of the full implementation.
Personally I tend to refactor my code toward the second solution, as it does not break the SRP (or does it? :)) and my classes are no longer dependent on the hashing service implementation. Anything else to consider?
Or do you have other solutions?
I am quite new to DDD; however, I believe that hashing passwords is not a concern of the domain but a technical concern, just like persistence. The hash service should have its interface defined in the domain, but its implementation in the infrastructure layer. The application service would then use the hash service to hash the password and create a Password instance (which should be a value object) before passing it to the User aggregate root.
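A minimal sketch of that arrangement (the service class and constructor shapes are illustrative, not from the question):
// Domain Layer: only the contract is known here.
public interface IHashGeneratorService
{
    string GenerateSalt();
    string GenerateHash(string inputString);
}
// Domain Layer: Password becomes an immutable value object with no service dependency.
public class Password
{
    public string Hash { get; private set; }
    public string Salt { get; private set; }
    public Password(string hash, string salt)
    {
        Hash = hash;
        Salt = salt;
    }
}
public class User
{
    public Password Password { get; private set; }
    public User(Password password)
    {
        Password = password;
    }
}
// Application Layer: the hashing happens here, before the aggregate is touched.
public class RegisterUserService
{
    private readonly IHashGeneratorService _hashGenerator;
    public RegisterUserService(IHashGeneratorService hashGenerator)
    {
        _hashGenerator = hashGenerator;
    }
    public User Register(string plainTextPassword)
    {
        var salt = _hashGenerator.GenerateSalt();
        var hash = _hashGenerator.GenerateHash(plainTextPassword + salt);
        return new User(new Password(hash, salt));
    }
}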
There might be cases where an aggregate has to use a service like when the dependency resolutions are very complex and domain-specific. In this case, the application service could pass a domain service into the aggregate method. The aggregate would then double-dispatch to the service to resolve references.
For more information you can read the Implementing Domain-Driven Design book written by Vaughn Vernon. He speaks about this around page 362 (Model Navigation), but also at a few other places in the book.
I don't know why you are considering only constructor injection. AFAIK, it's a common feature of DI containers to inject properties or fields as well. E.g., using MEF you could write something like this:
class SomeUserService : ISomeUserService
{
[Import]
private IHashGeneratorService hashGeneratorService { get; set; }
// ...
}
and inject the dependency only into those types where you really need it.
I am writing a "Domain Object" --> "Assembler" --> "Data Transfer Object" (DTO) pattern into my shared library to allow the presentation tier and the service layer to communicate through DTOs. I have avoided any shared interfaces to allow coarse-grained aggregation in the DTO. I have a solid grasp of the "CreateDTO" methods, but am wondering how one implements the UpdateDomainObject(DTO dto) methods in the Assembler in C#. I am considering the following structure to simplify my code:
public class SomeAssembler
{
public static SomeDTO CreateDTO(SomeDomainObject obj)
{
var dto = new SomeDTO();
dto.Property1 = obj.Property1;
//...
return dto;
}
//METHOD IN QUESTION:
public static void UpdateDomainObject(SomeDTO dto, SomeDomainObject obj)
{ obj.Property1 = dto.Property1; /*...*/ }
}
The reason I include a Domain Object parameter in the method is to allow code like below:
//Presentation Layer Code (PL signifies presentation layer type)
public class ContactPL : IContact //(IContact is an Entity library contract shared across layers in a separate DLL)
{
#region Properties
//Note Mixed Types, so params not an option
public int Id {get;set;}
public string FirstName {get;set;}
public string LastName {get;set;}
public string PhoneNumber {get;set;}
public Address address {get;set;}
#endregion
#region Methods
//METHOD IN QUESTION:
public void GetContact(int id)
{
ContactDTO dto = ContactService.GetContactbyId(id);
ContactAssembler.UpdateContact(dto, this);
}
// Other Methods...
#endregion
}
Please note the methods marked above as "//METHOD IN QUESTION:". My question: is this structure acceptable, and/or are there any concerns with writing the code this way?
I know how Java deals with these issues (see the example below). I want to make sure that my Assembler method for "UpdateDomainObject" and its inclusion in the Presentation Layer model object would be acceptable or ideal. If not, any ideas for better ways to skin the cat?
Java Example - Assembler Method for Update from DTO (only for comparison - Answer in C# terms please):
public static void updateCustomer(CustomerDTO dto) {
Customer target = null;
for(Customer c: Domain.customers) {
if (dto.name.equals(c.getName())) {
target = c;
break;
}
}
if (target != null) {
target.setAddress(dto.address);
target.setPhone(dto.phone);
}
}
By including both the DTO and the Domain Object in the 'Update' method, you are forcing the presentation layer to deal with both DTOs and Domain Objects.
It would be simpler to have the presentation layer deal with only the kind of objects that are relevant to it, whether that be DTOs, Domain Objects, or a separate presentation-layer-specific class of objects, and let the assembler take care of the translations.
Your Java example is a good example of isolating this kind of translation to the assembler.
The benefit of this kind of isolation is a reduction in coupling in your code, which in turn makes your code more flexible (i.e. easier to change), as well as a reduction in complexity, which in turn makes your code easier to maintain.
I am a newbie to SOA, though I have some experience in OOAD.
One of the guidelines for SOA design is “Use Abstract Classes for Modeling only. Omit them from Design”. The use of abstraction can be helpful in modeling (analysis phase).
During analysis phase I have come up with a BankAccount base class. The specialized classes derived from it are “FixedAccount” and “SavingsAccount”. I need to create a service that will return all accounts (list of accounts) for a user. What should be the structure of service(s) to meet the requirement?
Note: It would be great if you can provide code demonstration using WCF.
It sounds like you are trying to use SOA to remotely access your object model. You would be better off looking at the interactions and capabilities you want your service to expose, and avoid exposing the inheritance details of your service's implementation.
So in this instance, where you need a list of user accounts, your interface would look something like:
[ServiceContract]
interface ISomeService
{
[OperationContract]
Collection<AccountSummary> ListAccountsForUser(
User user /*This information could be out of band in a claim*/);
}
[DataContract]
class AccountSummary
{
[DataMember]
public string AccountNumber {get;set;}
[DataMember]
public string AccountType {get;set;}
//Other account summary information
}
If you do decide to go down the inheritance route, you can use the KnownType attribute, but be aware that this will add some type information to the message being sent across the wire, which may limit your interoperability in some cases.
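For illustration, [KnownType] sits on the base data contract like this (the attribute is standard WCF; the members on the derived accounts are made up for the sketch):
using System;
using System.Runtime.Serialization;
// The base contract declares which concrete types the serializer may encounter.
[DataContract]
[KnownType(typeof(FixedAccount))]
[KnownType(typeof(SavingsAccount))]
public class BankAccount
{
    [DataMember]
    public string AccountNumber { get; set; }
}
[DataContract]
public class FixedAccount : BankAccount
{
    [DataMember]
    public DateTime MaturityDate { get; set; } // illustrative member
}
[DataContract]
public class SavingsAccount : BankAccount
{
    [DataMember]
    public decimal InterestRate { get; set; } // illustrative member
}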
Update:
I was a bit limited for time earlier when I answered, so I'll try and elaborate on why I prefer this style.
I would not advise exposing your OOAD via DTOs in a separate layer; this usually leads to a bloated interface where you pass around a lot of data that isn't used and religiously map it into and out of what is essentially a copy of your domain model with all the logic deleted, and I just don't see the value. I usually design my service layer around the operations it exposes, and I use DTOs for the definition of the service interactions.
Using DTOs based on exposed operations and not on the domain model helps keep the service encapsulation and reduces coupling to the domain model. By not exposing my domain model, I don't have to make any compromises on field visibility or inheritance for the sake of serialization.
For example, if I were exposing a Transfer method from one account to another, the service interface would look something like this:
[ServiceContract]
interface ISomeService
{
[OperationContract]
TransferResult Transfer(TransferRequest request);
}
[DataContract]
class TransferRequest
{
[DataMember]
public string FromAccountNumber {get;set;}
[DataMember]
public string ToAccountNumber {get;set;}
[DataMember]
public Money Amount {get;set;}
}
class SomeService : ISomeService
{
public TransferResult Transfer(TransferRequest request)
{
//Check parameters...omitted for clarity
var from = repository.Load<Account>(request.FromAccountNumber);
//Assert that the caller is authorised to request transfer on this account
var to = repository.Load<Account>(request.ToAccountNumber);
from.Transfer(to, request.Amount);
//Build and return an appropriate response (or fault)
}
}
Now, from this interface, it is very clear to the consumer what data is required to call this operation. If I implemented this as:
[ServiceContract]
interface ISomeService
{
[OperationContract]
TransferResult Transfer(AccountDto from, AccountDto to, MoneyDto dto);
}
and AccountDto is a copy of the fields in Account, then, as a consumer, which fields should I populate? All of them? If a new property is added to support a new operation, all users of all operations can now see this property. WCF allows me to mark this property as non-mandatory so that I don't break all of my other clients, but if it is mandatory for the new operation, the client will only find out when it calls the operation.
Worse, as the service implementer, what happens if they have provided me with a current balance? Should I trust it?
The general rule here is to ask who owns the data, the client or the service? If the client owns it, then it can pass it to the service and after doing some basic checks, the service can use it. If the service owns it, the client should only pass enough information for the service to retrieve what it needs. This allows the service to maintain the consistency of the data that it owns.
In this example, the service owns the account information and the key to locate it is an account number. While the service may validate the amount (is positive, supported currency etc.) this is owned by the client and therefore we expect all fields on the DTO to be populated.
In summary, I have seen it done all 3 ways, but designing DTOs around specific operations has been by far the most successful both from service and consumer implementations. It allows operations to evolve independently and is very explicit about what is expected by the service and what will be returned to the client.
I would go pretty much with what others have said here, but would probably add these points:
Most SOA systems use Web Services for communication. Web Services expose their interface via WSDL. WSDL does not have any understanding of inheritance.
All behaviour in your DTOs will be lost when they cross the wire
All private/protected fields will be lost when they cross the wire
Imagine this scenario (case is silly but illustrative):
public abstract class BankAccount
{
private DateTime _creationDate = DateTime.Now;
public DateTime CreationDate
{
get { return _creationDate; }
set { _creationDate = value; }
}
public virtual string CreationDateUniversal
{
get { return _creationDate.ToUniversalTime().ToString(); }
}
}
public class SavingAccount : BankAccount
{
public override string CreationDateUniversal
{
get
{
return base.CreationDateUniversal + " UTC";
}
}
}
And now you have used "Add Service Reference" or "Add Web Reference" on your client (rather than re-using the assemblies) to access the saving account.
SavingAccount account = serviceProxy.GetSavingAccountById(id);
account.CreationDate = DateTime.Now;
var creationDateUniversal = account.CreationDateUniversal; // out of sync!!
What is going to happen is that changes to CreationDate will not be reflected in CreationDateUniversal, since no implementation crossed the wire, only the value of CreationDateUniversal at the time of serialization on the server.
In order to separate concerns, on my current project, I've decided to completely separate my DAL and BLL/Business objects in separate assemblies. I would like to keep my business objects as simple structures without any logic to keep things extremely simple. I would like if I could keep my Business Logic separate from my DAL also. So my application will tell my DAL to load my objects, my DAL will run off to the database and get the data, populate the object with the data and then pass it back to my BLL.
Question - how can I have my DAL in a separate assembly and push data into the read only fields?
If I make the setter protected, then inherited objects can access it, which isn't really what I want, as I'd be returning the inherited object types, not the original object types.
If I make the setter internal, then my DAL must reside in the same assembly as my BLL, which I don't want.
If I make the setter public, then anyone can write to the property when it should be read-only.
Edit: I note that I can have a return type of ObjectBase but actually return an object or collection of objects derived from ObjectBase, so to the outside world (outside my DAL) the properties would be read-only, but to my derived types (only accessible inside my DAL) the properties are actually read/write.
You can set the read-only property via a constructor.
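A minimal sketch of that idea (the Customer name is hypothetical): the property has no setter at all, and the DAL, from any assembly, supplies the value when it constructs the object:
public class Customer
{
    private readonly int _id;
    public int Id { get { return _id; } } // Read-only to everyone after construction
    public string Name { get; set; }
    public Customer(int id)
    {
        _id = id;
    }
}
// In the DAL assembly:
// var customer = new Customer(idFromDatabase) { Name = nameFromDatabase };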
This is a situation without a silver-bullet; the simplest options are limited or don't meet your requirements and the thorough solutions either begin to have smells or begin to veer away from simplicity.
Perhaps the simplest option is one that I haven't seen mentioned here: keeping the fields/properties private and passing them as out/ByRef parameters to the DAL. While it wouldn't work for large numbers of fields, it would be simple for a small number.
(I haven't tested it, but I think it's worth exploring).
public class MyObject
{
private int _Id;
public int Id { get { return _Id; } } // Read-only
public string Name { get; set; }
// This method is essentially a more descriptive constructor, using the repository pattern for separation of Domain and Persistence
public static MyObject GetObjectFromRepo(IRepository repo)
{
MyObject result = new MyObject();
return repo.BuildObject(result, out result._Id); // the out argument aliases the new object's private _Id field
}
}
public class MyRepo : IRepository
{
public MyObject BuildObject(MyObject objectShell, out int id)
{
string objectName;
int objectId;
// Retrieve the Name and Value properties
objectName = "Name from Database";
objectId = 42;
//
objectShell.Name = objectName;
Console.WriteLine(objectShell.Id); // <-- 0, as it hasn't been set yet
id = objectId; // Setting this out parameter indirectly updates the value in the resulting object
Console.WriteLine(objectShell.Id); // <-- Should now be 42
return objectShell;
}
}
It's also worth noting that trying to keep your domain/business objects to the bare minimum can involve more than you think. If you intend to data-bind to them, then you'll need to implement INotifyPropertyChanged, which prevents you from using automatically implemented properties. You should be able to keep it fairly clean, but you will have to make some sacrifices for basic functionality.
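For reference, a minimal sketch of what INotifyPropertyChanged does to an otherwise plain property:
using System.ComponentModel;
public class ObservableUser : INotifyPropertyChanged
{
    private string _name;
    public event PropertyChangedEventHandler PropertyChanged;
    // No longer an auto-implemented property: the setter has to raise the event.
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}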
This keeps your SoC model nicely; it doesn't add too much complexity, it prevents writing to read-only fields, and you could use a very similar model for serialization concerns. Your read-only fields can still be written to by your DAL, as could your serializer if used in a similar fashion. It means that a developer must make a conscious effort to write to a read-only field, which prevents unintentional misuse.
Model Project
namespace Model
{
public class DataObject
{
public int id { get; protected set; }
public string name { get; set; }
}
}
Data Project
namespace Data
{
class DALDataObject : DataObject
{
public DALDataObject(int id, string name)
{
this.id = id;
this.name = name;
}
}
public class Connector
{
public static DataObject LoadDataObject(int objectId)
{
return new DALDataObject(objectId, string.Format("Dummy object {0}", objectId));
}
public static IEnumerable<DataObject> LoadDataObjects(int startRange, int endRange)
{
var list = new List<DataObject>();
for (var i = startRange; i < endRange; i++)
list.Add(new DALDataObject(i, string.Format("Dummy object {0}", i)));
return list;
}
}
}
How about just living with it?
Implement with those guidelines, but don't add such a hard constraint to your model. Let's say you do; then another requirement comes along where you need to serialize it or do something else, and now you are stuck with it.
As you said in another comment, you want pieces that are interchangeable... so, basically, you don't want something that's tied to specific relationships.
Update 1: Perhaps "just live with it" was too simplistic, but I still have to stress that you shouldn't go too deep into these things. Using simple guidelines and keeping your code clean and SOLID is the best you can do at the beginning. It won't get in the way of progress, and refactoring when everything is more settled isn't hard.
Make no mistake, I am not at all a person who writes code without thinking about it. But I have gone with such approaches, and only in a handful of cases did they pay off, with no indication that going simple and evolving it wouldn't have produced a similar result.
IMHO this one does not fit among the important architectural concerns that need to be addressed at the very beginning.
Pre-emptive follow-up: beware if you can't trust your team to follow simple guidelines. Also make sure to begin with some structure: pick a couple of scenarios that set a structure in place with real stuff; the team will know their way around much better when there is something simple there.
In my opinion, the best way to handle this is to have the business objects and the DAL in the same assembly, separated by namespace. This separates the concerns logically and allows you to use internal setters. I can't think of any benefit to separating them into their own assemblies, because one is useless without the other.
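A sketch of that arrangement, with namespaces separating the concerns inside one assembly (the names are illustrative):
namespace MyApp.Model
{
    public class Customer
    {
        // Writable by the DAL because it compiles into the same assembly;
        // read-only to every other assembly.
        public int Id { get; internal set; }
        public string Name { get; internal set; }
    }
}
namespace MyApp.Data
{
    using MyApp.Model;
    public class CustomerRepository
    {
        public Customer Load(int id)
        {
            // Internal setters are accessible here.
            return new Customer { Id = id, Name = "Loaded from the database" };
        }
    }
}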