Alternative for initializing properties in the constructor in (Dynamics) CRM - c#

I'm currently working on a custom CRM-style solution (EF/WinForms/OData WebApi) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined on the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible and even set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
    public Task(Project p) {
        this.Responsible = p.DefaultTaskResponsible;
        ...
    }
}
But how should I implement something like this in a CRM-World with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field. It does not make sense to use a custom Task constructor.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (not sure)?! But how should I deal with the WebApi/OData Client?
If I receive a POST to the Task endpoint without a Responsible, I would like to use the DefaultTaskResponsible, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)"
}.
No Responsible was sent (maybe because it is an older client), so use the default one. But if a Responsible is set, the passed value should be used instead, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)",
"responsible#odata.bind": null
}.
In my TaskController I only see the Task model with the Responsible being null, but I don't know whether it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?

This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
    this.Responsible = parent.DefaultResponsible; //A property of ITaskEntity
}
This logic should be enforced at the database "pre operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation, then persist the task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
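To make that concrete, here is a minimal sketch of such a pre-operation step in a hand-rolled pipeline. Every type and member name is illustrative (TaskEntity, IProjectRepository, ProjectId), not taken from Dynamics CRM or any particular framework:
public class DefaultTaskResponsibleStep
{
    private readonly IProjectRepository _projects; // assumed repository abstraction

    public DefaultTaskResponsibleStep(IProjectRepository projects)
    {
        _projects = projects;
    }

    // Runs after validation and before the Task row is persisted.
    public void Execute(TaskEntity task)
    {
        // Only default when no Responsible was supplied. Telling "omitted" apart from
        // "explicitly null" has to be decided at the request-parsing layer, e.g. by
        // tracking which properties the payload actually contained.
        if (task.Responsible == null && task.ProjectId.HasValue)
        {
            var project = _projects.GetById(task.ProjectId.Value);
            if (project != null)
            {
                task.Responsible = project.DefaultTaskResponsible;
            }
        }
    }
}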
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.

Related

Hangfire - Configure AutomaticRetry for specific RecurringJob at runtime

I'm using Hangfire v1.7.9 and I'm trying to configure a series of recurring background jobs within my MVC 5 application to automate the retrieval of external reference data into the application. I've tested this with one task and this works great, but I'd like administrators within the system to be able to configure the Attempts and DelayInSeconds attribute parameters associated with the method that is called in these background jobs.
The AutomaticRetryAttribute states that you have to use...
...a constant expression, typeof expression or an array creation expression of an attribute parameter type
... which from what I've read is typical of all Attributes. However, this means that I can't achieve my goal by setting a property value elsewhere and then referencing that in the class that contains the method I want to run.
Additionally, it doesn't look like there is any way to configure the automatic retry properties in the BackgroundJob.Enqueue or RecurringJob.AddOrUpdate methods. Lastly, I looked at whether you could utilise a specific retry count for each named queue, but alas the only thing you can configure about Hangfire queues is their names, via the BackgroundJobServerOptions class when the Hangfire server is initialised.
Have I exhausted every avenue here? The only other thing I can think of is to create my own implementation of the AutomaticRetryAttribute and set the values at compile time by using an int enum, though that in itself would create an issue in the sense that I would need to provide a defined list of each of the values that a user could select. Since I want the retry delay to be configurable from 5 minutes all the way up to 1440 minutes (24 hours), I really don't want a huge, lumbering enum : int with every available value. Has anyone ever encountered this issue, or is this something I should submit as a request on the Hangfire GitHub?
I would take the approach of making a custom attribute that decorates AutomaticRetryAttribute:
public class MyCustomRetryAttribute : JobFilterAttribute, IElectStateFilter, IApplyStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        GetAutomaticRetryAttribute().OnStateElection(context);
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateApplied(context, transaction);
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateUnapplied(context, transaction);
    }

    private AutomaticRetryAttribute GetAutomaticRetryAttribute()
    {
        // Somehow instantiate AutomaticRetryAttribute with dynamically fetched/set `Attempts` value
        return new AutomaticRetryAttribute { Attempts = /**/ };
    }
}
Edit: To clarify, this method allows you to reuse AutomaticRetryAttribute's logic without duplicating it. However, if you need to change more aspects on a per-job basis, you may need to duplicate the logic inside your own attribute.
Also, you can use context.GetJobParameter<T> to store arbitrary data on a per-job basis.
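To sketch one hedged way of filling in that placeholder: read the value from any mutable source an admin screen can write to. RetrySettings below is an invented holder, not a Hangfire type; only AutomaticRetryAttribute.Attempts is real Hangfire API.
// An invented, mutable settings holder that an admin page could update at runtime.
public static class RetrySettings
{
    public static volatile int Attempts = 5;
}

// ...inside MyCustomRetryAttribute...
private AutomaticRetryAttribute GetAutomaticRetryAttribute()
{
    // Read on every state election, so changes take effect without restarting the server.
    return new AutomaticRetryAttribute { Attempts = RetrySettings.Attempts };
}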

Proper permission management when using CQRS

I use Command Query Separation in my system.
To describe the problem, let's start with an example. Let's say we have code as follows:
public class TenancyController : ControllerBase
{
    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery

        //example how to run a command in the system:
        var command = new UploadTemplatePackageCommand
        {
            Comment = package.Comment,
            Data = Request.Body,
            TemplatePackageId = id
        };
        await _commandDispatcher.DispatchAsync(command);
        return Ok();
    }
}
The CreateTenancy has a very complex implementation and runs many different queries and commands.
Each command or query can be reused in other places of the system.
Each Command has a CommandHandler
Each Query has a QueryHandler
Example:
public class UploadTemplatePackageCommandHandler : PermissionedCommandHandler<UploadTemplatePackageCommand>
{
    //ctor
    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(UploadTemplatePackageCommand command)
    {
        //return list of demands
    }

    protected override async Task HandleCommandAsync(UploadTemplatePackageCommand command)
    {
        //some business logic
    }
}
Every time you try to run a command or query there is a permission check. The problem appears in CreateTenancy when you run, let's say, 10 commands.
There can be a case where you have permission for the first 9 commands but are missing a permission needed to run the last one. In such a situation you can make complex modifications to the system by running those 9 commands, and at the end you are not able to finish the whole transaction because you cannot run the last command. In such a case a complex rollback is needed.
I believe that in the above example the permission check should be done only once at the very beginning of the whole transaction but I'm not sure what is the best way to achieve this.
My first idea is to create a command called, let's say, CreateTenancyCommand and to place the whole logic from CreateTenancy(CreateTenancyRto rto) in its HandleCommandAsync.
So it would look like:
public class CreateTenancyCommandHandler : PermissionedCommandHandler<CreateTenancyCommand>
{
    //ctor
    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(CreateTenancyCommand command)
    {
        //return list of demands
    }

    protected override async Task HandleCommandAsync(CreateTenancyCommand command)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery
    }
}
Is it a good approach to invoke a command inside the command handler of another command?
I think that each command handler should be independent.
Am I right that the permission check should happen only once?
If yes- how to do the permission check in the case when you want to run a command to modify the database and then return some data to the client?
In such a case, you would need to do 2 permission checks...
There can be a theoretical case where you modify the database by running a command and then cannot run a query that only reads the database, because you are missing some of the permissions. It can be very hard for a developer to detect such a situation if the system is big and there are hundreds of different permissions; even good unit test coverage can miss it.
My second idea is to create some kind of wrapper or extra layer above the commands and queries and do the permission check there, but I'm not sure how to implement it.
What is the proper way to do the permissions check in the described transaction CreateTenancy which is implemented in the action of the controller in the above example?
In a situation where you have some sort of process which requires multiple commands / service calls to carry it out, this is an ideal candidate for a DomainService.
A DomainService is by definition one which has some Domain Knowledge, and is used to facilitate a process which interacts with multiple Aggregates / services.
In this instance I would look to have your Controller Action call a CQRS Command/CommandHandler. That CommandHandler will take the domain service as a single dependency. The CommandHandler then has the single responsibility of calling the Domain Service method.
This then means your CreateTenancy process is contained in one place: the DomainService.
I typically have my CommandHandlers simply call into service methods. Therefore a DomainService can call into multiple services to perform its function, rather than calling into multiple CommandHandlers. I treat the Command Handlers as a facade through which my Controllers can access the Domain.
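Roughly, the shape could be the following; ICommandHandler, ITenancyDomainService and the method names are invented purely for illustration:
public class CreateTenancyCommandHandler : ICommandHandler<CreateTenancyCommand>
{
    private readonly ITenancyDomainService _tenancies; // the single dependency

    public CreateTenancyCommandHandler(ITenancyDomainService tenancies)
    {
        _tenancies = tenancies;
    }

    public Task HandleAsync(CreateTenancyCommand command)
    {
        // The handler is only a facade; the CreateTenancy orchestration
        // (the calls into the various services) lives in the domain service.
        return _tenancies.CreateTenancyAsync(command);
    }
}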
When it comes to permissions, I typically first decide whether the users authorisation to carry out a process is a Domain issue. If so, I will typically create an Interface to describe the users permissions. And also, I will typically create an Interface for this specific to the Bounded Context I am working in. So in this case you may have something like:
public interface ITenancyUserPermissions
{
    bool CanCreateTenancy(string userId);
}
I would then have the ITenancyUserPermissions interface be a dependency of my CommandValidator:
public class CommandValidator : AbstractValidator<Command>
{
    private ITenancyUserPermissions _permissions;

    public CommandValidator(ITenancyUserPermissions permissions)
    {
        _permissions = permissions;
        RuleFor(r => r).Must(HavePermissionToCreateTenancy).WithMessage("You do not have permission to create a tenancy.");
    }

    public bool HavePermissionToCreateTenancy(Command command)
    {
        return _permissions.CanCreateTenancy(command.UserId);
    }
}
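Used through FluentValidation's standard Validate call, the check then runs before any command is handled; the wiring below (where tenancyUserPermissions is some implementation of the interface) is illustrative:
var validator = new CommandValidator(tenancyUserPermissions);
var result = validator.Validate(command);
if (!result.IsValid)
{
    // stop before any state is changed; result.Errors carries the message(s)
}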
You said that the permission to create a Tenancy is dependent on the permission to perform the other tasks / commands. Those other commands would have their own set of Permission Interfaces. And then ultimately within your application you would have an implementation of these interfaces such as:
public class UserPermissions : ITenancyUserPermissions, IBlah1Permissions, IBlah2Permissions
{
    public bool CanCreateTenancy(string userId)
    {
        return CanBlah1(userId) && CanBlah2(userId);
    }

    public bool CanBlah1(string userId)
    {
        return _authService.Can("Blah1", userId);
    }

    public bool CanBlah2(string userId)
    {
        return _authService.Can("Blah2", userId);
    }
}
In my case I use an ABAC system, with the policy stored and processed as an XACML file.
Using the above method may mean you have slightly more code, and several permissions interfaces, but it does mean that any permissions you define are specific to the Bounded Context you are working within. I feel this is better than having a domain-model-wide IUserPermissions interface, which may define methods that are of no relevance, and/or confusing, in your Tenancy bounded context.
This means you can check user permissions in your QueryValidator or CommandValidator instances. And of course you can use the implementation of your IPermission interfaces at the UI level to control which buttons / functions etc are shown to the user.
There is no "The Proper Way", but I'd suggest that you could approach the solution from the following angle.
Usage of the word Controller in your names and returning Ok() tells me that you are handling an HTTP request. But what is happening inside is part of a business use case that has nothing to do with HTTP. So you'd better go a bit Onion-ish and introduce a (business) application layer.
This way, your HTTP controller would be responsible for: 1) parsing the create tenancy HTTP request into a create tenancy business request - i.e. a request object model in terms of the domain language, void of any infrastructure terms; 2) formatting the business response into an HTTP response, including translating business errors into HTTP errors.
So, what you get entering the application layer is a business create tenancy request. But it's not a command yet. I can't remember the source, but someone once said that a command should be internal to a domain. It cannot come from outside. You may consider a command to be a comprehensive object model necessary to make a decision about whether to change the application's state. So my suggestion is that in your business application layer you build a command not only from the business request, but also from the results of all these queries, including queries to the necessary permission read models.
Next, you may have a separate decision-making business core of a system that takes a command (a value object) with all the comprehensive data, applies a pure decision-making function and returns a decision, also a value object (event or rejection), containing, again, all necessary data calculated from the command.
Then, when your business application layer gets back a decision, it can execute it, writing to event stores or repositories, logging, firing events and ultimately producing a business response to the controller.
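A compact sketch of that decision-making core, with all the types invented purely for illustration:
public static class CreateTenancyDecider
{
    // A pure function: everything it needs (request data, query results,
    // permission read models) is already inside the command value object.
    public static IDecision Decide(CreateTenancyCommand command)
    {
        if (!command.Permissions.CanCreateTenancy)
            return new Rejection("Not permitted to create a tenancy.");

        if (command.ExistingTenancyNames.Contains(command.Name))
            return new Rejection("A tenancy with this name already exists.");

        return new TenancyCreated(command.TenancyId, command.Name);
    }
}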
In most cases you'll be OK with this single-step decision-making process. If it needs more than a single step, maybe that's a hint to reconsider the business flow, because it is getting too complex to process in a single HTTP request.
This way you'll get all the permissions before handling a command. So, your business core will be able to make a decision whether those permissions are sufficient to proceed. It also may make a decision-making logic much more testable and, therefore, reliable. Because it is the main part that should be tested in any calculation flow branch.
Keep in mind that this approach leans toward eventual consistency, which you have anyway in a distributed system. Though, if you are interacting with a single database, you may run your application-layer code in a single transaction. I suppose, though, that you deal with eventual consistency anyway.
Hope this helps.

DDD update via REST

I am new to DDD, and I am trying to figure out a way to update aggregate by using a PUT verb.
If all properties in the aggregate have private setters, then it's obvious I need to have a set of functionality for every business requirement. For example:
supportTicket.Resolve();
It's clear to me that I can achieve this with an endpoint such as /api/tickets/5/resolve, but what if I want to provide a way to update the whole ticket atomically?
As an example, a user can make a PUT request to /api/tickets/5 with the following body:
{"status" : "RESOLVED", "Title":"Some crazy title"}
Do I need to do something like this in the ApplicationService?
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve();
if (dto.Title != null)
    supportTicket.setNewTitle(dto.Title);
If that's the case, and changing the ticket title has some business logic that prevents changing it when the ticket is resolved, should I consider some kind of prioritising when updating the aggregate, or am I looking at this entirely wrong?
Domain Driven Design for RESTful Systems -- Jim Webber
what if i want to provide a way to update whole ticket atomically?
If you want to update the whole ticket atomically, ditch aggregates; aggregates are the wrong tool in your box if what you really want is a key value store with CRUD semantics.
Aggregates only make sense when there are business rules for the domain to enforce. Don't build a tractor when all you need is a shovel.
As an example, user can make a PUT request to /api/tickets/5
That's going to make a mess. In a CRUD implementation, replacing the current state of a resource by sending it a representation of a new state is appropriate. But that doesn't really fit for aggregates at all, because the state of the aggregate is not under the control of you, the client/publisher.
The more appropriate idiom is to publish a message onto a bus, which when handled by the domain will have the side effect of achieving the changes you want.
PUT /api/tickets/5/messages/{messageId}
NOW your application service looks at the message, and sends commands to the aggregate
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve();
if (dto.Title != null)
    supportTicket.setNewTitle(dto.Title);
This is OK, but in practice it's much more common to make the message explicit about what is to be done.
{ "messageType" : "ResolveWithNewTitle"
, "status" : "RESOLVED"
, "Title":"Some crazy title"
}
or even...
[
{ "messageType" : "ChangeTitle"
, "Title" : "Some crazy title"
}
, { "messageType" : "ResolveTicket"
}
]
Basically, you want to give the app enough context that it can do real message validation.
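In code, the application service then becomes a small dispatcher over those message types. A sketch, with TicketMessage as an invented DTO carrying MessageType/Title, reusing the aggregate methods from the snippets above:
public void Handle(TicketMessage message, SupportTicket supportTicket)
{
    switch (message.MessageType)
    {
        case "ChangeTitle":
            supportTicket.setNewTitle(message.Title);
            break;
        case "ResolveTicket":
            supportTicket.Resolve();
            break;
        default:
            throw new InvalidOperationException("Unknown message type: " + message.MessageType);
    }
}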
let's say I had aggregates which encapsulated the needed business logic, but besides that there is a new demand for atomic update functionality and I am trying to understand the best way to deal with this.
So the right way to deal with this is first to deal with it on the domain level -- sit down with your domain experts, make sure that everybody understands the requirement and how to express it in the ubiquitous language, etc.
Implement any new methods that you need in the aggregate root.
Once you have the use case correctly supported in the domain, then you can start worrying about your resources following the previous pattern - the resource just takes the incoming request, and invokes the appropriate commands.
Is changing the Title a requirement of Resolving a ticket? If not, they should not be the same action in DDD. You wouldn't want to not resolve the ticket if the new name was invalid, and you wouldn't want to not change the name if the ticket was not resolvable.
Make 2 calls to perform the 2 separate actions. This also allows for flexibility: the Title can be changed immediately, but perhaps "resolving" the ticket will kick off some complex and time-consuming (asynchronous) workflow before the ticket is actually resolved. Perhaps it needs to have a manager sign off? You don't want the call to change "title" tied up in that mix.
If needs be, create something to orchestrate multiple commands as per #VoiceOfUnreason's comment.
Wherever possible, keep things separate, and code to use cases as opposed to minimizing interactions with entities.
You're probably right. But it's probably wiser to encapsulate such logic inside the ticket itself, by making a "change()" method receiving a changeCommandModel (or something like this), so you can define the business rules inside your domain object.
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve(dto.Title);
I would change the underlying method to take the title as a parameter; this clarifies the resolve action. The second if and its validation belong in the domain method. It's really a matter of preference; more important is the message, and I agree with #VoiceOfUnreason's second option.

Routing an object in C# without using switch statements

I am writing a piece of software in c# .net 4.0 and am running into a wall in making sure that the code-base is extensible, re-usable and flexible in a particular area.
We have data coming into it that needs to be broken down into discrete organizational units. These units will need to be changed, sorted, deleted, and added to as the company grows.
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid, allowing us to modify the OUs easily.
We are hoping to find an object-oriented method that would allow us to route the object to different workflows based on properties of that object without having to add switch statements every time.
So, for example, let's say I have an object called "Order" come into the system. This object has 'orderItems' inside of it. Each of those different kinds of 'orderItems' would need to fire a different function in the code to be handled appropriately. Each 'orderItem' has a different workflow. The conditional looks basically like this -
if (order.orderitem == "photo")
    { /* do this */ }
else if (order.orderitem == "canvas")
    { /* do this */ }
edit: Trying to clarify.
I'm not sure your question is very well defined, you need a lot more specifics here - a sample piece of data, sample piece of code, what have you tried...
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid
This usually means you're trying to encode data in your code - just add a data field (or a few).
Chances are your ifs are linked to each other, it's hard to come up with 100 independent ifs - that would imply you have 100 independent branches for 100 independent data conditions. I haven't encountered such a thing in my career that really would require hard-coding 100 ifs.
Worst-case scenario, you can make an additional data field contain a config file or even a script of your choice. In either case, your data is incomplete if you need 100 ifs.
With the update you've put in your question here's one simple approach, kind of low tech. You can do better with dependency injection and some configuration but that can get excessive too, so be careful:
public class OrderHandler
{
    public static Dictionary<string, OrderHandler> Handlers = new Dictionary<string, OrderHandler>()
    {
        {"photo", new PhotoHandler()},
        {"canvas", new CanvasHandler()},
    };

    public virtual void Handle(Order order)
    {
        var handler = Handlers[order.OrderType];
        handler.Handle(order);
    }
}
public class PhotoHandler : OrderHandler {...}
public class CanvasHandler : OrderHandler {...}
What you could do is called "Message-Based Routing" or "Message Content-Based Routing", depending on how you implement it.
In short, instead of using conditional statements in your business logic, you should implement organizational units to look for the messages they are interested in.
For example:
Say your organization has the following departments: "Plant Products", "Paper Products", "Utilities". Say there is only one place where the orders come in - the Ordering module.
Here is a sample incoming message:
Party:"ABC Cop"
Department: "Plant Product"
Qty: 50
Product: "Some plan"
Publish a message with this information. Configure the module that processes orders for "Plant Products" so that it listens for messages that have "Department = Plant Products". This way, you push the onus onto the department modules instead of onto the main ordering module.
You can do this using NServiceBus, BizTalk, or any other ESB you might already have.
This is how you do it in BizTalk, and this is how you can do it in NServiceBus.
Have you considered sub-typing OrderItem?
public class PhotoOrderItem : OrderItem {}
public class CanvasOrderItem : OrderItem {}
Another option would be to use the Strategy pattern. Add an extra property to your OrderItem class definition for the OrderProcessStrategy and use a PhotoOrderStrategy/CanvasOrderStrategy to contain all of the different logic.
public class OrderItem
{
    public IOrderItemStrategy Strategy;
}

public interface IOrderItemStrategy
{
    void Checkout();
    Control CheckoutStub { get; }
    bool PreCheckoutValidate();
}

public class PhotoOrderStrategy : IOrderItemStrategy {}
public class CanvasOrderStrategy : IOrderItemStrategy {}
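Fleshing out one of those strategies might look like this (the method bodies and PhotoCheckoutControl are invented for illustration):
public class PhotoOrderStrategy : IOrderItemStrategy
{
    public void Checkout()
    {
        // photo-specific checkout workflow goes here
    }

    public Control CheckoutStub
    {
        get { return new PhotoCheckoutControl(); }
    }

    public bool PreCheckoutValidate()
    {
        // e.g. check that the uploaded image meets print-resolution requirements
        return true;
    }
}
The routing then collapses to orderItem.Strategy.Checkout(), with the concrete strategy chosen once, when the OrderItem is built.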
Taking the specific example:
You could have some Evaluator that takes an order and iterates over each line item. Instead of processing if-logic, raise events that carry the photo or canvas details in their event arguments.
Have a collection of 'Initiator' objects that define: 1) a handler that can process Evaluator messages, 2) a simple bool that indicates whether they know what to do with something in the message, and 3) an Action or Process method which can perform or initiate the workflow. Design an interface to abstract these.
Issue the messages. Visit each Initiator and ask it if it can process the line item; if it can, tell it to do so. The processing is kicked off by the Initiators and they can call other workflows, etc.
Name the pieces outlined above whatever best suits your domain. This should offer some flexibility. Problems may arise depending on concurrent processing requirements and workflow dependencies between the Initiators.
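Sketched as code, with placeholder names:
public interface IInitiator
{
    // "do I know what to do with this line item?"
    bool CanProcess(OrderItem item);

    // kick off (or perform) the workflow this initiator owns
    void Process(OrderItem item);
}

public class Evaluator
{
    private readonly IEnumerable<IInitiator> _initiators;

    public Evaluator(IEnumerable<IInitiator> initiators)
    {
        _initiators = initiators;
    }

    public void Evaluate(Order order)
    {
        foreach (var item in order.OrderItems)
            foreach (var initiator in _initiators)
                if (initiator.CanProcess(item))
                    initiator.Process(item);
    }
}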
In general, without knowing a lot more detail - size of the project, workflows, use cases, etc. - it is hard to comment.

Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object Relational Mapping) between the domain objects and the database. This is all well and good, and most of the time it actually works well!
The problem comes about with Castle ActiveRecord's support for asynchronous execution - more specifically, the SessionScope that manages the session that objects belong to. Long story short, bad stuff happens!
We are therefore looking for a way to easily convert (think automagically) from the Domain objects (who know that a DB exists and care) to the DTO object (who know nothing about the DB and care not for sessions, mapping attributes or all thing ORM).
Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects: the domain object Person would be mapped to, say, PersonDTO. I do not want to do this manually since it is a waste.
Obviously reflection comes to mind, but I am hoping that, with some of the better IT knowledge floating around this site, something "cooler" will be suggested.
Oh, I am working in C#, and the ORM objects are, as said before, mapped with Castle ActiveRecord.
Example code:
At #ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, capture form controller, domain objects, ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file.
Basic details of example:
The Capture Form
The capture form has a reference to the controller:
private CompanyCaptureController MyController { get; set; }
When the form initialises, it calls MyController.Load():
private void InitForm()
{
    MyController = new CompanyCaptureController(this);
    MyController.Load();
}
This will call back into a method called LoadCompleted():
public void LoadCompleted(Company loadCompany)
{
    _context.Post(delegate
    {
        CurrentItem = loadCompany;
        bindingSource.DataSource = CurrentItem;
        bindingSource.ResetCurrentItem();
        //TODO: This line will throw the exception since the session scope used to fetch loadCompany is now gone.
        grdEmployees.DataSource = loadCompany.Employees;
    }, null);
}
This is where the "bad stuff" occurs, since we are using the child list of Company, which is set to lazy load.
The Controller
The controller has a Load method that is called from the form; it then calls the async helper to asynchronously call the LoadCompany method and return to the capture form's LoadCompleted method.
public void Load()
{
    new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted);
}
The LoadCompany() method simply makes use of the repository to find a known company.
public Company LoadCompany()
{
    return ActiveRecordRepository<Company>.Find(Setup.company.Identifier);
}
The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
I solved a problem very similar to this where I copied the data out of a lot of older web service contracts into WCF data contracts. I created a number of methods that had signatures like this:
public static T ChangeType<S, T>(this S source) where T : class, new()
The first time this method (or any of the other overloads) executes for two types, it looks at the properties of each type and decides which ones exist in both, based on name and type. It takes this 'member intersection' and uses the DynamicMethod class to emit the IL to copy the source type to the target type, then it caches the resulting delegate in a thread-safe static dictionary.
Once the delegate is created, it's obscenely fast and I have provided other overloads to pass in a delegate to copy over properties that don't match the intersection criteria:
public static T ChangeType<S, T>(this S source, Action<S, T> additionalOperations) where T : class, new()
... so you could do this for your Person to PersonDTO example:
Person p = new Person( /* set whatever */ );
PersonDTO dto = p.ChangeType<Person, PersonDTO>();
And any properties on both Person and PersonDTO (again, that have the same name and type) would be copied by a runtime emitted method and any subsequent calls would not have to be emitted, but would reuse the same emitted code for those types in that order (i.e. copying PersonDTO to Person would also incur a hit to emit the code).
It's too much code to post, but if you are interested I will make the effort to upload a sample to SkyDrive and post the link here.
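In the meantime, a reflection-only sketch of the same 'member intersection' idea (no caching and no DynamicMethod, so much slower than the emitted version, and purely illustrative) would be something like:
public static class SimpleMapper
{
    public static T ChangeType<S, T>(this S source) where T : class, new()
    {
        var target = new T();
        foreach (var targetProp in typeof(T).GetProperties())
        {
            if (!targetProp.CanWrite) continue;

            // copy only when a property with the same name and type exists on both sides
            var sourceProp = typeof(S).GetProperty(targetProp.Name);
            if (sourceProp != null && sourceProp.CanRead &&
                sourceProp.PropertyType == targetProp.PropertyType)
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
        return target;
    }
}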
Richard
Use ValueInjecter; with it you can map anything to anything, e.g.
object <-> object
object <-> Form/WebForm
DataReader -> object
It also has cool features like flattening and unflattening.
The download contains lots of samples.
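Typical usage is along these lines (written from memory of the ValueInjecter API, so double-check against the samples):
// using Omu.ValueInjecter;
var dto = new PersonDTO();
dto.InjectFrom(person);   // copies properties that match by name and type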
You should use AutoMapper, which I've blogged about here:
http://januszstabik.blogspot.com/2010/04/automatically-map-your-heavyweight-orm.html#links
As long as the properties are named the same on both your objects, AutoMapper will handle it.
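With the static API AutoMapper had at the time of that post, that boils down to roughly the following (newer versions use an instance-based MapperConfiguration instead):
// using AutoMapper;
Mapper.CreateMap<Person, PersonDTO>();             // configure once, e.g. at startup
PersonDTO dto = Mapper.Map<Person, PersonDTO>(p);  // map wherever the conversion is needed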
My apologies for not really putting the details in here, but a basic OO approach would be to make the DTO a member of the ActiveRecord class and have the ActiveRecord delegate the accessors and mutators to the DTO. You could use code generation or refactoring tools to build the DTO classes pretty quickly from the AcitveRecord classes.
Actually, I'm totally confused now.
Because you are saying: "We are therefore looking for a way to easily convert (think automagically) from the Domain objects (who know that a DB exists and care) to the DTO object (who know nothing about the DB and care not for sessions, mapping attributes or all thing ORM)."
Domain objects know and care about the DB? Isn't the whole point of domain objects to contain business logic ONLY and be totally unaware of the DB and ORM? ... You HAVE to have these objects? You just need to FIX them if they contain all that stuff... that's why I am a bit confused about how DTOs come into the picture.
Could you provide more details on what problems you're facing with lazy loading?
