How to handle and organize DTOs for different contexts? - c#

When using simple DTOs in various scenarios I have frequently run into the same kind of problem and I always wondered whether there's a better way to deal with it.
The thing is, I have a business object, e.g. Asset, which has a bunch of properties, child objects, and calculated fields, some of them expensive to calculate in terms of time, some of them huge in terms of data amount. I need to use a different flavor of this object on various screens in the UI, e.g.
in a tree where there is a hierarchy displayed and I don't need much more than the display name
in a grid where I'm showing just a couple of properties
in a detail pane where there's a big subset of available information, but still some of it (like mapped objects) is shown only on demand
To achieve optimal performance in this scenario, I have always created different DTOs for each context, each containing only the subset of information actually used in that context. While this is a resource-optimal solution, it leads to a couple of problems:
I have a class explosion with a huge number of DTO classes
I have quite a hard time coming up with different names for the same thing, like AssetDtoForGridInTheOverviewScreenInTheUpperPaneAboveTheSplitter, not to mention maintaining them later
I frequently repeat myself in the transformation methods, because some properties are used by most of the DTOs but not by all of them (therefore I can't put them into any superclass and reuse the transformation logic)
The technology I'm using is ASP.NET SOAP web services and C# on .NET 3.5, but I think this could be a language-agnostic problem. Any ideas are welcome.

This is a known problem with DTOs. It's described in this otherwise mediocre article on MSDN. To paraphrase: DTO is the most versatile n-tier data access pattern, but it also requires the most work.
You can address some of your issues with mapping by using convention-based mapping, such as AutoMapper.
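As an illustration, here is a minimal sketch of convention-based mapping with AutoMapper (the Asset and DTO shapes are hypothetical, and this is one common configuration style, not the only one):

using AutoMapper;

// Hypothetical domain object and two context-specific DTOs.
public class Asset
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public string Location { get; set; }
    public decimal PurchasePrice { get; set; }
}

public class AssetTreeDto { public int Id { get; set; } public string DisplayName { get; set; } }
public class AssetGridDto { public int Id { get; set; } public string DisplayName { get; set; } public string Location { get; set; } }

public static class MappingConfig
{
    public static IMapper Create()
    {
        // Properties with matching names are mapped by convention,
        // so there is no hand-written transformation method per DTO.
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Asset, AssetTreeDto>();
            cfg.CreateMap<Asset, AssetGridDto>();
        });
        return config.CreateMapper();
    }
}

// Usage: var gridDto = MappingConfig.Create().Map<AssetGridDto>(asset);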
When it comes to class explosion, could it be that you are using data structures that are too flat?
This can be difficult to tell, because DTOs naturally include a great deal of semantic repetition that turns out not to be logical repetition at all. For example, even if you have semantically similar types, if one is a ViewModel and the other is a Domain Object, they may share semantic structure but have vastly different responsibilities.
If, on the other hand, you have a lot of repetition in the same application layer (e.g. the UI), you may be violating the DRY principle. In that case, it often helps to encapsulate related data from what starts out as a flat data structure into a separate class. In most UI frameworks I'm aware of, you can still databind a flat display to a hierarchically structured class.
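A small sketch of that idea, with hypothetical names: related fields are grouped into a nested class, and the UI binds through a property path rather than to a flat list of properties:

// Fields that travel together get their own class...
public class AddressInfo
{
    public string Street { get; set; }
    public string City { get; set; }
}

// ...and the formerly flat structure becomes hierarchical.
public class CustomerViewModel
{
    public string Name { get; set; }
    public AddressInfo Address { get; set; }
}

// Most binding engines can still reach the leaves via a path, e.g. in WinForms:
// textBox.DataBindings.Add("Text", customer, "Address.City");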

The problem of class explosion is inherent to the DTO approach; there probably isn't much you can do about that. Be careful not to mix your view model with your DTO model: your DTOs should only be used to get the data from your data tier to your front end, not for presentation.
With the advent of .NET 3.5 you can choose to implement some basic, more coarse-grained DTOs and replace your view model with an anonymous type which you can create dynamically from your DTOs. I found this to be a very flexible solution.
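A quick sketch of what that could look like (the service and DTO names are hypothetical): one coarse-grained DTO is projected into an anonymous type holding only what the view needs:

// One coarse-grained DTO serves several screens...
var assetDto = assetService.GetAsset(42);

// ...and each view projects just the members it displays (C# 3.0 anonymous type).
var gridRow = new
{
    assetDto.Id,
    assetDto.DisplayName,
    Price = assetDto.PurchasePrice.ToString("C")
};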
Regarding your naming conventions, it is probably useful to group your DTOs into scenarios and put them in a corresponding namespace, for example Solution.AssetManagement.Asset and Solution.AssetReporting.Asset.
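For instance, the same class name can live in scenario namespaces and be disambiguated with using aliases at the call site (a sketch; the property sets are made up):

using MgmtAsset = Solution.AssetManagement.Asset;
using ReportAsset = Solution.AssetReporting.Asset;

namespace Solution.AssetManagement
{
    public class Asset { public string DisplayName { get; set; } }
}

namespace Solution.AssetReporting
{
    public class Asset { public string DisplayName { get; set; } public decimal BookValue { get; set; } }
}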

Related

Property name of output model API

Good morning, guys, I need some advice.
I'm creating the output models for my APIs, but I'm very confused about what to name the classes.
For example, I have an entity called User.
In the output model I must return a list of User, but it must not coincide with the entity; it must be another model created by me for the output.
Well, I don’t know what to name this last class I told you about.
I cannot call it User because it conflicts with the real entity.
Tips?
There should be at least 3 layers of objects in your code: ViewModel, Dto, and Entity.
Each layer should only be able to see the layer directly below it.
So your service layer can read Entities from your data layer, but if it exposes any objects, they should be Dtos.
Then your presentation layer (UI/API etc.) will read from the service layer (Dtos) and expose its objects as ViewModels.
In many cases, this means that all 3 objects (Entity, Dto & ViewModel) have the same repeated properties, but this is to be expected, especially in smaller or newer projects.
This should then solve your naming problems.
Data layer: XXXEntity
Service layer: XXXDto
Presentation layer: XXXViewModel
This explanation is very simplified, and you could solve this problem in many different ways (you could use namespaces instead of class suffixes for example).
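A minimal sketch of the suffix convention (the property sets are hypothetical):

// Data layer: full persistence shape, including fields that must not leak out.
public class UserEntity { public int Id { get; set; } public string Name { get; set; } public string PasswordHash { get; set; } }

// Service layer: what the service exposes.
public class UserDto { public int Id { get; set; } public string Name { get; set; } }

// Presentation layer: what the view actually renders.
public class UserViewModel { public string DisplayName { get; set; } }

public class UserService
{
    // The entity never crosses this boundary.
    public UserDto GetUser(UserEntity entity) =>
        new UserDto { Id = entity.Id, Name = entity.Name };
}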
The guidance I try to work by is that naming should reflect intent - whereas a User just represents the concept of a 'user' in your domain, an API model is intended to be an API payload containing user information. In your position, I would consider something like UserApiModel or UserPayload.
As an added note, for my money the thing that matters most here is consistency. No matter what you pick, what makes sense to you now may not be the most intuitive thing to you (or anyone else) maintaining the code later. As long as you apply your naming convention consistently across all your API models, don't stress too much about finding the 'right' one - just pick the first one that seems good enough and keep rolling.

Is it OK to have Entity Framework inside the Domain layer?

I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled, or changing the name of an object (Category, for example) needs to recursively update all its children's properties (avoiding / ignoring these rules would result in creating invalid objects)
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
In order to do this, I need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to overcomplicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion too, but honestly, if you don't program to abstractions it will become a pain when the project gets larger (a real pain).
IRepository interfaces also help you spread the job between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example
IRepository<File>.Upload()
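A sketch of that extension idea (the interface and File type here are hypothetical placeholders):

using System.Linq;

public interface IRepository<T> where T : class
{
    IQueryable<T> Query();
    void Add(T entity);
}

public class File { public string Name { get; set; } public byte[] Content { get; set; } }

// Encapsulates a recurring job without widening the interface itself.
public static class FileRepositoryExtensions
{
    public static void Upload(this IRepository<File> repository, File file)
    {
        // validate, log, etc., then persist
        repository.Add(file);
    }
}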
You must be able to test each layer independently; tying them together leaves you able to do only integration tests, with a lot of bugs in the lower layers. :))
First, I think this question is really opinion based.
According to the Big Book, domain models must be separated from data access. Your domain has nothing to do with the manner of storing the data; it could be a simple text file or a clustered MSSQL server.
This choice must be decided based on the actual project. What is the size of the application?
The other huge question is: how many concurrent users use the DB, and how complex will your business logic be?
So if it's a complex project, or one presumably modified frequently, or it has educational purposes, then you should keep the domain and data access separated, and you should define the repository interfaces in the domain model. Use some DI component (personally I like Ninject), and you should not reference the data access component in the services.
And of course you should also create the test projects, using some mocking tool (e.g. Moq) to test the layers separately.
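A minimal Ninject sketch of that wiring (the repository and entity names are hypothetical): the interface lives in the domain, the EF implementation lives in the data access project, and the binding happens at the composition root:

using Ninject;

// In Project.Domain:
public class Asset { public int Id { get; set; } }
public interface IAssetRepository { Asset GetById(int id); }

// In Project.EntityFramework:
public class EfAssetRepository : IAssetRepository
{
    public Asset GetById(int id) { /* query via the DbContext */ return null; }
}

// In the composition root (e.g. Project.Web.Mvc):
public static class CompositionRoot
{
    public static IKernel Configure()
    {
        var kernel = new StandardKernel();
        // Services depend on IAssetRepository only; EF stays behind the interface.
        kernel.Bind<IAssetRepository>().To<EfAssetRepository>();
        return kernel;
    }
}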
Yes, this is wrong. If you are following Domain-Driven Design, you should not compromise your architecture for the sake of doing less work. Your data access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern, as it would allow you more flexibility in the long run.
There is of course no right answer as to what the right design is... I would however argue that EF is your data-layer abstraction; there is no way you're going to make anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
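For example, the IQueryable<> extension idea might look like this (the entity and flag are made up for illustration):

using System.Linq;

public class Category { public bool IsDisabled { get; set; } }

public static class CategoryQueryExtensions
{
    // A common filter written once, reusable on any EF query.
    public static IQueryable<Category> Enabled(this IQueryable<Category> categories)
        => categories.Where(c => !c.IsDisabled);
}

// Usage against a DbContext:
// var enabled = context.Categories.Enabled().ToList();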
Blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer

Pulling out business logic from the data access layer

We are writing some support applications (rather small) to our ERP system.
So far I feel that I have been using the Data Access Layer in two roles: as the business layer AND as the data access one.
I am having trouble deciding what I have to move to a separate layer, and whether I need to at all. I have read somewhere that knowing when to separate layers is wisdom, while knowing the patterns is just knowledge. I have neither in adequate amounts.
So I need some help to determine what is what.
My current DAL deals with fetching the data and applying basic logic to it. For example, there are methods like
GetProductAvailabilitybyItem
GetProductAvailabilitybyLot
etc.
If I needed to separate them, what would I have to do?
Another thing on my mind is that in order to normalize my DAL and make it return different entities every time (through one general Get method), I would have to use DataTable as the return type. Currently I am using things like List<PalletRecord> as return types.
I feel that my apps are so small that it's hard (and maybe useless) to separate these 2 layers.
My basic need is to build something that can be consumed by multiple front-ends (web pages, WinForms, WPF, and so on).
Additional Example:
Let's talk barcodes. I need to check whether a fetched lot record is valid or not. Do I fetch the record in the DAL and provide a method returning bool in the business layer?
Then I can call the bool method from whatever presentation layer in order to check whether a textbox contains a valid lot?
Is this the logic, extremely simplified?
Based on your description, you should definitely separate the two layers right now, while the application is still small. You might feel a BL is useless when you're just accessing and displaying data, but in time you'll find the need to modify, transform, or manipulate the data: for example, coordinating object creation from different tables, or updating different tables in a single user action.
The example you provided helps support this idea, although it is quite simplified.
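To make it concrete, a sketch of the barcode example under that separation (all names are hypothetical): the DAL only fetches, and the business layer owns the validity rule that every front end calls:

using System;

public class LotRecord { public string LotNumber { get; set; } public DateTime? ExpiryDate { get; set; } }

// Data access: fetch only, no rules.
public class LotDal
{
    public LotRecord GetLotByBarcode(string barcode)
    {
        // ... query the ERP database ...
        return null; // placeholder
    }
}

// Business layer: owns the rule, reusable from web pages, WinForms, WPF, etc.
public class LotService
{
    private readonly LotDal _dal;
    public LotService(LotDal dal) { _dal = dal; }

    public bool IsValidLot(string barcode)
    {
        var lot = _dal.GetLotByBarcode(barcode);
        return lot != null && lot.ExpiryDate > DateTime.Today;
    }
}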
Pablo's answer offers some good design ideas too: you should definitely use an ORM to simplify your DAL and keep it very thin. I've found NHibernate and Fluent NHibernate do a very good job of this. You can use the BL to coordinate access using Data Access Objects.
Given that you are dealing with very small applications, why not just have an ORM provide all data-access for you and just worry about the business layer?
That way you don't have to worry about dealing with DataTables, mapping data to objects, and all that. Development is much faster, and you would reduce the size of the codebase.
For example, NHibernate or Microsoft's Entity Framework
Now, if you will be providing data to external consumers (you are implementing a service), you may want to create a separate set of DTOs that go through the wire, instead of trying to send your actual model entities.
I am not a big fan of n-tier architecture, and I have some good reasons for it.
The main objectives of such an architecture are:
ability to work with different underlying databases
separation of context, i.e. application design vs. business logic
uniformity and conformance to best patterns and practices
However, in doing so you also make some sacrifices, such as giving up provider-specific optimizations.
My advice is that you can go with a two-layer architecture, i.e. a combined data access and business logic layer plus a GUI or presentation layer. It will still allow you to have common code for different platforms, and at the same time will save you from spaghetti code.

What are the 'big' advantages of having POCO with an ORM?

One advantage that comes to my mind is that if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, provided both support POCO.
Having an ORM without POCO support, e.g. where mappings are done with attributes as in the DataObjects.Net ORM, is not an issue for me, since even with POCO-supporting ORMs and their generated proxy entities you have to be aware that entities are actually DAO objects bound to some context/session, e.g. serializing is a problem, etc.
POCO is all about loose coupling and testability.
So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP principle.
The third advantage I can see is that with POCO your Domain Model is more adaptable and flexible. You can add new features more easily than if it were coupled to the persistence.
I use POCO when I'm doing DDD, for example, but for some kinds of application you don't need DDD (if you're building small data-based applications), so the concerns are not the same.
Hope this helps
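To illustrate the isolation point (a hypothetical entity): a POCO domain class has no ORM base class or session dependency, so a rule can be exercised in a plain unit test:

using System.Collections.Generic;
using System.Linq;

// No ORM base class, no persistence references.
public class Invoice
{
    public List<InvoiceLine> Lines { get; } = new List<InvoiceLine>();
    public decimal Total => Lines.Sum(l => l.Amount);
}

public class InvoiceLine { public decimal Amount { get; set; } }

// A test needs nothing but the objects themselves:
// var invoice = new Invoice();
// invoice.Lines.Add(new InvoiceLine { Amount = 10m });
// Assert.AreEqual(10m, invoice.Total);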
None. Period. All the advantages people like throwing around are advantages that are not important in the big picture. I much prefer a strong base class for entity objects that actually holds a lot of integrated code (like raising property-change events when properties change) to writing all that stuff myself. Note that I DID write a (at that time commercially available) ORM for .NET before "LINQ" or "ObjectSpaces" even existed. I've used O/R mappers for something like 15 years now, and never found a case where POCO was really worth the possible trouble.
That said, attributes MAY be bad for other reasons. I rather prefer the Fluent NHibernate approach these days - having started my own (now retired) mapper with attributes, then moved to XML-based files.
The "POCO gets me nothing" theme mostly comes from the point that Entities ARE SIMPLY NOT NORMAL OBJECTS. They have a lot of additional functionality as well as limitations (like query speed etc.) that the user should please be aware of anyway. ORM's, despite LINQ, are not replacable anyway - noit if you start using their really interesting higher features. So, at the end you get POCO and still are suck with a base class and different semantics left and right.
I find that most proponents of POCO (as in: "must have", not "would be nice") normally have NOT thought their arguments through to the end. You get all kinds of pretty crappy arguments, pretty much on the level of "stored procedures are faster than dynamic SQL" - stuff that simply does not hold true. Things like:
"I want to have them in cases where they do not need saving ot the database" (use a separate object pool, never commit),
"I may want to have my own functionality in a base class (the ORM should allos abstract entity classed without functionality, so put your OWN base class below the one of the ORM)
"I may want to replace the ORM with another one" (so never use any higher functionality, hope the ORM API is compatible and then you STILL may have to rewrite large parts).
In general, POCO people also overlook the huge amount of work it actually takes to get this RIGHT - with things like transactional object updates etc. there is a TON of code in the base class. Some of the .NET interfaces are horrific to implement at a POCO level, though a lot easier if you can tie into the ORM.
Take the post of Thomas Jaskula here:
POCO is all about loose coupling and testability.
That assumes you can test databinding without having it? Testability is mock-framework stuff, and there are REALLY powerful ones that can even "redirect" method calls.
So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
Actually not true. Persistence should be part of any domain model test, as the domain model is there to be persisted. You can always test non-persistent scenarios by just not committing the changes, but a lot of the tests will involve persistence and its failure (i.e. invoices with invalid / missing data are not valid to be written to disc, for example).
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP principle.
Actually no. A proper Domain Model will never have persistence methods in the entities; that is a crap ORM to start with (user.Save()). OTOH the base class will do things like validation (IDataErrorInfo) and handle property-update events on persistent fields, and in general save you a ton of time.
As I said before, some of the functionality you SHOULD have is really hard to implement with plain variables as the data store - like the ability to put an entity into an update mode, make some changes, then roll them back. Not needed? Tell that to Microsoft, who use it, when available, in their data grids (you can change some properties, then hit Escape to roll back the changes).
The third advantage I can see is that with POCO your Domain Model is more adaptable and flexible. You can add new features more easily than if it were coupled to the persistence.
Non-argument. You cannot play around adding fields to a persisted class without handling the persistence, and you can add non-persistent features (methods) to a non-POCO class just the same as to a POCO class.
In general, my non-POCO base class did the following:
Handled property updates and IDataErrorInfo - without the user writing a line of code for fields and items the ORM could handle.
Handled object status information (New, Updated etc.). This is IMHO intrinsic information that also is pretty often pushed down to the user interface. Note that this is not a "save" method, but simply an EntityStatus property.
And it contained a number of overridable methods that the entity could use to extend the behavior WITHOUT implementing a (public) interface - so the methods were really private to the entity. It also had some more internal properties, such as access to the "object manager" responsible for the entity, which was also the place to ask for other entities (submit queries), which was sometimes needed.
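A rough sketch of the kind of base class described above (much simplified, and purely illustrative; the real thing handled far more):

using System.ComponentModel;

public enum EntityStatus { New, Unchanged, Updated, Deleted }

public abstract class EntityBase : INotifyPropertyChanged
{
    // Intrinsic status information, often surfaced in the UI.
    public EntityStatus Status { get; protected set; } = EntityStatus.New;

    public event PropertyChangedEventHandler PropertyChanged;

    // One place handles change tracking and notification for every entity.
    protected void SetField<T>(ref T field, T value, string propertyName)
    {
        if (Equals(field, value)) return;
        field = value;
        if (Status == EntityStatus.Unchanged) Status = EntityStatus.Updated;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }

    // Overridable hook the entity can use to extend behavior
    // without exposing a public interface.
    protected virtual void OnLoaded() { }
}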
POCO support in an ORM is all about separation of concerns, following the Single Responsibility Principle. With POCO support, an ORM can talk directly to a domain model without the need to "muddy" the domain with data-access specific code. This ensures the domain model is designed to solve only domain-related problems and not data-access problems.
Aside from this, POCO support can make it easier to test the behaviour of objects in isolation, without the need for a database, mapping information, or even references to the ORM assemblies. The ability to have "stand-alone" objects can make development significantly easier, because the objects are simple to instantiate and easy to predict.
Additionally, because POCO objects are not tied to a data-source, you can treat them the same, regardless of whether they have been loaded from your primary database, an alternative database, a flat file, or any other process. Although this may not seem immediately beneficial, treating your objects the same regardless of source can make behaviour easy to predict and to work with.
I chose NHibernate for my most recent ORM because of the support for POCO objects, something it handles very well. It suits the Domain-Driven Design approach the project follows and has enabled great separation between the database and the domain.
Being able to switch ORM tools is not a real argument for POCO support. Although your classes may not have any direct dependencies on the ORM, their behaviour and shape will be restricted by the ORM tool and the database it is mapping to. Changing your ORM is as significant a change as changing your database provider. There will always be features in one ORM that are not available in another and your domain classes will reflect the availability or absence of features.
In NHibernate, you are required to mark all public or protected class members as virtual to enable support for lazy-loading. This restriction, though not significantly changing my domain layer, has had an impact on its design.
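For example (a hypothetical entity), every mapped member ends up virtual so NHibernate can generate its lazy-loading proxies:

public class Asset
{
    public virtual int Id { get; protected set; }
    public virtual string DisplayName { get; set; }
    public virtual Asset Parent { get; set; } // lazy-loaded via a runtime proxy
}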

C# Business Object Architecture question regarding Business and DTO objects

Background
We have our own Business Object architecture, a much lighter version of (...and loosely based on, but not actually using...) the "CSLA" business object framework, with similar usage, validation, inclusive DAL etc. All code is generated (stored procs and Business Objects are created using CodeSmith).
The Business Objects are quite feature-rich with regard to getting objects and lists, with filter and sort parameters for returning objects and generic lists.
This architecture may not subscribe to any particular or popular architecture or purism, but it works well for us and has reduced a lot of manual coding.
The one thing we find a lot, especially when integrating with other systems (3rd party, Flash or Silverlight etc.), is the requirement for contextualised "basic objects" or data containers that can be easily serialized and supplied across web services etc. for a specific purpose.
Looking around SO and the web, the term DTO comes up a lot. We have created these basic objects within a Dto namespace; they represent basic or specific versions of Business Objects, but have no features except constructors that accept either a DataRow or a Business Object to populate the "Dto" object.
Questions:
1) Is calling this a "DTO" object correct?
2) Rather than having constructors to provide the data and set the object properties, should this population code be in a different class, some kind of "helper" class?
Any comments on terminology and naming conventions for what I am trying to do?
Thanks
1) Yes.
2) I see no big problem, although you do somewhat limit the use of the DTO. But again, I see no big problem with it. There is a mapping framework you could use to do this for you, which you can find here: http://www.lostechies.com/blogs/jimmy%5Fbogard/archive/2009/01/22/automapper-the-object-object-mapper.aspx
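If you did move the population code out of the constructors, a sketch might look like this (names hypothetical), keeping the DTO a plain container:

// The DTO stays a dumb data container...
public class AssetDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

// ...and a helper (or a mapping framework) owns the population logic.
public static class AssetDtoMapper
{
    public static AssetDto FromBusinessObject(Asset asset) =>
        new AssetDto { Id = asset.Id, DisplayName = asset.DisplayName };
}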
