I'm trying to map DTOs to entities. When I searched online I noticed lots of references to AutoMapper, and just as much feedback about how it is not a good way to do this.
I also couldn't find any recently dated sources; one question complaining about the lack of "new" sources is itself 4 years old.
One of the sources I found that looked really promising was this:
https://rogerjohansson.blog/2013/12/01/why-mapping-dtos-to-entities-using-automapper-and-entityframework-is-horrible/
but I couldn't get it working either.
So, the situation is basically this.
I'm trying to build an integration for orders using WCF (a whole other matter).
I have an Order DTO, and related DTOs for OrderLine, Customer, CustomerAddress and OrderAddress. More will follow later.
These essentially mirror database tables: the main "table" is Order, which acts as the header, and OrderLine and the others are self-explanatory. I'm sure everyone has come across something like this before.
I created DTOs matching their counterpart entities.
What I'm told to do is:
a) Convert (or, as the terminology goes, map) these DTOs to entities
b) Add the entity to the DbContext and call SaveChanges.
So, can anyone point me in a good direction for solving this?
We have a project similar to yours. Instead of WCF we use model classes from MVC, but in the end it's the same idea: converting one object into another. I couldn't disagree more with the criticism of AutoMapper. At first we had the same doubts about its efficiency, but we finally decided to give it a try. Then we faced some of the problems the article points out (especially collections of elements). Luckily, AutoMapper gives you enough flexibility to handle those special mapping conditions:
For collections, we use custom mappings, which let us detect which elements are new, which to update and which to remove (see the sketch after this list).
For references, we follow the rules of Entity Framework: map the FK_Id value rather than the referenced object.
If, for some reason, we need logic in the mapping that depends on other reference entities, we use the dependency resolver (only in extreme cases, as we don't like the idea of resolving dependencies there).
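A minimal sketch of what that configuration can look like with a recent AutoMapper version (the OrderDto/Order/OrderLine names and properties are illustrative, not from the original question):

using System.Linq;
using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<OrderLineDto, OrderLine>();

    cfg.CreateMap<OrderDto, Order>()
        // reference rule: copy the FK value, leave the navigation property alone
        .ForMember(d => d.CustomerId, o => o.MapFrom(s => s.CustomerId))
        .ForMember(d => d.Customer, o => o.Ignore())
        // collection rule: reconcile lines by Id instead of replacing the whole collection
        .ForMember(d => d.Lines, o => o.Ignore())
        .AfterMap((src, dest, ctx) =>
        {
            // remove entity lines that no longer appear in the DTO
            var incomingIds = src.Lines.Select(l => l.Id).ToList();
            foreach (var line in dest.Lines.Where(l => !incomingIds.Contains(l.Id)).ToList())
                dest.Lines.Remove(line);

            // add new lines, update existing ones in place
            foreach (var lineDto in src.Lines)
            {
                var existing = dest.Lines.FirstOrDefault(l => l.Id == lineDto.Id && lineDto.Id != 0);
                if (existing == null)
                    dest.Lines.Add(ctx.Mapper.Map<OrderLine>(lineDto));
                else
                    ctx.Mapper.Map(lineDto, existing);
            }
        });
});

var mapper = config.CreateMapper();
// usage: map onto an entity that is already loaded and tracked by the DbContext
// mapper.Map(orderDto, trackedOrder);

The important part is mapping onto a tracked entity instance, so EF sees the adds/updates/removes rather than a brand-new object graph.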
I think AutoMapper is easy enough to learn that you can map your objects in a matter of minutes, and it gives you all the tools for the special cases.
The article you posted explains how "Entity Framework does not like AutoMapper", but it's really about how closely you follow the rules of EF and AutoMapper. Entity Framework is a huge ORM and, as such, you need to follow some rules (very strict rules in some cases). Of course, using AutoMapper with the basic examples will break some of them, but once you get used to it, the rules are easy to follow.
To sum up: AutoMapper saves you a lot of time that you can invest in customising the trickier configurations. Otherwise you will have to write LINQ projections by hand, which in most cases will take much longer. For example, the collection problem is solved by detecting the add/edit/delete based on Ids, which can also be handled with AutoMapper through custom mappers.
This question is to verify whether the current implementation is the right way to go in terms of best practices and performance. In all my previous companies I used AutoMapper to map relational objects to domain model entities and domain model entities to DTOs, with Entity Framework as the ORM.
My current company uses Dapper as the ORM and does not use AutoMapper, on the grounds that Dapper does the mapping for you internally. The project is structured with a separate class library containing the DTOs, which is referenced from both the data access and business layers.
The query results returned by Dapper are mapped internally to the DTOs. These DTOs are returned to the business layer, and so on.
For example
In the code below, ParticipantFunction is a DTO.
Repository method in the DataAccess layer:
public List<ParticipantFunction> GetParticipantFunctions(int workflowId)
{
    // Load the participant functions for the workflow's department
    string selectSql = @"SELECT [WFPatFunc_ID] AS WFPatFuncID
                               ,[WFFunction]
                               ,[SubIndustryID]
                               ,[DepartmentID]
                           FROM [dbo].[WF_ParticipantsFunctions]
                          WHERE [DepartmentID] = (SELECT TOP 1 [DepartmentID] FROM [dbo].[WF] WHERE [WF_ID] = @workflowId)";

    return _unitOfWork.GetConnection().Query<ParticipantFunction>(selectSql, new
    {
        workflowId = workflowId
    }).ToList();
}
The reasoning I have been given by the developers is that AutoMapper would just be overhead and slow things down, and since Dapper does the mapping internally, there is no need for it.
I would like to know if the practice they are following is fine and has no issues.
There is no right or wrong here. If the current system works and solves all their requirements, then great: use that! If you have an actual need for something where auto-mapper would be useful, then great: use that!
But: if you don't have a need for the thing that auto-mapper does (and it appears that they do not), then... don't use that?
Perhaps one key point / question is: what is your ability to refactor the code if your requirements change later. For many people, the answer there is "sure, we can change stuff" - so in that case I would say: defer on adding an additional layer until you actually have a requirement for an additional layer.
If you absolutely will not be able to change the code later, perhaps because of lots of public-facing APIs (software as a product), then it makes sense to de-couple everything now so there is no coupling / dependency in the public API. But most people don't have that constraint. Besides, Dapper makes no demands on your type model whatsoever other than: it must look kinda like the tables. If it does that, then again: why add an additional layer if you don't need it?
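For reference, the kind of type Dapper expects in the question's Query<ParticipantFunction>(...) call is just a plain class whose property names match the selected columns/aliases; a sketch based on that query (the property types are guesses):

// Plain class matching the columns/aliases in the SELECT above; Dapper fills it by name.
public class ParticipantFunction
{
    public int WFPatFuncID { get; set; }     // aliased from [WFPatFunc_ID]
    public string WFFunction { get; set; }
    public int SubIndustryID { get; set; }
    public int DepartmentID { get; set; }
}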
This is more of an architecture problem and there is no good or bad.
Pros of DTOs:
Layering - You are not exposing your data objects directly, so you can use attributes for mapping and other infrastructure concerns without them leaking into your UI. That way the DTOs can live in a library that has no dependencies on your data access code. (Note: you could achieve the same with fluent mapping, but this way you can use whichever you like.)
Modification - If your domain model changes, your public interface stays the same. Say you add a property to the model: everything you have already built won't suddenly start getting the new field in its JSON for no reason.
Security - This is why Microsoft started pushing DTOs, if I remember correctly. I think it was CodePlex (not 100% sure) that was using EF entities directly to add data to the database. Someone figured this out and simply expanded the posted JSON with data he was not allowed to touch; for example, if a post has a reference to a user, you could change that user's role just by adding a new post, because of change tracking. There are ways to protect yourself from this, but security should always be opt-out, not opt-in. (See the sketch below.)
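A hypothetical illustration of that over-posting scenario (the User/UserDto types are made up for the example):

// Entity bound directly to the request: a client can sneak "Role" into the posted JSON
// and change tracking will happily persist it.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Role { get; set; }     // must never be settable from the outside
}

// DTO bound instead: the only fields that exist are the ones the client may send.
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}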
I like to use DTOs when I need to expose the BL level through a public interface, for example an API with system operations like api/Users/AllUsers or api/Users/GetFilteredUsers.
No DTOs
Performance - Not using DTOs will generally run faster: there is no extra mapping step. Projections help here too, but you can really optimise when you know exactly what you need.
Speed of development and a smaller code base - Sometimes a large architecture is overkill and you just want to get things done, rather than spending most of your time copy-pasting properties.
More flexibility - This is the flip side of the security point: sometimes you want to use the same API to do more than one thing, for example letting the UI decide which parts of a large object it wants to see, as with select and expand. (Note: this can be done with DTOs, but if you have ever tried to implement expand with DTOs you know how tricky it gets.)
I use this when I need to expose the data access level to the client, e.g. when using Breeze or JayData and/or OData, with APIs like api/Users and api/Users?filter=(a=>a.Value > 5).
While we mostly use fluent configuration for our code-first POCOs, we have found it useful to use data annotations for things like the table name, PKs, etc. since it makes it easier for non-EF components that don't have a reference to the ObjectContext to interact with these entities.
In our experience, it seems that the two configuration styles can be mixed freely, with fluent configuration overriding DataAnnotations. Is this documented anywhere? Is there any risk to doing this mixed configuration?
We are currently using EF 4.3.1
You can use Data Annotation attributes and Fluent API at the same time. Entity Framework gives precedence to Fluent API over Data Annotations attributes.
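A minimal sketch of that behaviour (entity and property names are illustrative): the same property is configured with both an attribute and the Fluent API, and the fluent value is the one that ends up in the model.

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }

    [MaxLength(200)]            // Data Annotation asks for 200
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent API asks for 100 - this is what EF actually uses for the column,
        // because fluent configuration overrides the attribute.
        modelBuilder.Entity<Blog>()
                    .Property(b => b.Title)
                    .HasMaxLength(100);
    }
}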
I personally haven't run into any issues mixing the code-first fluent API and data annotations. I also wondered if there would be any crossover pain, and I can honestly say I have yet to find any. Here are a few references to case studies on the subject to ease your mind.
(Direct from the EF team)
http://msdn.microsoft.com/en-us/data/jj591583.aspx
(Part 1)
http://www.codeproject.com/Articles/476966/FluentplusAPIplusvsplusDataplusAnnotations-plusWor
I don't think it's a risk, as both approaches have equivalent counterparts for most things.
But personally, when I run into some sort of issue structuring my entities, the first thing I do is remove the annotations, if any, and move everything to fluent configuration.
Over time that has led me to use pretty much pure fluent configuration (which also frees my objects of any ties to the Db 'state of mind')...
IMO it is 'safer', but only in the sense that you can do more and control things exactly as you'd want them. It also helps with keeping things consistent and in one place.
In trying to separate my domain layer from the GUI and looking into all the different ways to do that, one thing I keep asking is: why is this so difficult? Why all the extra code for data objects, and then all the extra mapping of properties, copying values in and out, etc.? Shouldn't there be an easier way?
Then I remembered that when I used to write small little db apps using MS Access, Access had the concept of a Dynaset. Basically a Dynaset is a view, just like a SQL Server view, except it is an updateable view. An MS Access form would be based on the view/Dynaset and therefore would not have to know the details of all the individual tables involved. Sounds like the data objects pattern to me. Now, since Access has had this for two decades, shouldn't there be a similar Dynaset/view/mapping tool for Entity Framework, one that abstracts the entities away from the presentation? Is there one I am not aware of? Third party?
Thoughts on this?
If I understand you correctly, you may be looking for Entity Framework with POCO entities. You can find templates for them in the online gallery of templates (when you Add New Item to the project). Alternatively you can right-click in your .edmx design view, select "Add code generation item" and pick the Fluent Generator.
These methods create multiple files instead of the default all-in-one EF generated file: one contains the DbContext (as opposed to ObjectContext), one contains only the entities (as regular C# objects, no attributes or anything, just plain objects) and the last contains the generated mapping in the form of fluent rules.
At that point you can decouple the entities file from its template and move it to another assembly. And voila, you have entities independent of the EF infrastructure. You just hand these entities to the context like you would before, and it does the mapping by itself (a rough sketch of that shape is below).
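Roughly the shape you end up with (names are illustrative, and this is a hand-written sketch rather than the generated output):

using System.Data.Entity;

// The entity stays plain - no attributes, no EF base class - so it can live in a
// separate assembly with no EF reference.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The context and the fluent mapping stay in the data access assembly.
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>().ToTable("Customers");
        modelBuilder.Entity<Customer>().HasKey(c => c.Id);
    }
}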
Alternatively you can use a tool like AutoMapper, but then you'll have to provide the mapping manually, which is a lot of work, though it may be a good fit in some cases.
Good design requires work. If it was easy, everyone would do it automatically. After all, everyone wants to do the least amount of work possible.
All the things you are complaining about are part of the good design process, and there is no getting around them if you want a good design.
If you want to take shortcuts, then by all means skip them. It's your code; nothing requires you to do things any specific way.
Access can do a lot of things because it's a desktop application, not a web application. Web applications are fundamentally different from desktop applications in how you design them, how they work, and what issues you face with them. For instance, the fact that you have a stateless environment and cannot keep a result set from request to request makes many of the things people take for granted in Access impossible in a web app.
Specifically, if you want to use views, you can do so. Views are updateable if they are properly designed (though they typically require update statements that only affect one table in the view). EF can work with views as well, but it has a lot of quirks you must deal with.
The data mapper pattern has emerged as a common pattern in web design because it's the easiest and most straightforward way to get a clean separation of concerns between layers and/or tiers. I suggest you find ways to make it work within your development process.
It may also be that MVC is not the most appropriate framework for you to use. It sounds like you want to build web apps the way you built them in Access, in which case Visual Studio LightSwitch may be a better choice for you.
http://msdn.microsoft.com/en-us/library/ff851953.aspx
We are currently developing a new WinForms application (C# .NET 3.5).
The project is currently 40% complete; however, we're spending a considerable amount of time writing the DAL implementation (CRUD). We now want to move to NHibernate as an ORM solution to take advantage of its many benefits and to relieve some of the DAL coding work.
We would much rather concentrate on solving business problems.
At the current time we plan to migrate to NHibernate and Fluent NHibernate, but have a few questions.
Is the change to NHibernate worth the steep learning curve? From a performance point of view do you think NHibernate would be a more sensible option than continuing to write our own?
We currently employ "soft delete" and read data through views in the database which have a field "Deleted = null" (Deleted is a TIMESTAMP). From my understanding, when we map each class we can also specify a "Where" clause which means we no longer need any "filtering" views in our database? Is that correct?
In relation to the question above. We also have a "Purge" function that can delete records from the database. Can we employ "soft delete" and still have a purge function?
Can we persist BLOBS to the database through NHibernate?
What would be the best migration strategy for us? How would you get started on a NHibernate migration, keeping in mind that the application has not been released and we are open to having the database structure changed. Ideally I am thinking to map each of our business objects and then have NHibernate generate the schema for us, does this sound like a good way to go?
Can NHibernate work with Lookup data? We currently read lookup data into a global dictionary that we use through the life of the application. Can we still do this with NHibernate.
Apologies if some of these questions are elementary, I am still trying to get a handle on NHibernate.
(Answers to your questions below, referencing the original question numbers)
Going to NHibernate is absolutely worth the learning curve - we did it at my current job, and we've never looked back. NHibernate in Action is an excellent book to start with.
You can easily include a 'Where' clause as part of your mapping. We use it to filter some common-use tables and views in our NHibernate mappings.
For your purge function, just add a secondary mapping that reverses the where clause (or one without the flag filtered at all) and you're golden (we sometimes have several maps for the same entities for data shaping); see the sketch below.
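A minimal Fluent NHibernate sketch of that filtered mapping (class and column names are illustrative):

using FluentNHibernate.Mapping;

public class OrderMap : ClassMap<Order>
{
    public OrderMap()
    {
        Table("Orders");
        Where("Deleted is null");     // soft-delete filter applied to every query for Order
        Id(x => x.Id);
        Map(x => x.OrderDate);
        Map(x => x.Deleted);
    }
}

For the purge path you would typically map a second class over the same table without the filter, or issue the delete through HQL/SQL.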
Re BLOBs etc., here's an article on them by Ayende, and one on Calyptus.
Migration is probably a bigger question. Personally, we use a repository pattern with an interface for the repository (for unit testing and mocks), a concrete implementation of the repository, and our model (POCOs). We keep NHibernate-specific code out of everything except the repositories, to reduce dependencies and to aid in testing (rough sketch below).
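A rough sketch of that layout (type names are illustrative):

using NHibernate;

// The interface is all the business layer ever sees.
public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// The only place NHibernate's ISession appears is the concrete implementation.
public class NHibernateOrderRepository : IOrderRepository
{
    private readonly ISession _session;

    public NHibernateOrderRepository(ISession session)
    {
        _session = session;
    }

    public Order GetById(int id)
    {
        return _session.Get<Order>(id);
    }

    public void Save(Order order)
    {
        _session.SaveOrUpdate(order);
    }
}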
Again, look at NHibernate in Action for some great info on the product, as well as NHForge.org, TekPub's NHibernate series, etc. (I even have some tutorials on my blog, linked in my profile).
For lookup data, NHibernate works fine, and it also supports caching.
I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:
Oracle backend
Rapid development
(L)GPL-free
Free
I'm reasonably happy with DataSets but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class.
I'm tinkering with reflection right now, but in the meantime I have two questions:
Are there any problems I overlooked with this solution?
Are there any other approaches you would recommend to convert DataSets to POCOs?
Thanks in advance.
There's no correct answer, though you'll find people who will try to give you one. Some things to keep in mind:
Since you can't get the advantages of EF or LINQ to SQL, don't worry about using the IQueryable interface; you won't be getting its main advantage. Of course, once you've got your POCOs, LINQ to Objects will be a great way of dealing with them! Many of your repository methods will return IQueryable<YourType>.
As long as you have a good repository to return your POCOs, using reflection to fill them out is a good strategy, at least at first. If you have a well-encapsulated repository, I say again: you can always switch out the reflection-filled entity code for something more efficient later, and nothing in your BL will know the difference (a rough sketch of the reflection approach is below). If you make yourself dependent on straight reflection (not optimized reflection like NHibernate uses), you might regret the inefficiency later.
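A minimal sketch of that reflection-based fill, deliberately naive and uncached, assuming column names match property names (the class name is illustrative):

using System;
using System.Collections.Generic;
using System.Data;

public static class DataTableMapper
{
    public static List<T> ToList<T>(DataTable table) where T : new()
    {
        var props = typeof(T).GetProperties();
        var result = new List<T>();

        foreach (DataRow row in table.Rows)
        {
            var item = new T();
            foreach (var prop in props)
            {
                // copy only columns whose name matches a writable property
                if (prop.CanWrite
                    && table.Columns.Contains(prop.Name)
                    && row[prop.Name] != DBNull.Value)
                {
                    prop.SetValue(item, row[prop.Name], null);
                }
            }
            result.Add(item);
        }
        return result;
    }
}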
I would suggest looking into T4 templates. I generated entity classes (and all the code to populate and persist them) from T4 templates a few months ago, for the first time. I'm sold! The code in that first T4 template is pretty horrible, but it spits out nice, consistent code.
You will have to have a plan for your repository methods, and closely monitor all the methods your team creates. You can't have a general .GetOrders() method, because it will fetch every order every time, and then your LINQ to Objects will look nice but will be covering up some bad data access! Have methods like .GetOrderById(int OrderID) and .GetOrderByCustomer(int CustomerID). Make sure each method that returns entities uses at least an index in the DB. If the basic query returns some wasted records, that's fine, but it can't do table scans and return thousands of wasted records.
An example:
var orders = from o in rOrders.GetOrderByCustomer(CustID)
             where o.OrderDate > PromoBeginDate
             select o;
In this example, all the orders for a customer are retrieved just to get some of them. But there won't be a huge amount of waste, and CustomerID should certainly be an indexed field on Orders. You have to decide whether this is acceptable, or whether to add a date restriction to your repository, either as a new method or by overloading other methods. There's no shortcut to this; you have to walk the line between efficiency and maintaining your data abstraction. You don't want to have a method in your repository for every single data inquiry in your entire solution.
Some recent articles I've found where people are wrestling with exactly how to do this:
http://mikehadlow.blogspot.com/2009/01/should-my-repository-expose-iqueryable.html
http://www.west-wind.com/WebLog/posts/160237.aspx
Devart dotConnect for Oracle supports the Entity Framework; you could then use LINQ to Entities.
Don't worry about using reflection to build DTOs from DataSets; it works great.
One area of pain will be implementing IComparer for each business object. Only load the minimum data required at the presentation layer; I burnt my fingers badly on in-memory sorting.
Also, plan in advance for lazy loading on DTOs.
We wrote our own generic library to convert DataTables/DataRows into entity collections/entity objects, and it works pretty fast.