Should I use AutoMapper from ViewModel to Model objects - C#

I want to do all input validation in a ViewModel.
When the database is accessed, the ViewModel's data must be copied over to the model that is sent to the database.
My concern is that data could be lost because:
AutoMapper is set up incorrectly for certain scenarios, so properties are not copied over to the model,
or AutoMapper is simply not suited for every scenario, for example with overly complex ViewModels.
Are my concerns justified, or should I not worry about this?

AutoMapper is totally fine in this scenario; I use it extensively for exactly this purpose. If you are worried about data loss, should you not be using data annotation validation on the model itself, to make sure you have the required data before persisting or calling some other service?
Also, the only real way to make completely sure that you don't miss anything, whether you use AutoMapper or manual mapping code, is a good set of unit tests.
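For the unit-test point, here is a minimal sketch with AutoMapper and xUnit (the Person/PersonViewModel types are invented for illustration). AssertConfigurationIsValid makes the test fail if any destination property has no mapped source, which is exactly the "property silently not copied" worry from the question:

using AutoMapper;
using Xunit;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class PersonViewModel
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class MappingTests
{
    [Fact]
    public void ViewModel_to_model_mapping_is_complete()
    {
        var config = new MapperConfiguration(cfg =>
            cfg.CreateMap<PersonViewModel, Person>());

        // Throws if any destination property on Person has no matching source,
        // catching the "property not copied over" scenario at test time.
        config.AssertConfigurationIsValid();

        var mapper = config.CreateMapper();
        var model = mapper.Map<Person>(new PersonViewModel { Name = "Ada", Age = 36 });

        Assert.Equal("Ada", model.Name);
        Assert.Equal(36, model.Age);
    }
}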

My concern is that data could be lost because: AutoMapper is set up
incorrectly for certain scenarios, so properties are not copied over
to the model, or AutoMapper is simply not suited for every scenario,
for example with overly complex ViewModels.
I think it is unlikely that you will lose data. For complex
ViewModels you can choose to set specific properties manually instead
of having AutoMapper copy them over. As suggested by @feanz, a good set
of unit tests will provide the assurance you need.
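A hedged sketch of that approach (Order/OrderViewModel and the audit fields are invented names): tell AutoMapper to ignore the properties you want to control, then set them explicitly after mapping so nothing is overwritten or lost by accident.

using System;
using AutoMapper;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public DateTime ModifiedOn { get; set; }   // not posted by the client
    public string ModifiedBy { get; set; }     // not posted by the client
}

public class OrderViewModel
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class OrderMapping
{
    public static Order ToModel(OrderViewModel vm, string currentUser)
    {
        // In real code the configuration is created once and reused.
        var config = new MapperConfiguration(cfg =>
            cfg.CreateMap<OrderViewModel, Order>()
               // These members are excluded from the automatic mapping on purpose...
               .ForMember(d => d.ModifiedOn, opt => opt.Ignore())
               .ForMember(d => d.ModifiedBy, opt => opt.Ignore()));

        var order = config.CreateMapper().Map<Order>(vm);

        // ...and set by hand, so the "complex" parts stay explicit and visible.
        order.ModifiedOn = DateTime.UtcNow;
        order.ModifiedBy = currentUser;
        return order;
    }
}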

Related

Should AutoMapper be used to Map from ViewModel back into Model?

Should AutoMapper be used to take data from ViewModel and save back into a database model?
I know the opposite direction is good software practice: using AutoMapper to extract database models and place them in ViewModels for front-end web applications.
I was reading this general article:
The article was not specifically about AutoMapper, but I want to check whether the opinion quoted below is true.
best way to project ViewModel back into Model
"I believe best way to map from ViewModel to Entity is not to use AutoMapper for this. AutoMapper is a great tool to use for mapping objects without using any other classes other than static. Otherwise, code gets messier and messier with each added service, and at some point you won't be able to track what caused your field update, collection update, etc."
Relocating here: https://softwareengineering.stackexchange.com/questions/387385/should-automapper-be-used-to-map-from-viewmodel-back-into-model

.NET MVC data binding methods

I realize this doesn't necessarily apply to MVC exclusively, but bear with me.
Working with Entity Framework and models, I've read several articles on "best practices": some people claim using repositories and a unit of work is better, others claim that's overkill and that using your models directly in your controllers with LINQ is better, and so on.
Then we have view models and lazy loading, but with LINQ we can also use joins to pull multiple "models" into our data retrieval and fetch whatever we need directly in our controller or helper class.
I realize a lot of this ties back to the "separation of concerns" behind MVC, and that we can create multiple layers to map our data back whichever way we want, which is great. But let's say, for argument's sake, my app runs exclusively on MS SQL with no chance of ever transitioning to another database type: will adding all the additional layers of separation to map data back give me any real benefit? I'm just trying to understand at which point one concludes it's better to do it this way over that way. I know some of this comes down to personal preference, but I'm looking for real-life scenarios where it's easy to conclude that one way is better than the other, and the questions I should ask myself when deciding how many mapping layers I need to get data from my database to my view.
One of the real benefits is when your models or your UI need to change independently of each other. If your view is tied only to a ViewModel instead of your entity, then you can make all of the mapping changes in one place (your controller) instead of needing to go through every view where your entity is used and make changes there. Also, with ViewModels, you get the benefit of combining multiple data sources into a single object. Basically, you get a lot more flexibility in how to implement your UI if you don't tie it directly to database tables.
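To make that concrete, here is a minimal sketch (the entity, repository, and controller names are illustrative, not from the question) of a view model that combines two data sources and is mapped in one place, the controller:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Illustrative entities and repositories.
public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class Order { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }

public interface ICustomerRepository { Customer GetById(int id); }
public interface IOrderRepository { IList<Order> ForCustomer(int customerId); }

// The view model flattens two data sources into exactly what the view needs.
public class CustomerSummaryViewModel
{
    public string CustomerName { get; set; }
    public int OrderCount { get; set; }
    public decimal TotalSpent { get; set; }
}

public class CustomersController : Controller
{
    private readonly ICustomerRepository _customers;
    private readonly IOrderRepository _orders;

    public CustomersController(ICustomerRepository customers, IOrderRepository orders)
    {
        _customers = customers;
        _orders = orders;
    }

    public ActionResult Summary(int id)
    {
        var customer = _customers.GetById(id);
        var orders = _orders.ForCustomer(id);

        // The entity-to-view-model mapping lives in one place (the controller),
        // so a schema change touches this method rather than every view.
        var viewModel = new CustomerSummaryViewModel
        {
            CustomerName = customer.Name,
            OrderCount = orders.Count,
            TotalSpent = orders.Sum(o => o.Total)
        };

        return View(viewModel);
    }
}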

Automapper vs Dapper for mapping

This question is to verify whether the current implementation is the right way to go in terms of best practices and performance. So far, in all my previous companies, I have been using AutoMapper to map relational objects to domain model entities and domain model entities to DTOs. The ORM has been Entity Framework.
In my current company they are using Dapper as the ORM and do not use AutoMapper, as they say Dapper does the mapping for you internally. The way they have structured the project is to create a separate class library project that contains the DTOs and to reference those DTOs in the data access and business layers.
The query result returned by Dapper is mapped internally to the DTOs. These DTOs are then returned to the business layer, and so on.
For example
In the code below, ParticipantFunction is the DTO.
Repository file in the data access layer:
public List<ParticipantFunction> GetParticipantFunctions(int workflowId)
{
    // Update the Action for Participant
    string selectSql = @"SELECT [WFPatFunc_ID] AS WFPatFuncID
                               ,[WFFunction]
                               ,[SubIndustryID]
                               ,[DepartmentID]
                         FROM [dbo].[WF_ParticipantsFunctions]
                         WHERE [DepartmentID] = (SELECT TOP 1 [DepartmentID] FROM [dbo].[WF] WHERE [WF_ID] = @workflowId)";

    return _unitOfWork.GetConnection().Query<ParticipantFunction>(selectSql, new
    {
        workflowId = workflowId
    }).ToList();
}
The reason I have been given by the developers is that AutoMapper would just be overhead and reduce speed, and since Dapper does the mapping internally, there is no need for it.
I would like to know whether the practice they are following is fine and has no issues.
There is no right or wrong here. If the current system works and solves all their requirements, then great: use that! If you have an actual need for something where auto-mapper would be useful, then great: use that!
But: if you don't have a need for the thing that auto-mapper does (and it appears that they do not), then... don't use that?
Perhaps one key point / question is: what is your ability to refactor the code if your requirements change later. For many people, the answer there is "sure, we can change stuff" - so in that case I would say: defer on adding an additional layer until you actually have a requirement for an additional layer.
If you will absolutely not be able to change the code later, perhaps due to lots of public-facing APIs (software as a product), then it makes sense to de-couple everything now so there is no coupling / dependency in the public API. But: most people don't have that. Besides which, Dapper makes no demands on your type model whatsoever other than: it must look kinda like the tables. If it does that, then again: why add an additional layer if you don't need it?
This is more of an architecture question, and there is no single good or bad answer.
Pros of DTOs:
Layering - You are not using the data objects directly, so you can put attributes for mapping and other things not needed in your UI on the data side only. That way you can keep the DTOs in a library that has no dependencies on your data access code. (Note: you could achieve this with fluent mapping too, but this way you can use whichever you like.)
Modification - If your domain model changes, your public interface stays the same. Let's say you add a property to the model: everything you have already built won't suddenly get the new field in its JSON for no reason.
Security - This is why Microsoft started pushing DTOs, if I remember correctly. I think CodePlex (not 100% sure it was them) was using EF entities directly to add data to the database. Someone figured this out and simply expanded the JSON with data he was not allowed to access: for example, if a post has a reference to a user, you could change the role of that user just by adding a new post, because of change tracking (see the sketch after this list). There are ways to protect yourself from this, but security should always be opt-out, not opt-in.
I like to use DTOs when I need to expose the business-logic level through a public interface, for example an API with operations like api/Users/AllUsers or api/Users/GetFilteredUsers.
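To make the security point concrete, here is a hedged sketch of the over-posting problem with ASP.NET MVC model binding (all type and action names are invented): binding a request straight to a tracked entity graph lets a client set fields you never exposed, whereas a DTO only accepts what you intend.

using System.Web.Mvc;

// Illustrative entities: a Post references its author, a tracked User entity.
public class User { public int Id { get; set; } public string Name { get; set; } public string Role { get; set; } }
public class Post { public int Id { get; set; } public string Body { get; set; } public User Author { get; set; } }

// The DTO only exposes the fields a client is actually allowed to send.
public class CreatePostDto { public string Body { get; set; } }

public class PostsController : Controller
{
    // Risky: binding straight to the entity graph. A crafted request such as
    //   Body=hi&Author.Id=7&Author.Role=Admin
    // can flow into change tracking and silently elevate the user's role.
    [HttpPost]
    public ActionResult Create(Post post)
    {
        // _db.Posts.Add(post); _db.SaveChanges();   // persistence omitted in this sketch
        return RedirectToAction("Index");
    }

    // Safer: bind to the DTO and build the entity server-side.
    [HttpPost]
    public ActionResult CreateSafe(CreatePostDto dto)
    {
        var post = new Post { Body = dto.Body };     // Author is set from the logged-in user, not from the request
        // _db.Posts.Add(post); _db.SaveChanges();
        return RedirectToAction("Index");
    }
}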
No DTOs
Performance - Not using DTOs will generally run faster: there is no extra mapping step. Projections help here too, and you can really optimize once you know exactly what you need (see the projection sketch after this list).
Speed of development and a smaller code base - Sometimes a large architecture is overkill and you just want to get things done, without spending most of your time copy-pasting properties.
More flexibility - This is the flip side of the security point: sometimes you want the same API to do more than one thing, for example letting the UI decide what it wants to see from a big object, as with select and expand. (Note: this can be done with DTOs, but if you have ever tried to implement expand with DTOs, you know how tricky it can get.)
I use this approach when I need to expose the data access level to the client, for example with Breeze or JayData and/or OData, with APIs like api/Users and api/Users?filter=(a => a.Value > 5).
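For the projection point above, a small sketch (assuming an EF-style IQueryable source; the type and member names are illustrative): shaping the result inside the query means only the needed columns are read, and no separate mapping step runs afterwards.

using System.Collections.Generic;
using System.Linq;

public class User { public int Id { get; set; } public string Name { get; set; } public bool IsActive { get; set; } }

// Small summary shape for the query result; the name is an assumption.
public class UserSummary { public int Id { get; set; } public string Name { get; set; } }

public static class UserQueries
{
    // 'users' would typically be an EF DbSet exposed as IQueryable<User>.
    public static List<UserSummary> ActiveUserSummaries(IQueryable<User> users)
    {
        return users
            .Where(u => u.IsActive)
            // The projection runs inside the query, so only these two columns are
            // pulled from the database and there is no extra mapping pass afterwards.
            .Select(u => new UserSummary { Id = u.Id, Name = u.Name })
            .ToList();
    }
}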

Dal (with Entity Framework) and Model layers into MVC

First of all, I use EF in the DAL (a project separate from the MVC project, in the same solution). Are the models generated from EF's EDMX file the actual models of the Model layer? If so, how do I access these models from MVC's View layer? I think it's wrong to access the data layer directly from the view to work with these models, but if I make a Model layer with "my models" and translate the DAL's models into my models... it will be duplicated code.
I'm probably getting something wrong, but most of the examples use the code-first approach, and I can't figure this out.
You are right in your idea of not accessing the models in your DAL directly from your presentation layer.
To avoid duplicating code when translating your DAL objects into the models used by your views, you could use something like AutoMapper, which is designed to do the heavy lifting for you in exactly that scenario.
I think it's wrong to access the data layer directly from the view to work with these models...
That's right; the appropriate approach is to use a view model.
When you have dozens of distinct values to pass to a view, the same flexibility that allows you to quickly add a new entry, or rename an existing one, becomes your worst enemy. You are left on your own to track item names and values; you get no help from Microsoft IntelliSense and compilers. The only proven way to deal with complexity in software is through appropriate design. So defining an object model for each view helps you track what that view really needs. I suggest you define a view-model class for each view you add to the application.
-- "Programming Microsoft ASP.NET MVC" by Dino Esposito
A ViewModel provides all the information your view needs to render itself. To transfer data between a ViewModel and a business entity, you can use AutoMapper.
Don't worry about duplication; these are two different concepts and should be kept separate from each other. That makes your application easier to maintain.
I may be mistaken, but to me, generating from the EDMX provides you with the DbContext, which can be considered the DAL, and the entities, which can be considered the Model.
So you might manipulate entity instances directly as your business objects. Manipulation of the database through the DbContext should live in the BLL. Additionally, you might implement DTOs where needed.
This, of course, assumes you want to use Entity Framework code generation. Other options, like using POCOs, might be more relevant depending on your overall architecture.
I use a view model in my local project and keep the models in the other project. Then I put references to the models I'm going to use on the page in my view model, and reference the view model on my page. Let me know if that sounds like something you want to do and I can edit in some code.
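For example (the namespaces and type names below are assumptions, not taken from the question), the view model lives in the MVC project and simply references the model types from the other project, plus any UI-only state:

using MyApp.Models;   // assumed namespace of the separate models project (Product, Category)

namespace MyApp.Web.ViewModels
{
    public class ProductPageViewModel
    {
        // Model types referenced from the shared project.
        public Product Product { get; set; }
        public Category Category { get; set; }

        // UI-only state that does not belong in the models project.
        public bool ShowAdminActions { get; set; }
    }
}

The Razor view then declares @model MyApp.Web.ViewModels.ProductPageViewModel and reads both the shared model data and the UI-only flags from that one object.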

What to do when Model is exactly the same as ViewModel?

I want to know what is best practice. I have been told to always create ViewModels and never use core Model classes for passing data to Views.
That makes sense.
It lets me separate things out. But what if the Model is exactly the same as the ViewModel? Should I create another class or just use the Model directly?
I feel that I should create a separate class; I just want to know what the experts say.
You should definitely still create a separate view model, even if it is identical to your domain entity. The view model and the domain entity should be completely independent, i.e. you should be able to change one without the other needing to know or care about the change. Your view model should represent your view and your domain entity should... well... represent your domain entity. They might be identical now but, if either changes, the change in one should not affect the other.
What if your domain model suddenly changes and now has fields that are no longer relevant to your view model? If they aren't separate, you have a problem. Or, worse (and probably more likely), what if your view model suddenly needs more information, from a totally distinct entity? Are you going to break the encapsulation of classes in your domain model with this totally irrelevant information, just to make it accessible in your view?
Keep your solution decoupled and flexible. Use view models.
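A minimal sketch of what that looks like (Product and ProductViewModel are invented names): the duplication costs a few lines today, and gives view-only concerns a home later.

// Domain entity and view model happen to be identical today.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class ProductMappings
{
    // Trivial mapping for now, by hand or via AutoMapper.
    public static ProductViewModel ToViewModel(Product p) =>
        new ProductViewModel { Id = p.Id, Name = p.Name, Price = p.Price };
}

// When the view later needs something display-only (a formatted price, a permissions flag),
// it is added to ProductViewModel and the domain entity stays untouched.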
I would suggest creating a ViewModel anyway. In this particular case it would be the same as the model, but it works like a "bridge" between the UI and the model over which the data is transmitted.
It is also good for scalability, because it's very likely that you will want to add something UI-specific to your view model, so it will differ from the model itself more and more.
So the general advice is: create it anyway, even if the two are the same right now, because it helps you scale when you need it later.
But what if the Model is exactly the same as the ViewModel? Should I
create another class or just use it?
If it's exactly the same you don't need a view model of course. But that is a pretty rare situation.
