We are developing a large MVC project and we intend to use HttpSessionStateWrapper (as well as HttpRequestWrapper, HttpResponseWrapper, etc.) to add extended functionality to these objects. It would be things like session messages, additional collections, HTML metadata on the response - manageable from controllers and accessible in the views when needed.
I have done this in a smaller project and it generally worked well, except for some casting issues here and there, which can be worked around by not using the wrappers outside controllers or, eventually, views. Every controller would be a custom controller with code like this:
public class Controller : System.Web.Mvc.Controller
{
    public new CustomHttpResponse Response
    {
        get
        {
            return (CustomHttpResponse)HttpContext.Response;
        }
    }

    public new CustomHttpRequest Request
    {
        get
        {
            return (CustomHttpRequest)HttpContext.Request;
        }
    }

    // etc...
}
The ContextWrapper would be created in a custom MvcHandler; the response, request and session wrappers would be created by, and taken from, the ContextWrapper.
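For illustration, one of the wrappers would look roughly like this (a minimal sketch; the metadata collection is only an example of the kind of member we would add):

using System.Collections.Generic;
using System.Web;

public class CustomHttpResponse : HttpResponseWrapper
{
    public CustomHttpResponse(HttpResponse response)
        : base(response)
    {
        HtmlMetadata = new Dictionary<string, string>();
    }

    // Example of an added collection: HTML metadata set by controllers, read by views
    public IDictionary<string, string> HtmlMetadata { get; private set; }
}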
Is it a good policy to use wrappers to extend functionality, or were they intended only for creating test mocks?
I've never worked on a .NET Core project before, but I have a history with .NET, including MVC and Entity Framework. I'm working on a new .NET Core project which has five solution folders: EHA.PROJ.API, EHA.PROJ.DTO, EHA.PROJ.Repository, EHA.PROJ.Repository.Test and EHA.PROJ.Web. The EHA.PROJ.DTO folder has a number of files such as CategoryDTO.cs, which looks like this:
namespace EHA.PROJ.DTO
{
    public class CategoryDescDTO
    {
        public int CategoryRef { get; set; }
        public string CategoryName { get; set; }
    }
}
I'm looking to set up a mapping arrangement to get the data from the EHA.PROJ.DTO classes into the model classes in the Models folder of my EHA.PROJ.Web project. I've been browsing because I've never done anything like this before; I've previously worked with data from a DAL folder using Entity Framework, with the connection done through connection strings. I'm guessing there must be some process to map the data in my DbContext and connect the classes in both projects. I did find some information on AutoMapper but was unsure how to implement it.
This arrangement with .NET Core is new to me, so if anyone can help with any examples or point me in the right direction, I would be grateful.
Your first problem is having your entities in your web project. Right off the bat, you have tight coupling between the web project and your data layer, which then pretty much negates the point of all your other layers: DTO, repository, etc. You want to move your entities and context out into a true data layer (i.e. a class library project separate from your web project).
Then, you want to decide how far your data layer should extend. If the API is to feed the Website, then you want to actually remove all dependencies on the data layer from the web project. Your DTO project would be shared between the API and Web projects and your API would send/receive your DTOs, mapping back and forth from your entities under the hood.
However, if you're going to do that, then the repository project should just go away entirely. Just have your API work directly with EF and your entities. Your abstraction is the API itself; there is no need for another. The only reason to have the repository layer is if both the API and the Web project will directly utilize the repositories, which isn't a very good pattern anyway. You'll inevitably end up with a bunch of duplicated logic specific to each project.
Simply put, the repository pattern is superfluous when using an ORM like EF. The ORM is your data layer. You're simply using a DAL provided by a third party, rather than one you created yourself. The repository pattern only makes sense when working directly with SQL using something like ADO.NET. Otherwise, get rid of it.
Having an API is enough of an abstraction, if your goal is simply to hide the data layer. The website knows nothing of the underlying data source, and an API is really just a service layer that returns JSON over HTTP rather than object instances directly, i.e. the API is essentially your "repository" layer.
The situation can be improved even further by moving to a microservices-based architecture. With that, you essentially have multiple small, self-contained APIs that each work with just one part of your domain or piece of functionality. Each can utilize EF directly, or an entirely different ORM, or even an entirely different stack. You could have APIs built on Node.js or Python, etc. The website simply makes requests to the various services to get the data it needs and doesn't know or care how those services actually work.
I have been using AutoMapper for quite some time in .NET Core projects due to its ease of use and built-in dependency injection support.
Install the packages from the Package Manager console:
Install-Package AutoMapper
Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection
Register it in Startup.cs, in the ConfigureServices method:
services.AddAutoMapper(typeof(Startup));
Create a class to hold your mappings, e.g. MappingProfile.cs. By inheriting from AutoMapper's Profile class, you can define your mappings:
public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Operator, OperatorDto>().ReverseMap();
    }
}
The above mapping tells AutoMapper that Operator can be mapped to OperatorDto and OperatorDto can be mapped to Operator.
In your controller, you can inject an IMapper:
private readonly IMapper _mapper;

public OperatorsController(IMapper mapper)
{
    _mapper = mapper;
}
and map values like below:
var dto = _mapper.Map<OperatorDto>(op); // Map op object to dto
var op = _mapper.Map<Operator>(dto); // Map dto to op object
AutoMapper also offers custom mappings, should you need them.
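For instance, if a property name differs between the entity and the DTO, the mapping can be configured explicitly inside the profile constructor (the CategoryDesc entity and its Name property below are only assumptions used to illustrate the syntax):

CreateMap<CategoryDesc, CategoryDescDTO>()
    .ForMember(dest => dest.CategoryName,
               opt => opt.MapFrom(src => src.Name)); // map a differently named source property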
While it is very easy to perform mappings with AutoMapper, you do need to learn the framework.
I believe it is worth the effort to learn it as it will save you a lot of time writing mapping code in the future.
This article is a good reference to start: https://buildplease.com/pages/repositories-dto/
My suggestion is to have a DTO assembler that maps your model to the DTO object. So, you start with your DTO class:
namespace EHA.PROJ.DTO
{
    public class CategoryDescDTO
    {
        public int CategoryRef { get; set; }
        public string CategoryName { get; set; }
    }
}
Then build the assembler:
public class CategoryDescAssembler {
    public CategoryDescDTO WriteDto(CategoryDesc categoryDesc) {
        var categoryDescDto = new CategoryDescDTO();
        categoryDescDto.CategoryRef = categoryDesc.CategoryRef;
        categoryDescDto.CategoryName = categoryDesc.CategoryName;
        return categoryDescDto;
    }
}
Now you implement the service to do all the work required to get the DTO object:
public class CategoryDescService : ICategoryDescService {
    private readonly IRepository<CategoryDesc> _categoryDescRepository;
    private readonly CategoryDescAssembler _categoryDescAssembler;

    public CategoryDescService(IRepository<CategoryDesc> categoryDescRepository, CategoryDescAssembler categoryDescAssembler) {
        _categoryDescRepository = categoryDescRepository;
        _categoryDescAssembler = categoryDescAssembler;
    }

    public CategoryDescDTO GetCategoryDesc(int categoryRef) {
        var categDesc = _categoryDescRepository.Get(x => x.CategoryRef == categoryRef);
        return _categoryDescAssembler.WriteDto(categDesc);
    }
}
With the interface looking like this:
public interface ICategoryDescService
{
    CategoryDescDTO GetCategoryDesc(int categoryRef);
}
You would then need to add the service to your Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddTransient<ICategoryDescService, CategoryDescService>();
}
Now you can call your service from your controller.
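As a rough sketch of that last step (the controller and action names are just placeholders), the service is injected and used like any other dependency:

using Microsoft.AspNetCore.Mvc;

public class CategoriesController : Controller
{
    private readonly ICategoryDescService _categoryDescService;

    public CategoriesController(ICategoryDescService categoryDescService)
    {
        _categoryDescService = categoryDescService;
    }

    public IActionResult Details(int categoryRef)
    {
        // Fetch the DTO through the service and hand it to the view
        CategoryDescDTO dto = _categoryDescService.GetCategoryDesc(categoryRef);
        return View(dto);
    }
}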
I have noticed that, while the main page is loading, several controllers are instantiated (I think because the main page is built from several parts). The controllers instantiate API classes to query some data through them. I was wondering how and where I could share the same API class instance between them.
I can imagine code like this:
class HomeController : Controller
{
    private MyApi Api;

    public HomeController()
    {
        this.Api = /* get the previous MyApi instance from somewhere */;
        if (this.Api == null) // 1st time
        {
            this.Api = new MyApi();
            // put this instance somewhere to share it between controllers
        }
    }
}
This "somewhere" is not a session, because next page load needs another MyApi instance. It must go to an object property which remains intact during the whole page load process, but is dismissed when the html result is generated. It must be really a simple thing, but I really don't know where it is :( Could somebody help me?
You can consider using the Microsoft Unity dependency injection container in your application.
Using the Unity dependency injector, you will be able to inject instances of the MyApi class into any controller and avoid writing checks like if (this.Api == null), as well as managing instances in session- or application-level variables, which makes the code dirty.
For the specific requirement that the object remain intact during the whole page load but be discarded once the HTML result is generated, you can configure the injected object with a "scoped" lifetime, meaning the object will be created once per request.
Here is a link on configuring dependency injection in an ASP.NET Core application:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.2
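As a minimal sketch of the scoped-lifetime idea with the built-in ASP.NET Core container (the same applies if you plug Unity in; MyApi is the class from the question):

// In Startup.ConfigureServices: one MyApi instance per HTTP request
services.AddScoped<MyApi>();

// In any controller: the instance is supplied by the container
public class HomeController : Controller
{
    private readonly MyApi _api;

    public HomeController(MyApi api)
    {
        _api = api; // same instance as in the other controllers handling this request
    }
}

Every controller resolved during a single request receives the same MyApi instance, and a fresh one is created for the next request.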
I have seen many posts on how to set up a session per request in ASP.NET MVC, either by using an ActionFilter or by using a DI package to inject the session into the controller. What I wanted to know is: would it be a bad idea/pattern to just make an extension method like this:
public static class ControllerExtensions
{
    public static ISession GetNHibernateSession(this Controller controller)
    {
        return SessionFactory.OpenSession();
    }
}
so that the session can be instantiated when required, like this:
public ActionResult DoSomething()
{
    using (var session = this.GetNHibernateSession())
    {
        // Do something with the session
    }

    return View();
}
Reasons why this may be a good/bad idea would be greatly appreciated.
Good:
It's simple
It just works
Bad:
You are doing session management, even if it's just three lines, all over your code
With an extension method, you can't replace the behavior for testing
In short, for small, RAD and proof-of-concept projects, your idea will work just fine. For more complex development, it's probably better to extract session management from the controllers, at least moving it to a base class.
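A minimal sketch of the base-class approach (it assumes the same static SessionFactory from your snippet): the session is opened lazily on first use and disposed together with the controller.

using System.Web.Mvc;
using NHibernate;

public abstract class NHibernateController : Controller
{
    private ISession _session;

    // Opened on first access, reused for the rest of the request
    protected ISession Session
    {
        get { return _session ?? (_session = SessionFactory.OpenSession()); }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && _session != null)
        {
            _session.Dispose();
        }
        base.Dispose(disposing);
    }
}

Controllers then use Session directly, and tests can substitute the factory or the base class without touching every action.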
I am starting with MVC 3 and am planning on separating the model and the controllers into their own projects. I'll follow the suggestions made in this post:
asp.net mvc put controllers into a separate project
The purpose of separating them into different projects is that there is a chance I may have to add a web service project to the solution, and I'd like it to reuse the same functionality exposed by the controller project. So the solution would consist of two view projects, WebServices and WebSite, plus the controller project and the model project.
I’d like to know if this is possible and if it’s a common scenario with MVC.
Update 1:
I agree with your suggestions and think it's best to keep the views and the controllers together.
Would it be possible to have a hybrid of MVC and MVP? I have a feeling I am really overdoing things here, so please let me know what you think.
So I would have:
1 – Web project with controllers.
2 – WebServices project
3 – Presenters/Interfaces.
4 – Model.
The controllers would then become the views in an MVP model. Also, each web service would become a view in an MVP model.
For instance, we could have the following interface, presenter and controller:
public interface ICustomers {
    string[] Customers { set; }
}

public class CustomerPresenter {
    ICustomers view = null;

    public CustomerPresenter(ICustomers view) {
        this.view = view;
    }

    public void GetCustomers() {
        view.Customers = new string[] { "Customer1", "Customer2" };
    }
}

public class CustomerController : ICustomers {
    CustomerPresenter presenter = null;
    private string[] customers = null;

    public CustomerController() {
        presenter = new CustomerPresenter(this);
    }

    public string[] Customers {
        set { customers = value; }
    }

    public void GetCustomers() {
        presenter.GetCustomers();
        // Return view.
    }
}
The web service would also be a view in the MVP model:
public class CustomerWebService : ICustomers {
    CustomerPresenter presenter = null;
    private string[] customers = null;

    public CustomerWebService() {
        presenter = new CustomerPresenter(this);
    }

    public string[] Customers {
        set { customers = value; }
    }

    [WebMethod]
    public void GetCustomers() {
        presenter.GetCustomers();
        // Return response.
    }
}
My projects are built specifically for the reason you stated: you want to implement a web service. I would not recommend separating out the controllers, because they are an actual part of the web project. What you actually want is around 3-4 different projects:
Repository/data layer (may contain your domain-level models)
Domain layer (optional)
Service layer (this is where you can point your web service very easily; all your re-usable logic should live here, not in the controllers)
Web layer (contains view models, views and controllers)
I listed them in levels. Basically, the repository, domain and service layers are completely decoupled, meaning you can use these libraries without a server or ASP.NET. A WPF application could call the service layer directly, because the web layer is just for presentation purposes.
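As a rough sketch of what the service layer looks like (the Customer and ICustomerRepository types below are only placeholders), both the MVC controllers and the web service call the same class instead of duplicating logic:

using System.Collections.Generic;

// Placeholder types for illustration
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    IEnumerable<Customer> GetAll();
}

// The re-usable logic lives here, not in the controllers or the web service
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<Customer> GetCustomers()
    {
        return _repository.GetAll();
    }
}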
I am not sure it's common to separate the views and the controllers into their own projects. It might be, but I haven't seen it personally.
The way I would split it initially is:
One project for views and controllers
One project for models
If and when you have the need to support different views, you could update your controller to return something different depending on the type of request. It's common to have controllers that return different things for different clients (for example HTML vs. JSON).
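For example, a single action could serve both HTML and JSON clients; this is only a sketch with placeholder data:

using System.Linq;
using System.Web.Mvc;

public class CustomersController : Controller
{
    public ActionResult Index()
    {
        var customers = new[] { "Customer1", "Customer2" }; // placeholder data

        // Same controller logic, different representation depending on the client
        if (Request.AcceptTypes != null && Request.AcceptTypes.Contains("application/json"))
        {
            return Json(customers, JsonRequestBehavior.AllowGet);
        }

        return View(customers);
    }
}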
I am working on creating a Web site like www.hipmunk.com in ASP.NET web forms.
I need to pull data from multiple APIs, compare the rates and show the different rate options to the users.
What is the best way to achieve this?
I have been browsing around and see that "Windows Workflow Foundation" may be an option.
Anyone got suggestions for me? I am just looking for architectural suggestions.
Thanks
EDIT: Multiple APIs - each OTA has a different type of API, but so far I have seen that all of them support XML/JSON formats.
I'm assuming that you are trying to capture the same data from each of the services, but they may each have a different calling pattern, and they will definitely represent their results differently. One way to handle the differences in each of the services is to encapsulate them using the Strategy Pattern. You would have a standard results object, but based on the list of requested services, you would use a different strategy to fill each one.
public interface IResultsStrategy {
    RateResults GetRateResults(SiteInfo site);
}

public class SpecificSiteStrategy : IResultsStrategy {
    public RateResults GetRateResults(SiteInfo site) {
        // access the service through WCF, or whatever makes the most sense
        // create a new RateResults object, and fill it with the appropriate data
        var results = new RateResults();
        // ...
        return results;
    }
}

public class AnotherSiteStrategy : IResultsStrategy {
    public RateResults GetRateResults(SiteInfo site) {
        // access the service through a web request, or whatever makes the most sense
        // create a new RateResults object, and fill it with the appropriate data
        var results = new RateResults();
        // ...
        return results;
    }
}
public class RateFetcher {
    public IEnumerable<RateResults> GetRates() {
        var rateResults = new List<RateResults>();
        foreach (SiteInfo site in SitesToFetch) {
            IResultsStrategy strategy = GetStrategy(site);
            rateResults.Add(strategy.GetRateResults(site));
        }
        return rateResults;
    }
}
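GetStrategy and SitesToFetch are left open above; inside RateFetcher, GetStrategy could simply switch on a discriminator carried by SiteInfo (the SiteType enum and property here are assumptions for illustration):

// Hypothetical discriminator used to pick a strategy
public enum SiteType { SpecificSite, AnotherSite }

private IResultsStrategy GetStrategy(SiteInfo site) {
    switch (site.SiteType) {
        case SiteType.SpecificSite:
            return new SpecificSiteStrategy();
        case SiteType.AnotherSite:
            return new AnotherSiteStrategy();
        default:
            throw new NotSupportedException("No strategy registered for " + site.SiteType);
    }
}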