I am making a little API in ASP.NET. It is coupled with an MVC web app. I intend for the web app to use its own API instead of creating two backends that do the same thing.
I am struggling a little to keep duplicate code to a minimum when it comes to creating a model to use in the API for both "incoming" (POST, PUT) and "outgoing" (GET) actions.
I have a class called Event; it contains properties that are easy to serialize to JSON, plus a few complex types. I created a model called EventViewModel (is it still appropriate to call it a view model in an API?) with some extra properties to pull the Names out of the complex types.
Ideally, I'd like to reuse this model for everything, but when it comes to defining [Required] attributes, my logic breaks down...
I first thought of using [Bind(Include = ..., Exclude = ...)] on each of the API actions. Does this sound like a viable solution?
What other solutions have people used?
Thanks!
Here are some thoughts.
I would use separate controllers for views (Controller) and for the API (ApiController), since we are talking about two different presentations of the data. I would not use one controller as both, since they return different types, and an API controller uses status codes differently. E.g. ApiController.Delete returns status code 204 No Content, while Controller returns status code 200 OK with a view.
You can call the models for the Controller ViewModels and those for the ApiController DTOs. In either case, they are just simple objects. Don't put any logic in those objects except for validation (which you can check with ModelState) and don't use entity objects.
Do not use models/DTOs for multiple purposes, for the reason you've already encountered. Just use simple objects and use each of them only once; keep it simple. That way, if you want to change something, you know it is not going to break your application.
You can save code by using base classes that you can inherit from. This also gives you the advantage of writing extensions once for different objects.
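A rough sketch of what that can look like, reusing the Event example from the question (the property names here are just illustrative):

using System;
using System.ComponentModel.DataAnnotations;

// Shared shape for both directions.
public abstract class EventDtoBase
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
}

// Incoming (POST/PUT): validation attributes live here.
public class EventWriteDto : EventDtoBase
{
    [Required] // nullable so a missing value fails validation
    public int? VenueId { get; set; }
}

// Outgoing (GET): read-only extras such as names flattened out of the complex types.
public class EventReadDto : EventDtoBase
{
    public int Id { get; set; }
    public string VenueName { get; set; }
}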
But if you insist on reusing objects, you can decide not to check ModelState in the API; I think the [Required] attribute is ignored in that case, but I am not sure. As for the objects, I would not use [JsonIgnore] / [XmlIgnore] but set default values instead, so that these properties are omitted on serialization. That makes it easy to use the same objects for both POST and GET.
// Note: with Json.NET, default-valued properties are only omitted when
// DefaultValueHandling is set to Ignore (globally or via [JsonProperty]).
[DefaultValue(0)]
public int Id { get; set; } = 0;
Use repositories. You can call the repositories from both the controller and the API controller. This is where you can really save code.
For APIs (beyond plain CRUD) I prefer to write specialized LINQ queries that return anonymous objects and select those into the DTO. This has multiple advantages. Sometimes I use extension methods (on IQueryable) to save code.
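A simplified sketch of that idea, projecting straight into a DTO (it assumes the Event entity from the question has an Id, a Title and an Organizer navigation property with a Name; adjust to your real types):

using System.Linq;

public class EventDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string OrganizerName { get; set; }
}

public static class EventQueryExtensions
{
    // The query stays composable (IQueryable) and EF translates the Select into SQL,
    // so only the columns the DTO needs are fetched.
    public static IQueryable<EventDto> SelectEventDtos(this IQueryable<Event> events)
    {
        return events.Select(e => new EventDto
        {
            Id = e.Id,
            Title = e.Title,
            OrganizerName = e.Organizer.Name // flatten the complex type
        });
    }
}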
You may want to consider using something like AutoMapper to save code, but I wouldn't recommend this.
To speed up development in a small team (and also to enforce a consistent REST-ish API), I created a generic base API controller and a related generic business-service base class with the typical CRUD methods for domain root entities (a rough sketch follows the list):
create
get by key
update
delete
get a page of records (with a simple query expression and pagination that gets automatically parsed into a LINQ expression)
get a simple list with all records (id-name pair projections for many cases when that's all you need, e.g. for a dropdown list).
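Very roughly, the base controller looked something like this (a sketch only; ICrudService and the type parameters are illustrative, not an actual framework API):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface ICrudService<TDto, TKey> where TDto : class
{
    Task<TDto> GetAsync(TKey id);
    Task<TDto> CreateAsync(TDto dto);
    Task UpdateAsync(TKey id, TDto dto);
    Task DeleteAsync(TKey id);
}

[ApiController]
[Route("api/[controller]")]
public abstract class CrudControllerBase<TDto, TKey> : ControllerBase where TDto : class
{
    protected readonly ICrudService<TDto, TKey> Service;

    protected CrudControllerBase(ICrudService<TDto, TKey> service) => Service = service;

    [HttpPost]
    public virtual async Task<ActionResult<TDto>> Create(TDto dto) => Ok(await Service.CreateAsync(dto));

    [HttpGet("{id}")]
    public virtual async Task<ActionResult<TDto>> GetByKey(TKey id)
    {
        var dto = await Service.GetAsync(id);
        if (dto == null) return NotFound();
        return dto;
    }

    [HttpPut("{id}")]
    public virtual async Task<IActionResult> Update(TKey id, TDto dto)
    {
        await Service.UpdateAsync(id, dto);
        return NoContent();
    }

    [HttpDelete("{id}")]
    public virtual async Task<IActionResult> Delete(TKey id)
    {
        await Service.DeleteAsync(id);
        return NoContent();
    }
}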
It worked pretty well for most cases. We also had a similar architecture on the Angular side with some code-generation scripts, and we could spit out hundreds of forms for a small ERP-like system in a short time.
However, we had quite a few entities that did not need all of those CRUD methods. Our Swagger definitions were polluted with methods that should not actually be available, and we had to override them, add ignore attributes, and throw NotImplementedException as needed, as sketched below.
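Roughly, the workaround looks like this (continuing the sketch above; CurrencyDto is just a placeholder):

public class CurrenciesController : CrudControllerBase<CurrencyDto, int>
{
    public CurrenciesController(ICrudService<CurrencyDto, int> service) : base(service) { }

    // Hide the inherited action from Swagger and make accidental calls fail loudly.
    [ApiExplorerSettings(IgnoreApi = true)]
    public override Task<IActionResult> Delete(int id) => throw new System.NotImplementedException();
}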
Is there any clean solution to this CRUD inheritance mess? Is it possible to somehow specify to ASP.NET "hey, for this controller I want to generate routes only for these specific CRUD base methods and ignore all the other ones"? Of course, I would still want to be able to add non-CRUD custom routes and methods to controllers, as usual; so I don't need fully dynamic runtime-generated controllers but only their methods.
In an ideal world, .NET would have traits like PHP, so that I could add e.g. a "GetByIdTrait", etc...
I have created a few Razor Pages and have been putting a lot of code inside the POST and GET handler methods, e.g.:
public async Task<IActionResult> OnPostSaveSetStatusAsync(int? id)
{
    // load the order, set its status, save changes, ...
}
Think of it as opening a details page containing a button that should set a status on a specific order.
I now need to set the same status, i.e. execute the same code, but from another view (another Razor Page) where the order is selected.
If I put all my code in helper classes, there are a lot of method parameters that need to be passed in, e.g. SQL contexts, the cache, HttpContext, etc. Is this a good approach anyway?
...or should I just create methods in the original Razor Page and call them from all the other places (like helper methods inside the class)?
Thanks
Plain and simple: if there's any code that needs to be shared in multiple places, it should go into another class that can be used as a dependency where it's needed. The listed dependencies don't seem like too much, though you should strongly consider whether you need a dependency on HttpContext. Most of the time, you should really just be passing in some value to a method. For example, if you need to work with a user id, pass the user id into a method on your helper class, rather than making the helper class take a dependency on HttpContext and fetch the id itself.
If you still have too many dependencies, then you're likely breaking the single responsibility principle. You may need multiple helper classes, or a different strategy entirely. It's hard to say anything definitive without knowing exactly what you're doing.
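As a minimal sketch of that idea (all names are illustrative; it assumes an EF Core context with an Orders set), the shared logic lives in its own class and receives plain values such as the order id and user id instead of an HttpContext:

using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
    public string ChangedBy { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public class OrderStatusService
{
    private readonly AppDbContext _db;

    public OrderStatusService(AppDbContext db) => _db = db;

    public async Task SetStatusAsync(int orderId, string status, string changedByUserId)
    {
        var order = await _db.Orders.FindAsync(orderId)
                    ?? throw new InvalidOperationException($"Order {orderId} not found.");

        order.Status = status;
        order.ChangedBy = changedByUserId; // the user id is passed in, not pulled from HttpContext
        await _db.SaveChangesAsync();
    }
}

Both Razor Pages can then take OrderStatusService as a constructor dependency and call SetStatusAsync from their handlers.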
Summary
This question asks for a methodology. The answer should be a link to the holy grail of working with contexts for the described scenario.
We have been experiencing different problems in our MVC web application project, related to the use of dbContext.
After reading many Q&A threads, blog posts and articles, including proposals involving repositories and injection patterns, OWIN, Entity Framework and Ninject, we are still not clear about the right way to work with dbContexts.
Is there any article or demo showing "The Way" to do it in an application more complex than plain CRUD operations, using separation between MVC presentation / Domain Entities / Logic / Data Access layers, and including Identity security handling users and role permissions?
Description
Previously, our approach was to create dbContext objects when needed in each repository.
Soon we discovered errors like "dbContext is disposed", since the connection dies together with the repository function. This makes the retrieved objects only "partially available" to the upper layers of the app (using the .ToList() trick, limited because we can access collections and attributes but cannot navigate further into the object's child tables, and so on). Also, using two contexts from different repositories, we got an exception telling us that two contexts are trying to register changes to the same object.
Due to time pressure to deliver prototypes, we created a single static dbContext shared by the whole application, which is called from everywhere when needed (controllers, models, logic, data access, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
We still have problems: a dbContext can handle only one async method call at a time, and we can have many such calls (e.g. userManager.FindByNameAsync - there are only async methods). Exception: "A second operation started on this context before a previous asynchronous operation completed".
We were thinking about creating the context as the very first step when an action is called in the controller, then carrying this object like a relay baton through every other layer or function called. That way the connection would live from the "click in the browser" until the response is loaded back into it. But we don't like the idea that every single function must have an extra "context" parameter just to share the connection through the layers for the entire operation route.
We are sure that we are not the first ones wondering about what is the right way to use contexts.
Application layers
We have these (logical) layers, in different workspaces but the same MVC web app project, top to bottom:
Views: HTML + Razor + jQuery + CSS. Code here is restricted to layout, but some HTML might depend on the Role. Method calls go to controllers only, plus utils (like formatting).
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
Controllers: Actions called from the browser result in calls to functions in the Logic layer. Authentication here restricts access to actions or limits what happens inside an action. Controllers avoid using Domain entities and work with ViewModels, so to communicate with the Logic layer the ViewModel translation functions are called.
Domain Entities: Used by the Logic layer, and used to create the database tables via Entity Framework.
Logic Classes: Each Domain entity has an EntityLogic class with all the operations. These are the core, where all the rules live that are common and abstracted away from specific consumer clients (ViewModels are unknown here).
Repositories: For accessing the database. We are not sure we need these, since the Domain entities are already mapped to database objects by Entity Framework.
Typical scenario
The browser calls an action (POST) on the Products controller to edit a product. A ProductViewModel is used as the data container.
The controller action is restricted to a set of roles. Inside the action, depending on the role, a different Logic function is called, and the ProductViewModel is translated to a ProductDomainEntity and passed as a parameter.
The EditProduct logic function calls other functions in different logic classes and also uses localization and security to restrict or filter. The logic may or may not call a Repository to access the data (or use a global context for everything), and the resulting domain entity collections are delivered back to the Logic.
Based on the results, the logic may or may not try to navigate the results' child collections. The results are given back to the controller action as a domain entity (or a collection of them), and depending on those results the controller may call more Logic, redirect to another action, or respond with a View, translating the results into the right ViewModel.
Where, when and how to create the dbContext to support the whole operation in the best way?
UPDATE: All classes within the Logic layer are static. The methods are called from controllers simply like this:
UserLogic.GetCompanyUserRoles(user)
, or
user.GetCompanyRoles()
where GetCompanyRoles() is an extension method for User implemented in UserLogic. Thus there are no instances of the Logic classes, which means no constructors that could receive a dbContext to use inside their methods.
I want a static method inside a static class to know where to get the dbContext instance that is active for the current HttpRequest.
Could Ninject and its OnePerRequestHttpModule help with this? Has anyone tried it?
I don't believe there is a "Holy Grail" or magic-bullet answer to this or any other problem with EF / DbContexts. Because of that, I also don't believe there is one definitive answer to your question, and any answers will be primarily opinion-based. However, I have personally found that using a CQRS pattern rather than a repository pattern allows for more control and fewer problems when dealing with EF semantics and quirks. Here are a few links that you may (or may not) find helpful:
https://stackoverflow.com/a/21352268/304832
https://stackoverflow.com/a/21584605/304832
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92
http://github.com/danludwig/tripod
Some more direct answers:
...This makes the retrieved objects only "partially available" to the upper layers of the app (using the .ToList() trick, limited because we can access collections and attributes but cannot navigate further into the object's child tables, and so on). Also, using two contexts from different repositories, we got an exception telling us that two contexts are trying to register changes to the same object.
The solutions to these problems are to 1) eager load all of the child and navigation properties that you will need when initially executing the query instead of lazy loading, and 2) only work with 1 DbContext instance per HTTP request (inversion of control containers can help with this).
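For the first point, with EF6-style eager loading that looks roughly like this (the entity and property names are made up for the example):

using System.Data.Entity; // lambda overload of Include (EF6)

// Fetch the product together with the children this use case needs, in one query,
// so nothing depends on lazy loading after the context is gone.
var product = db.Products
    .Include(p => p.Category)
    .Include(p => p.Suppliers)
    .Single(p => p.Id == id);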
Due to time pressure to deliver prototypes, we created a single static dbContext shared by the whole application, which is called from everywhere when needed (controllers, models, logic, data access, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
This is actually much worse than a "dirty workaround", because you will start to see very strange, hard-to-debug errors when you have a static DbContext instance. I am very surprised to hear that this works better than your previous approach, but it only shows that there were even more problems with your previous approach if this one works better.
We were thinking about creating the context as the very first step when an action is called in the controller, then carrying this object like a relay baton through every other layer or function called. That way the connection would live from the "click in the browser" until the response is loaded back into it. But we don't like the idea that every single function must have an extra "context" parameter just to share the connection through the layers for the entire operation route.
This is what an Inversion of Control container can do for you, so that you don't have to keep passing around instances. If you register your DbContext instance one per HTTP request, you can use the container (and constructor injection) to get at that instance without having to pass it around in method arguments (or worse).
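With Ninject (which the question mentions), the per-request registration looks roughly like this; it assumes the Ninject.Web.Common package, and MyDbContext is a placeholder for your context type:

using Ninject;
using Ninject.Web.Common; // provides InRequestScope()

// In the Ninject bootstrapping code (e.g. the RegisterServices method generated
// by the Ninject.Web.Common template):
kernel.Bind<MyDbContext>().ToSelf().InRequestScope();

// Consumers then receive that per-request instance via constructor injection:
public class ProductsController : Controller
{
    private readonly MyDbContext db;
    public ProductsController(MyDbContext db) { this.db = db; }
}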
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
A little piece of advice: don't declare functions like this on your ViewModels. ViewModels should be dumb data containers, void of behavior, even translation behavior. Do the translation in your controllers, or in another layer (like a Query layer). ViewModels can have properties that expose derived data based on other data properties, but no behavior.
Logic Classes: Each Domain entity has an EntityLogic class with all the operations. These are the core, where all the rules live that are common and abstracted away from specific consumer clients (ViewModels are unknown here).
This could be the flaw in your current design. Boiling all of your business rules and logic down into entity-specific classes gets messy, especially when dealing with repositories. What about business rules and logic that span several entities or even aggregates? Which entity logic class would they belong to?
A CQRS approach pushes you out of this mode of thinking about rules and logic, and more into a paradigm of thinking about use cases. Each "browser click" probably boils down to some use case that the user wants to invoke or consume. You can find out what the parameters of that use case are (for example, which child / navigation data to eager load) and then write one query handler or command handler to wrap the entire use case. When you find common subroutines that are part of more than one query or command, you can factor those out into extension methods, internal methods, or even other command and query handlers.
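A bare-bones sketch of what one such handler can look like (the names are illustrative and not tied to any particular CQRS library; MyDbContext stands in for the per-request context from the container):

using System.Linq;

public class GetProductForEdit
{
    public int ProductId { get; set; }
}

public class ProductEditModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CategoryName { get; set; }
}

public class GetProductForEditHandler
{
    private readonly MyDbContext db;

    public GetProductForEditHandler(MyDbContext db) { this.db = db; }

    public ProductEditModel Handle(GetProductForEdit query)
    {
        // The handler knows exactly what this use case needs,
        // so it loads and projects it in a single query.
        return db.Products
            .Where(p => p.Id == query.ProductId)
            .Select(p => new ProductEditModel
            {
                Id = p.Id,
                Name = p.Name,
                CategoryName = p.Category.Name
            })
            .Single();
    }
}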
If you are looking for a good place to start, I think that you will get the most bang for your buck by first learning how to properly use a good Inversion of Control container (like Ninject or SimpleInjector) to register your EF DbContext so that only 1 instance gets created for each HTTP request. This should help you avoid your disposal and multi-context exceptions at the very least.
I always use a BaseController that holds a dbContext and passes it to the logic functions (which I call Extensions). That way you only use one context per call, and if something fails it will roll back.
Example:
Controller1 inherits from BaseController
Controller1 now has access to the property db, which is a context
Controller1 contains an action "Action1"
Action1 calls the function LogicFunctionX(db, value1, Membership.CurrentUserId, true)
In Action1 you can call other logic functions, or even call them inside LogicFunctionX, always passing the db property through the functions.
To save the context, I do it inside the controller (mostly), after calling all the logic functions.
Note: the true argument that I pass to LogicFunctionX tells it whether to save the context inside the function or not, like:
if (save)
    db.SaveChanges();
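Putting the pieces together, a rough sketch of the pattern (MyDbContext and the member names are illustrative; the original example used Membership.CurrentUserId for the user id):

using System.Web.Mvc;

public abstract class BaseController : Controller
{
    // One context per controller instance / request, shared by all logic calls.
    protected readonly MyDbContext db = new MyDbContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}

public class Controller1 : BaseController
{
    public ActionResult Action1(int value1)
    {
        // The same db instance flows through every logic function for this request.
        LogicExtensions.LogicFunctionX(db, value1, User.Identity.Name, save: true);
        return RedirectToAction("Index");
    }
}

public static class LogicExtensions
{
    public static void LogicFunctionX(MyDbContext db, int value1, string userId, bool save)
    {
        // ... do the work against db ...
        if (save)
            db.SaveChanges();
    }
}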
I had several problems before I started doing it this way.
Let's say that I have a Domain assembly that describes the domain model, and it has a class called Product:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
}
I also have another assembly, which is the web application using this domain model. Now I want to create a form to create new products and have some validation on the attributes. The easiest way to do this is to use DataAnnotations on the class. However, this means the domain model now contains metadata about form validation, which is not a very clean separation of concerns.
It is possible to use the MetadataType attribute on the class, but I see this as no better: suddenly your domain model class has a dependency on the form validation metadata class.
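Roughly, that option looks like this (the StringLength value is just an example):

using System.ComponentModel.DataAnnotations;

// The validation attributes live on a "buddy" class, but the domain class
// still has to reference it through the attribute.
[MetadataType(typeof(ProductMetadata))]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductMetadata
{
    [Required]
    [StringLength(100)]
    public string Name { get; set; }
}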
Another way is to create a CreateProductForm class, add the required attributes there, and map between the classes. However, this creates some overhead, as you need to maintain these classes separately and changes in one might break the other. That might be desirable in some scenarios, but in others it might just create extra work (imagine that you have an Address class, for example).
UPDATE: some people have suggested that I use AutoMapper for this, which I'm already aware of. AutoMapper just makes the mapping simpler and easier; it does not actually solve the problem of having to maintain two separate classes that will be almost identical. My preference would be to only create the form classes when there is a distinct need for them.
Is there a straightforward way to declare the annotations within the web assembly without creating unnecessary dependencies for the domain assembly?
If you don't want to introduce coupling between your domain model and your views, you should go the CreateProductForm route.
Depending on your project size/requirements, you're going to have to separate your view model from your domain sooner or later. Suppose you're using the DisplayName attribute: are you going to tag your domain entities with it?
Using a tool like AutoMapper greatly simplifies the mapping process.
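For example (a sketch; the form class and its attributes are illustrative, and the classic static AutoMapper API is shown for brevity - newer versions use MapperConfiguration instead):

using System.ComponentModel.DataAnnotations;
using AutoMapper;

// The form class carries the UI validation attributes...
public class CreateProductForm
{
    [Required, StringLength(100)]
    public string Name { get; set; }
}

public static class ProductMapping
{
    // Configure once at application start.
    public static void Configure() => Mapper.CreateMap<CreateProductForm, Product>();

    // ...and AutoMapper removes most of the mapping boilerplate.
    public static Product ToProduct(CreateProductForm form) => Mapper.Map<Product>(form);
}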
Why wouldn't you have DataAnnotations on your domain classes? If something is required, then I think it's perfectly valid to mark it as required in the domain.
Other DataAnnotations such as StringLength, Range, etc. are all, to me, perfectly valid things to decorate your domain entities with.
Implementing IValidatableObject is also a perfectly acceptable thing for a domain object to do, IMHO.
I wouldn't go putting UI concerns on them, though, such as UIHint or annotations describing the formatting of the property. That would be bad.
Normally I avoid displaying domain classes directly on the user interface, and instead use ViewModel classes with a mapping tool such as AutoMapper to map from one to the other. The ViewModel class has the annotations of the domain class, with perhaps additional UI-specific annotations.
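To illustrate the distinction (a sketch; the price property and the extra rule are made up for the example, and UI attributes like UIHint deliberately stay off the entity):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Product : IValidatableObject
{
    public int Id { get; set; }

    [Required]
    [StringLength(200)]
    public string Name { get; set; }

    [Range(0, 1000000)]
    public int PriceInCents { get; set; }

    // Domain-level rule that spans more than one property.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (PriceInCents == 0)
            yield return new ValidationResult("A product must have a price.", new[] { "PriceInCents" });
    }
}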
As mathieu and XHalent state, you should use a CreateProductForm (or a CreateProductFormViewModel) along with AutoMapper, and create attributes that automap the model to the view model for the action.
That way all the form validation goes on your view model and all the data validation (related to the database) goes on your domain model.
In Silverlight and WPF this is called the MVVM pattern, and a lot of people who do ASP.NET MVC recommend it.
In my current project I am also using it with AutoMapper. All my views have an associated view model that is a flattened version of the domain model, specific to that view.
I think this was the example I used (it's the one I still have bookmarked, anyway, but the one linked from it seems better).
Using the attribute means that you return the domain object from your controller action, and the automap attribute maps the domain object to your view model automatically.
Doing this should give you the separation you are looking for.
I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far putting all the data access logic into the controllers looks like the way to go... but it kinda doesn't feel right.
Another option is creating an external repository that handles the data interactions.
Here are my pros/cons:
If I embed data access in the controllers, I end up with code like this:
using (DbContext db = new DbContext())
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this I get the full benefits of lazy loading, I close the connection right after I'm done, and I'm flexible with my Where clauses - all the niceties.
The con: I'm mixing a bunch of concerns in a single method - data checking, data access, UI interactions.
With a repository, I'm externalizing data access, and in theory can just swap out the repos if I decide to use ADO.NET or go with a different database.
But I don't see a good, clean way to realize lazy loading, or how to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods; how would I load a list of addresses that belong to a given user? Adding methods like GetAddressListByUserId looks ugly and wrong, and will force me to create a bunch of methods that are just as ugly and make little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution somewhere...
And one more question on the repository pattern: how do you deal with objects that are properties? E.g. a User has a list of addresses; how would you retrieve that list? Create a repository for the Address? With an ORM the address object doesn't need a reference back to the user, nor an Id field; with a repo it will have to have all of that. More code, more exposed properties...
The approach you choose depends a lot on the type of project you are going to be working on. For small projects where a Rapid Application Development (RAD) approach is required, it might be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the messier it will become and the more problems you will run into. If you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and views clean. Controllers should only control the application flow and should not contain data access or even business logic. Views should only be used for presentation - give a view a ViewModel and it will present it as HTML (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
    if (ModelState.IsValid)
    {
        Company company = SomeCompanyViewModelHelper
            .MapCompanyViewModelToDomainObject(model);
        companyService.UpdateCompany(company);
        return RedirectToRoute(/* Wherever you go after company is updated */);
    }
    // Return the same view with highlighted errors
    return View(model);
}
For the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM, etc.). The Repository pattern is a good choice, but here too you have a few implementation options. There has always been a lot of discussion about generic vs. non-generic repositories, whether or not they should return IQueryables, etc. Eventually it's up to you to choose.
By the way, why do you want lazy loading? As a rule, you know exactly what data you need for a specific view, so why would you fetch it in a deferred way, making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without their children. E.g.:
public interface ICompanyRepository
{
    Company Get(int id);
    Company Get(string name);
    Company GetWithEmployees(int id);
    // ...
}
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.
Personally I do it this way:
I have an abstract Domain layer, which has not just CRUD methods but specialized methods, for example UsersManager.Authenticate(), etc. Internally it uses data access logic, or a data-access layer abstraction (depending on the level of abstraction I need).
It is always better to have at least an abstract dependency. Here are some of its pros:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, let it have two constructors: one that takes an abstract domain access class (e.g. a facade over the domain), and another (empty) constructor that chooses the default implementation. This way your controller works both at web application run time (via the empty constructor) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain implementation later, be sure to inject a domain creator instead of the domain itself. By localizing the domain layer construction in the domain creator, you can switch to another implementation at any time by just changing the creator (by creator I mean some kind of factory).
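A rough sketch of the two-constructor idea with such a creator (all names here are illustrative):

using System.Web.Mvc;

public interface IDomainFacade
{
    // specialized operations, e.g. Authenticate(), live behind this facade
}

public class DefaultDomainFacade : IDomainFacade { }

public static class DomainFactory
{
    // Swap the default implementation here without touching any controller.
    public static IDomainFacade Create() => new DefaultDomainFacade();
}

public class AccountController : Controller
{
    private readonly IDomainFacade domain;

    // Used by the framework at run time.
    public AccountController() : this(DomainFactory.Create()) { }

    // Used by unit tests to inject a mock/fake domain layer.
    public AccountController(IDomainFacade domain)
    {
        this.domain = domain;
    }
}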
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare when you reach the unit-testing phase, or even more so when you need to swap in a new implementation later.
It really comes down to where you want your code. Whether you put the data access for an object behind an IRepository or in the controller doesn't matter much: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue for getting it out of the controller, by which I mean moving it to another layer, probably using an IRepository type of pattern where you have a series of GetByXXX calls. Sure, they are ugly. Wrong? I would argue otherwise. At least they are all contained together within the same logical layer, rather than being scattered throughout the controllers where they get mixed in with validation code, etc.