I usually write use cases for all the software that I develop. For each use case I generally write a controller which directs the flow (implements a use case).
I have recently started developing web apps using ASP.NET MVC. One of its best practices is to keep as little logic as possible in the controllers, and I can't figure out how to change my design to reflect this.
I basically want a way to encapsulate my use cases.
I think having a fat model and skinny controller is generally a good practice in any language and not specifically .NET MVC. Check out this nice article that goes through a sample scenario showing the advantages of a fat model in Ruby on Rails (but the ideas apply to any language).
For representing the use-cases in your code, I think a much better place for them is in test-cases rather than the controller.
Push as much business logic to your models and helper classes as possible, and use controllers mainly for handling URL calls and instantiating the relevant models, retrieving data from them, and pushing data to the views. Views and controllers should have as few decisions to make as possible.
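To make that concrete, here is a minimal sketch (the Order and _orders names are invented for illustration): the rule lives on the model, and the controller action just delegates to it.
// Fat model: the business rule sits with the data it operates on.
public class Order
{
    public DateTime? ShippedOn { get; set; }

    public bool CanBeCancelled
    {
        get { return ShippedOn == null; }
    }
}

// Skinny controller action: no business decisions, just flow.
public ActionResult Cancel(int id)
{
    Order order = _orders.GetById(id);   // _orders: whatever data access helper you use
    if (order.CanBeCancelled)
        _orders.Cancel(order);

    return RedirectToAction("Details", new { id });
}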
Create a business component to encapsulate use cases. For instance, if you have a leave management system you would have use cases like applying for leave, approving a leave request, rejecting a leave request, etc. For this you can create a business component (class) called LeaveManager with methods (functions/operations) like "Apply", "Approve", "Reject", etc. These methods encapsulate your use cases; they take your business entities and data store classes as input and execute the use case.
public class LeaveManager
{
    // Each method encapsulates one use case.
    public int Apply(DateTime from, DateTime to) { /* ... */ }
    public bool Approve(int leaveApplicationId, int approverId) { /* ... */ }
    public bool Reject(int leaveApplicationId, int approverId) { /* ... */ }
}
You can then use this business component in your controllers to execute the use case by supplying the required parameters.
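For example, a controller action might look roughly like this (the controller and action names are just illustrative):
public class LeaveController : Controller
{
    private readonly LeaveManager _leaveManager = new LeaveManager();

    [HttpPost]
    public ActionResult Apply(DateTime from, DateTime to)
    {
        // The controller only directs the flow; the use case lives in LeaveManager.
        int applicationId = _leaveManager.Apply(from, to);
        return RedirectToAction("Details", new { id = applicationId });
    }
}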
I have decided that my controllers are getting a little too cluttered, so I have adopted a pipeline-style system like the one I used in a Web API project. The pipeline consists of actions that get more and more general, i.e.: ViewAccountDetailsAction > AccountAction > AuthenticatedAction > EmptyAction. The actions add to the pipeline in order of inheritance and expose members or abstract methods for different scenarios.
My problem lies in how to return views from pipeline elements. In the Web API case it was as easy as returning an IHttpActionResult, which didn't have to perform any view rendering; MVC, however, renders its responses differently, with the additional step of Razor.
Because controllers expose internal protected helper methods like View() or RedirectToAction(), I can't use these outside the controllers themselves.
Is there an elegant way to render these out? I have seen a few ways to do this, each being either cumbersome, or giving me uncomfortable feelings.
My most favoured way at the moment is to make a base class that hides the protected methods behind internal ones which call through to the base implementations. The controller instance would then be provided to the instantiated action. Is there anything overly wrong with this? I can't think of any abusable cases, but I wanted to see if there was any community consensus on the matter.
I would recommend taking your approach a step further.
This took a little bit of research, but I based it on an approach I did for a client with Web API 2. Basically the idea was that we created a custom ControllerSelector, ActionSelector and custom ActionDescriptors, plus a controller base class that exposed a strongly typed business layer. Then, through reflection/custom attributes, we marshalled the call to the business layer, handling transformations to an HttpResponseMessage generically, including errors.
Controller: http://pastebin.com/iK8ieBKD
ControllerSelector: http://pastebin.com/qvEbggrP
ActionSelector: http://pastebin.com/CEFNeKZZ
The first thing you'll need to do is look at:
http://www.dotnet-tricks.com/Tutorial/mvc/LYHK270114-Detailed-ASP.NET-MVC-Pipeline.html
Unfortunately ASP.NET MVC5's pipeline is much less flexible than Web API 2's. However you can do three things:
Custom Controller Factory: https://msdn.microsoft.com/en-us/library/system.web.mvc.icontrollerfactory(v=vs.118).aspx
Custom ControllerDescriptor
Custom ActionInvoker that interprets the ControllerDescriptor: https://msdn.microsoft.com/en-us/library/system.web.mvc.iactioninvoker(v=vs.118).aspx
This way you leave your controller to do what controllers do best, and create a contract for your controller to generically interpret using your new pipeline. This is really the right way to do it.
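To give a rough idea of the first of those three options, a custom controller factory can be quite small; this is only a sketch, and how you actually resolve the controller is up to your pipeline:
using System;
using System.Web.Mvc;
using System.Web.Routing;

public class PipelineControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(
        RequestContext requestContext, Type controllerType)
    {
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);

        // Resolve the controller through your own mechanism (IoC container,
        // pipeline bootstrapper, etc.), falling back to the default behaviour.
        var controller = (IController)DependencyResolver.Current.GetService(controllerType);
        return controller ?? base.GetControllerInstance(requestContext, controllerType);
    }
}

// Registered once in Application_Start:
// ControllerBuilder.Current.SetControllerFactory(new PipelineControllerFactory());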
I don't think hijacking/bastardizing the controller as you suggested is a great plan; this is a much more robust solution, although it would take significant effort. Best of luck!
In my n-tier .NET application I have the following layers:
Data Access Layer (EF)
Business Layer (Validation & Business Logic)
Presentation Layers (1 MVC Controller and many API Controllers)
I have found that my Business Services only validate business objects, call CRUD DAO methods and return results to the API controllers.
So I wonder: should the Web API controllers perhaps be used as the Business Services themselves?
Interesting, just answered a similar question...
So I wouldn't do it if I were you.
Here are just a few disadvantages of the approach, off the top of my head:
Performance - a redundant HTTP round trip in the Web MVC project.
Separation of concerns - most of the time the functionality provided by the API differs greatly from the UI for the same project/application. You might want to limit the API to a few methods with a strict contract. If you want Web API to be a layer between Web MVC and your DAL, you will have to expose all the functionality you need for the UI as well. You might also want different authorization and authentication mechanisms. Very often API exception handling is different too, as is input validation.
Maintenance - every time you need to make a change required only for the UI, you have to make sure it doesn't break your API clients. API versioning is also a very important topic, and mixing it with most UI changes makes that process even more difficult.
Your application is probably not that complex for now, but from a design perspective your current solution is much more flexible than it would be if you decided to put Web API between your UI and DAL layers.
N-tier and multi-layer applications are not popular among the new wave of developers. Keep in mind that just because something is not popular among a group does not mean it has no merit.
Pros of MVC:
Separation of Concerns
Unit Testing
Does a multi-layer MVC application using a Web.API have merit:
I know this will be met with some discontent and disagreement. However, my concern is that single purpose application developers are not giving consideration to enterprise development. Enterprise development, load balancing, manageable code maintenance, and true Separation of Concerns are only possible with multi-layer applications that can easily lend themselves to N-tier.
Many developers are operating in environments that demand that they design and implement data structures in SQL, create and maintain models and CRUD functionality, develop controllers and design good looking and friendly views. The MVC model utilizing Entity Framework makes this a manageable task for these small to moderate platform developers.
In the Enterprise, separating the Business and Data Access layers from the User Interface makes really good sense. Right now MVC is a popular and very functional platform for efficient and usable User Interface development. What will be the UI platform in ten years? Separating the UI from the other layers gives more life to the work spent developing the business logic, not to mention that it allows for accessing data on multiple platforms today.
Multi-layer MVC using Web.API has these advantages:
True Separation of Concerns
Supports Unit Testing
Makes the Business logic and Data Access logic more scalable and reusable than MVC alone.
Supports data processes without page refresh. (REST)
CONS:
Does not easily support use of Entity Framework. (Which is not ready for the enterprise yet anyway)
You could have your code as part of your model, and that would even be considered good separation of concerns, since MVC is built for that.
But my preferred approach is to keep logic in a Business Layer of its own. The reason for that is, I think, better separation of concerns. I like using IoC, so there might be different configurations that I route through different running solutions, or I might have another UI/API/project that uses the same logic layer.
As for overhead, it adds a little, but I think it's worth the trouble compared to the actual overhead of the code it would otherwise create.
I agree with others here: look into using strongly typed views, where the controller only creates an instance of the view model and sends it on to the view. The view model is then the intermediary to the data services layer. I personally create all my entities using EF in a different project, just to separate function. The view model can either call EF directly, or you can add another layer of pre-canned EF queries which the view model uses just to populate the view's collections. That would look like this:
[View]-[Controller]-[Viewmodel]-[Optional repository and interface to EF]---[EF]
In the interface to EF you would catch all DB errors when trying to get information and post back to the view according to your design.
The beauty of strongly typed views is that they post back and forth to and from the view and can contain methods which you can call at will. Because they are a pseudo-model for the view, they can also have properties specific to the view which may be used at the business layer. I pass view models around quite a bit where warranted. (After all, they are just a pointer...)
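As a rough sketch of that shape (all the type names here are invented), a view model can fill its own collections from a repository or pre-canned EF queries:
// The view model sits between the controller and the data layer.
public class OrderListViewModel
{
    private readonly IOrderQueries _queries;   // pre-canned EF queries or a repository

    public OrderListViewModel(IOrderQueries queries)
    {
        _queries = queries;
    }

    // View-specific state the business layer never needs to see.
    public string FilterText { get; set; }
    public IList<OrderSummary> Orders { get; private set; }

    public void Load()
    {
        Orders = _queries.GetOpenOrders(FilterText);
    }
}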
You can implement business logic/validation/calculations in the actual model/entity classes rather than in the ApiControllers; otherwise you will end up with too much code in your controllers, which is not really a good thing.
You can also use DataAnnotations attributes to perform your validation, which keeps it outside of your controller. For example: http://msdn.microsoft.com/en-us/library/ee256141(v=vs.100).aspx
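For instance, a model validated entirely through DataAnnotations might look like this (the property names are just an example); ModelState.IsValid in the controller then picks these rules up automatically:
using System;
using System.ComponentModel.DataAnnotations;

public class LeaveRequestModel
{
    [DataType(DataType.Date)]
    public DateTime From { get; set; }

    [DataType(DataType.Date)]
    public DateTime To { get; set; }

    [Required]
    [StringLength(500)]
    public string Reason { get; set; }
}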
I am pretty new to MVC 4; up to now I have worked mostly with Web Forms in C#. I understand the MVC pattern, the routing, calling actions and so on.
But what about the actions which are responsible for fetching data from the database, for example by firing stored procedures? I have seen some tutorials where they put the logic for connecting to the database directly in the actions.
However I am thinking of a more centralized way to do it. For example, I can put all the functions which are firing stored procedures in a separate class named DatabaseCoordinator.cs in a folder named Helpers for example. Then I can call them from the actions in the controllers.
In that way I will know that I can find all of my methods for the database in one class, which is a very clean solution, I think (or at least in web forms). However I want to follow the pattern of MVC, and use only models, views and controllers as the name of the pattern itself implies.
So what is the best practice for that? Should I make a separate class for this, or implement the logic directly in the controllers, or perhaps somewhere else?
You should certainly make a separate repository class to contain all of your data access operations.
There is a good worked example here:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
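As a rough outline (the context and entity names are placeholders; adapt them to your own model), a repository simply wraps the data access operations:
using System;
using System.Collections.Generic;
using System.Linq;

// SchoolContext/Student stand in for your own EF context and entity.
public class StudentRepository : IDisposable
{
    private readonly SchoolContext _context = new SchoolContext();

    public IEnumerable<Student> GetStudents()
    {
        return _context.Students.ToList();
    }

    public Student GetById(int id)
    {
        return _context.Students.Find(id);
    }

    public void Add(Student student)
    {
        _context.Students.Add(student);
    }

    public void Save()
    {
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}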
I recommend that you put your data access code somewhere other than in your controller. The controller's primary purpose is to gather together the information for display on a page or the reverse - to take the data from the page that is posted back and feed it to the code responsible for business rules and data access.
For most MVC projects (heck, for most projects really!) I build separate class library projects - at minimum one for business rules and data access, though typically I'll make those two separate projects. The purpose of separating the logic is really for simpler future maintenance and reusability. If you keep your various logical parts separate, you can easily swap them out if your logic or database needs to change, or you can easily consume the business rules and data from a new type of user interface; for example, if you decided to implement your project as a Windows forms application in addition to your web system, you could (theoretically) just reuse your business logic and data access logic libraries and only rebuild the user layer. However, if you build your logic into your controller, you really can't reuse that logic without extracting it and converting it to the new application model you're using.
So, simply put, definitely keep 99% of your logic and data access out of your controller. Only put what you must into your controller; put the rest in a separate class or, where appropriate, in separate class libraries.
Good luck!
The Controllers and Views tend to stay within the same project, but it's common to split the data access classes and models into their own separate class library, as this allows other projects to utilise them.
This will allow you, in the future, to maybe add a windows forms/wpf interface or maybe a mobile device interface, leveraging the work you already have in the standalone class library.
Another thing to consider, is looking into how to use ViewModels in your MVC application. It's a common technique when Views require more than one domain object. Using View Models in MVC.
Check out the Unit of Work (UOW) pattern combined with the Repository pattern. It doesn't matter whether you ultimately call a stored procedure or an inline LINQ query to return results; your caller shouldn't know or care how GetPersons is ultimately implemented. The UOW pattern combined with the Repository pattern is a very popular way to expose an Entity Framework database in the ASP.NET community. You will find different ways to do it, some overkill and some that just create dependencies with no actual benefit, but you will find a way that feels right to you with those patterns.
After more experience, I would like to change my answer and state that the Repository pattern, and thus the Unit of Work pattern, are pointless layers of abstraction that stop you from working directly with Entity Framework, which is already your data-layer abstraction.
Other than being able to swap out databases from, say, Microsoft SQL Server to PostgreSQL (when would this ever happen in the real world?) and controlling the structure of complex queries that you don't want repeated in your code, I see no real value in the repository pattern. To include CreatedBy/ModifiedBy values on insert/update you need only override Entity Framework's SaveChanges; to encapsulate queries that include business rules such as where active = 1 and isdeleted = 0, just extend LINQ queries with extension methods.
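For example, the "active = 1 and isdeleted = 0" rule can live in a single extension method instead of a repository layer (the entity and property names are assumptions):
using System.Linq;

public static class EmployeeQueries
{
    // Encapsulates the "active = 1 and isdeleted = 0" rule in one place.
    public static IQueryable<Employee> Active(this IQueryable<Employee> source)
    {
        return source.Where(e => e.Active && !e.IsDeleted);
    }
}

// Usage against the context directly:
// var sales = db.Employees.Active().Where(e => e.Department == "Sales").ToList();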
I am building a hosted business SaaS application using MVC 4/C# 4. I need customer-specific resource files, CSS, views, and business logic that leverage a base code layer as much as possible. How would each of these (resource files, CSS, views, logic) need to be structured to accomplish this?
I realize this is probably a very in depth answer...but I have no idea where to start or what to search for to begin to research this. Any pointers so I can research further?
Here are my initial thoughts on each:
Views
Use a Switch statement based on user to return different views.
CSS
Use switch statement in view to specify which css to load
Resource Files
I'm not using them now but need to implement them, so I'm not sure exactly how they work. From what I've seen you can specify a resource file at the class MetaData level, which is a compile-time thing. I'm not sure how you would change this at the user level. I can see here how to change it based on culture... but not by a user profile attribute (like the company they belong to).
This looks like a start...will review more.
Business Logic
In my services layer I could implement switch statements... but that seems messy. Is there a way to create new classes that override the base classes, but only for certain users? Or to put these in a separate project/DLL and only use that DLL reference for a certain user?
I used to work on the IBM iSeries, and they had the concept of a library path that could be set by user at login. You'd have a custom code path that overrode the base code path libraries. Is there anything similar in MVC?
Data Localization
In my database, I have a table for Orders and another for OrderStatuses, which may be displayed in a drop down for the user to select a status. These statuses may be 'Open' and 'Closed'. But another customer may want that in Spanish...How would you handle this?
Any other considerations I am missing?
Use switch statement
Any time someone is writing object-oriented code and mentions a switch statement to control variable requirements, alarm lights begin to flash.
When you have similar but different requirements, polymorphism is your friend.
Without knowing full details of your requirements it is difficult to provide a specific answer, but consider using the factory pattern / dependency injection to provide objects appropriate to a specific user (or more probably, to the company associated with a specific user).
UI Layer
Generally speaking, you could use a factory to return controller instances, derived from a common base class, that implement the requirements of a specific user/customer and return views appropriate to that user.
I'm not well enough versed in the specifics of wiring routes in ASP.Net MVC to suggest how specifically to set that up, but it feels like the right approach. Perhaps another poster can shed more light.
Business Logic
This is a classic use of polymorphism, when requirements vary significantly. Alternatives to per-customer classes include configuration-driven behavior and rules engines. The best choice depends on your specific situation.
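As a sketch of that polymorphic approach (all the type names are invented), each customer-specific variation is its own class behind a shared interface, and the factory or IoC container picks the right one once instead of switch statements scattered through controllers and services:
public class Order { /* ... */ }

public interface IOrderWorkflow
{
    void Approve(Order order);
}

public class StandardOrderWorkflow : IOrderWorkflow
{
    public void Approve(Order order) { /* default approval rules */ }
}

public class AcmeOrderWorkflow : IOrderWorkflow
{
    public void Approve(Order order) { /* customer-specific approval rules */ }
}

// The factory (or IoC registration) decides once, per user/company, which
// implementation to hand out.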
Data Localization
Things like order status in the DB should not be bound to text like 'Open'. They should be bound to a language-neutral representation (e.g. an INT). Leave it to the View to translate that value into something specific to the user's language.
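A small sketch of that idea (the enum values and resource keys are assumptions): store the status as an int/enum and let the view translate it through resources:
public enum OrderStatus
{
    Open = 0,
    Closed = 1
}

public static class OrderStatusDisplay
{
    // "StatusResources" is an assumed .resx class with entries named "Open" and "Closed".
    public static string ToDisplayText(OrderStatus status)
    {
        return StatusResources.ResourceManager.GetString(status.ToString())
               ?? status.ToString();
    }
}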
In a SaaS application we have developed we have clients who have their own private domains so being able to support something like that was a must. We had to be able to support:
www.mycompany.com/u/clientname
clientname.mycompany.com
www.clientname.com
One of the things we considered was how we could use a single deployment/code base to handle all of these clients. What we ended up with was a base system that could be extended through the use of "plugins", which are basically class libraries named "APP.Clients.{ClientName}".
We wrote a custom ViewEngine that allows us to make use of these plugins to load Views, Controllers and even Controller Actions from the client's custom plugin to override the base site.
What we ended up with is similar to what people call "portable areas" or basically external views and controllers in an Assembly.
Clients can share a common "network" database or they can be rolled off into their own database. Almost all of the config comes from reading the current URL, determining which "client" it is, loading their settings and processing their customizations.
Being able to load the client views required adding additional search locations for Master Pages, Views and Partial Views (which is why we have a custom ViewEngine).
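A rough sketch of what the view engine part can look like (the paths and the way the client name is resolved are simplified assumptions):
using System.Linq;
using System.Web.Mvc;

public class ClientViewEngine : RazorViewEngine
{
    public ClientViewEngine(string clientName)
    {
        // Probe the client's plugin folder before the standard locations.
        var clientFormats = new[]
        {
            "~/Plugins/" + clientName + "/Views/{1}/{0}.cshtml",
            "~/Plugins/" + clientName + "/Views/Shared/{0}.cshtml"
        };

        ViewLocationFormats = clientFormats.Concat(ViewLocationFormats).ToArray();
        PartialViewLocationFormats = clientFormats.Concat(PartialViewLocationFormats).ToArray();
        MasterLocationFormats = clientFormats.Concat(MasterLocationFormats).ToArray();
    }
}

// Registered at startup once the current client has been resolved from the URL:
// ViewEngines.Engines.Insert(0, new ClientViewEngine(clientName));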
There is no simple answer and what works for one SaaS project may not work exactly the same for another. Your architecture will likely be similar but your business needs will dictate where your project takes you!
I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far I'm thinking that putting all data access logic into controllers is the way to go... but it doesn't quite sound right.
Another option is creating an external repository, handling data interactions.
Here's my pros/cons:
If I embed data access in the controllers, I will end up with code like this:
using (var db = new AppDbContext())   // AppDbContext: your EF DbContext subclass
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this, I get the full benefits of lazy loading, I close the connection right after I'm done, and I'm flexible on the where clause - all the niceties.
The con - I'm mixing a bunch of stuff in a single method - data checking, data access, UI interactions.
With a repository, I'm externalizing data access, and in theory I can just replace repos if I decide to use ADO.NET or go with a different database.
But I don't see a good, clean way to get lazy loading, or how to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods; how would I load a list of addresses that belong to a given user? Making methods like GetAddressListByUserId looks ugly and wrong, and will make me create a bunch of methods that are just as ugly and make little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution out there somewhere...
And one more question on the repository pattern: how do you deal with objects that are properties? E.g. a User has a list of addresses; how would you retrieve that list? Create a repository for the Address? With the ORM the address object doesn't have to have a reference back to the user, nor an Id field; with a repo it will have to have all that. More code, more exposed properties...
The approach you choose depends a lot on the type of project you are going to be working with. For small projects where a Rapid Application Development (RAD) approach is required, it might almost be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the more messy it will become and you will start running into more and more problems. In case you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and Views clean. Controllers should only control the application flow and not contain data access or even business logic. Views should only be used for presentation - give it a ViewModel and it will present it as Html (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
if (ModelState.IsValid)
{
Company company = SomeCompanyViewModelHelper.
MapCompanyViewModelToDomainObject(model);
companyService.UpdateCompany(company);
return RedirectToRoute(/* Wherever you go after company is updated */);
}
// Return the same view with highlighted errors
return View(model);
}
Due to the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM or whatever, etc.). The Repository pattern is a good choice, but here you also get a few implementation options. There's always been a lot of discussion about generic/non-generic repositories, whether or not one should return IQueryables, etc. But eventually it's for you to choose.
Btw, why do you want lazy loading? As a rule, you know exactly what data you require for a specific view, so why would you choose to fetch it in a deferred way, thus making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without children. E.g.
public class CompanyRepository
{
    public Company Get(int id) { /* ... */ }
    public Company Get(string name) { /* ... */ }
    public Company GetWithEmployees(int id) { /* ... */ }
    // ...
}
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.
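To make the eager-loading point concrete, GetWithEmployees can simply wrap an Include call; a minimal sketch, with AppDbContext standing in for your EF context:
// Requires "using System.Data.Entity;" for the lambda overload of Include().
public Company GetWithEmployees(int id)
{
    using (var db = new AppDbContext())
    {
        // One query, no lazy-loading round trips later.
        return db.Companies
                 .Include(c => c.Employees)
                 .Single(c => c.Id == id);
    }
}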
Personally I do it this way:
I have an abstract Domain layer, which has not just CRUD methods but specialized ones, for example UsersManager.Authenticate(), etc. Internally it uses data access logic, or a data-access layer abstraction (depending on the level of abstraction I need to have).
It is always better to have at least an abstract dependency. Here are some of its pros:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, let it have two constructors: one that takes an abstract domain access class (e.g. a facade of the domain), and another (empty) constructor which chooses the default implementation. This way your controller works well both at web application run-time (the empty constructor is called) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain implementation at a later time, be sure to inject the domain creator instead of the domain itself. By localizing the domain layer construction in the creator, you can switch to another implementation at any time just by reworking the domain creator (by creator I mean some kind of factory).
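A bare-bones sketch of that constructor setup (IDomainFacade and DomainFactory are placeholder names):
public class UsersController : Controller
{
    private readonly IDomainFacade _domain;

    // Used by unit tests (or an IoC container), injecting a mock or alternative domain.
    public UsersController(IDomainFacade domain)
    {
        _domain = domain;
    }

    // Empty constructor used by the MVC runtime; it picks the default implementation
    // through the domain creator/factory rather than newing up the domain directly.
    public UsersController() : this(DomainFactory.CreateDefault())
    {
    }
}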
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare when you reach the unit-testing phase, and even more so when you need to change to a new implementation later.
It really comes down to where you want your code. Whether you put data access for an object behind an IRepository or in the controller doesn't matter: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue for getting it out of the controller. By that I mean moving it to another layer, probably using an IRepository type of pattern where you have a series of GetByXXX calls. Sure they are ugly. Wrong? I would argue otherwise. At least they are all contained together within the same logical layer, rather than being scattered throughout the controllers where they are mixed in with validation code, etc.
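In other words, something along these lines (just a sketch, with illustrative names), so all the data access lives together in one place:
using System.Collections.Generic;

// All the "ugly" GetByXXX methods end up together in one layer.
public interface IUserRepository
{
    User GetByName(string name);
    IList<Address> GetAddressesByUserId(int userId);
}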