I have been pondering this problem for a while now and cannot think of an acceptable solution. I have an application that is planned to become very large, so I am trying to make it modular. It is based on MVC4. I have not decided whether to use an ORM or map everything myself. I would like to have the following structure:
----------------------
| Database
----------------------
| Data/Data Access Layer (Class Library) (Objects reside here)
----------------------
| Core MVC Project (User and Session are stored here)
----------------------
| MVC Modules
I want to keep the validation of the UpdatedBy field as close to the database as possible, possibly in the Data/Data Access layer. The problem is I want to store the user in the Session and do the validation in the class library (where there is no Session). I also want to avoid as much as possible passing the user all over the place. Is there a way to store the user in the Session and have the Data Access layer access that info without being passed the user? Anybody have any recommendations on how to do this elegantly?
EDIT: I want to keep validation, and CRUD activities as close to the Data layer as possible where the Core MVC project just calls Save() on an object and the Data layer validates the object, figures out what user modified or created it and saves it to the DB.
EDIT 2: It is imperative that the Data layer have absolutely no dependencies in the MVC layer.
The LastUpdated can easily be implemented with a Trigger on DB Insert/Updates, but the UpdatedBy is a bit trickier.
A key question is "does your business layer require knowledge of who is using it?" If so, then the interfaces can be designed to require that a username is provided when making actions. If not, then you need to make the data accessible from within/behind the business layer without it being explicitly provided (such as with Dependency Injection, or by providing a Context that is available throughout).
You could consider creating a separate audit trail using ActionFilters around your controller actions, which provide easy access to the Session and can create a running history of the actions your users take. This may or may not correlate 100% to your database records, but it does provide a clear history of the actions taken in the application--which is valuable in its own right.
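As a rough sketch of that idea, a filter like the following could stamp every action with the session user. This is a fragment that assumes ASP.NET MVC's `System.Web.Mvc` namespace; the `AuditAttribute` and `IAuditLog` names (and the `"UserName"` session key) are hypothetical, not from the question:

```csharp
using System;
using System.Web.Mvc;

// Hypothetical audit sink - in a real app, resolved from your DI container.
public interface IAuditLog
{
    void Record(string user, string controller, string action, DateTime when);
}

public class AuditAttribute : ActionFilterAttribute
{
    public IAuditLog Log { get; set; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // The filter runs inside the web layer, so Session is available here -
        // the data layer never needs to see it.
        var user = filterContext.HttpContext.Session["UserName"] as string ?? "anonymous";
        Log.Record(
            user,
            filterContext.ActionDescriptor.ControllerDescriptor.ControllerName,
            filterContext.ActionDescriptor.ActionName,
            DateTime.UtcNow);
        base.OnActionExecuted(filterContext);
    }
}
```

Decorating a controller (or registering the filter globally) then gives you the running history without touching the data layer at all.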
You could also consider using a Command pattern, whereby the application generates specific commands (e.g. an UpdateWidgetName command) that are enacted on the business/data layer. In some regards this is how MVC already works, but having an explicit Command which captures the user and date is still a useful addition to your business layer.
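A minimal sketch of such a command, capturing the user and date at creation time (the type names here are illustrative, not from the question):

```csharp
using System;

// Every command records who issued it and when, so the business layer
// never has to reach into Session to find the current user.
public abstract class Command
{
    public string IssuedBy { get; private set; }
    public DateTime IssuedAt { get; private set; }

    protected Command(string issuedBy)
    {
        IssuedBy = issuedBy;
        IssuedAt = DateTime.UtcNow;
    }
}

public class UpdateWidgetName : Command
{
    public int WidgetId { get; private set; }
    public string NewName { get; private set; }

    public UpdateWidgetName(string issuedBy, int widgetId, string newName)
        : base(issuedBy)
    {
        WidgetId = widgetId;
        NewName = newName;
    }
}
```

The MVC layer builds the command from Session data; the business/data layer only ever sees the command object, so it stays free of any web dependency.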
Also be aware of the shortcomings of keeping this on the record itself. You'll only know who last edited the record--you won't be able to tell specifically what they edited, or who edited it previously. For relatively simple scenarios this is usually sufficient, but it is far from providing actual historical data for the record.
If you really want 100% auditing you should look at the Event Sourcing design pattern, where effectively if an action isn't audited then it didn't happen. It's a very different paradigm from the typical CRUD approach, but it is very powerful (albeit more complicated to design initially).
One other note: consider separating your business and persistence code into two layers. Tying them together makes the business logic tightly coupled to persistence (bad), which will prevent it from being reused. Look into implementing a Repository which is dedicated to persisting and retrieving your business objects. It pays off.
If you use a structure like this in your application, you can define some core interfaces that can be used throughout your application (like ICurrentUserProvider), and then you can implement those interfaces in the parts of your application where they are best implemented, without creating a tight coupling or dependency to that specific part of the application.
When your web project is initialized, it can initialize your DI framework so that your controllers get their dependencies injected into them. That way your controller gets the Business Layer services it needs, and those Business Layer services have the data-layer implementations they need (without actually having a direct dependency on them), and the data access object gets the service that can tell it who the current user is (without depending directly on the MVC layer).
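A minimal sketch of how that interface could be laid out, assuming the `ICurrentUserProvider` name from above (the repository and entity names are hypothetical):

```csharp
using System;

// Defined in a core/shared interfaces assembly - no MVC dependency.
public interface ICurrentUserProvider
{
    string CurrentUserName { get; }
}

public class AuditedEntity
{
    public string UpdatedBy { get; set; }
    public DateTime UpdatedAt { get; set; }
}

// Lives in the data layer; depends only on the interface, never on Session.
public class AuditedRepository
{
    private readonly ICurrentUserProvider _users;

    public AuditedRepository(ICurrentUserProvider users)
    {
        _users = users;
    }

    public void Save(AuditedEntity entity)
    {
        // UpdatedBy validation/stamping stays close to the data layer,
        // as the question asks for.
        entity.UpdatedBy = _users.CurrentUserName;
        entity.UpdatedAt = DateTime.UtcNow;
        // ... persist the entity ...
    }
}
```

In the MVC project, a `SessionUserProvider` would implement `ICurrentUserProvider` by reading Session, and the DI container would wire it into the data layer at startup, so the dependency only ever points inward.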
Related
I want to keep my controllers as clean as possible, so I want to put all the business logic inside the service layer, where I want to inject the DB context. It's worth clarifying that I am going to use EF Core.
Knowing that EF Core already implements the repository pattern and unit of work, I would directly inject the DB context into my services. I wouldn't want to create unnecessary abstractions such as a repository layer.
It is common practice to inject the DbContext into the service layer; that does not, however, automatically make it a GOOD practice on its own.
I want to keep my controllers as clean as possible, so I want to put all the business logic inside the service layer
That statement can be contradictory: you want everything in one place, and you want to keep it as clean as possible... This is the major driving argument behind implementing your own repository.
Repository and Unit of Work
A key goal of the Repository pattern is to encapsulate data transformation and query logic so that it can be reused and separated from the database design itself.
Unit Of Work is concerned with tracking changes and coordinating how and when those changes are committed to the database, perhaps across multiple repositories.
EF DbContext manages a Unit Of Work implementation over a Repository of Business Domain Entities that are mapped to Database Schema Tables. EF therefore represents both UOW and Repo.
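To make the two responsibilities concrete, here is a framework-free sketch of what EF's DbContext combines. All type names here are illustrative, and the in-memory store stands in for the database:

```csharp
using System;
using System.Collections.Generic;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Repository: encapsulates query/storage logic for one entity type.
public class WidgetRepository
{
    private readonly Dictionary<int, Widget> _store = new Dictionary<int, Widget>();
    public void Add(Widget w) { _store[w.Id] = w; }
    public Widget Find(int id)
    {
        Widget w;
        return _store.TryGetValue(id, out w) ? w : null;
    }
}

// Unit of Work: tracks pending changes and commits them as one batch,
// much like DbContext defers everything until SaveChanges().
public class UnitOfWork
{
    private readonly List<Action> _pending = new List<Action>();
    public void RegisterChange(Action apply) { _pending.Add(apply); }
    public int Commit()
    {
        foreach (var apply in _pending) apply();
        int count = _pending.Count;
        _pending.Clear();
        return count; // analogous to SaveChanges() returning the affected count
    }
}
```

EF gives you both of these at once per entity set, which is exactly why adding a thin hand-rolled repository on top of it can feel redundant.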
When Is EF A good Repo?
Just because EF is A repo does not mean it is THE repo for your solution. EF can be the perfect repo when the business domain logic has direct access to it, but disconnected service architectures can get in the way of this. Unless your entire business logic is encapsulated in the service layer--so that every button click and decision point in your client/UI is mapped back to the API--some of the business logic spills over into the client side, and an EF-based service layer then requires a lot of plumbing to expose all of the functionality that the client needs in a meaningful way.
If you end up mapping all or most of the EF model classes into DTOs, or have an entirely different business model that you want to expose to clients, then doing all of this in your API controller classes can easily become a huge management and maintenance issue. With API controllers you really need to separate the routing and de/serialization of requests and responses from the actual logic; using another repo to encapsulate the logic implementation away from the transport (the controllers) is a good thing. This usually means that you would NOT inject the DbContext, unless it was simply to pass through to the repo for that controller.
If the EF Model is not being exposed by the controller, then it is better to avoid injecting the DbContext into the controller, as it will encourage you to do too much in the controller itself.
Let's consider when it IS a good practice to inject the DbContext into the service layer:
In a tiered solution with a Service Layer, if the EF model represents the main DTOs that will be transferred between the service and the client, then EF is a good choice to use in the controllers directly. In fact, if that is your goal, then you should consider OData as a service framework: it does this and provides an additional layer of abstraction (the OData model) that is initially mapped to the EF model, but you can easily extend elements to implement custom business logic or structures, with the added benefit of exposing a convention-based standard interface for querying data from the clients.
OData basically maps HTTP queries to deferred EF queries, which are in turn mapped to SQL queries (or whatever your chosen backend is). If you use the Simple or Connected OData Client frameworks then Client-side linq statements can be passed through (albeit indirectly) to the database for execution.
When your EF model represents the bulk of the DTOs exposed from the service and consumed by the clients, it is standard practice to inject the DbContext into the controller definitions. OData is an implementation that does this with minimal effort and provides a client-side implementation to manage UOW on the client as well!
Why do I need another abstraction layer?
As mentioned above, due to the disconnected nature of things, the service layer almost always ends up forming its own abstraction layer, whether you choose to identify or implement it or not: the service layer imposes security and structure constraints on the calls to our business logic. Sometimes we transform data on the service side for efficiency or to reduce bandwidth, other times to deliberately hide or withhold confidential or process-critical data, or to prevent the client from updating certain fields.
There is also the question of protocols; most modern APIs even add content negotiation so that the service can be consumed in different formats as specified by the client. Your controller code will get extremely heavy, and dare I say dirty, when you start to tweak some of these factors.
You can gain a great deal of flexibility and interoperability from creating your own repo to manage these interactions, to separate transport and bindings from your business logic.
I wouldn't want to create unnecessary abstractions such as a repository layer.
In a small project or with a very small team it may seem unnecessary, but many of the benefits of creating your own repo will be realized later: when on-boarding new developers, when extending your project to more or new types of clients, or, perhaps most importantly, when the underlying frameworks or operating systems change and you need to adapt to them.
Another abstraction layer shouldn't mean that you are introducing another performance bottleneck; in most cases there are ways to abstract the business logic that either improve throughput or are effectively pass-through. Any performance loss, if observed, should either be fixed or be easily justified by the net gains to the user or the SDLC.
With service based architecture it is normal to see these abstractions:
Database
Data Model
ORM
Business Logic
API Transport (Server)
(HTTP)
API Transport (Client)
ViewModel
View
If you utilise SQL Server, EF and OData, then the only elements you need to explicitly code yourself are:
Data Model
Business Logic
(HTTP)
ViewModel
View
In the end, it is not a question of whether the DbContext should be injected into the controller; it's more a question of why. It's the business logic side of things that matters, and this usually requires your DbContext. If you choose to merge your business logic with the API controllers, then you are still managing the business logic; it's just harder to identify the boundaries between what is transport related and what is actual business logic.
Whether the business logic needs to be a separately defined repository or can be an extension of the DbContext is up to you. As reasoned above, it will depend greatly on whether you expose the EF data model objects to the client at all, or whether all interactions need to be transformed to and from DTO structures. If it's ALL DTOs and zero EF model being exposed, then this falls 100% into the realm of a repo. Your controllers can be the repo implementation, but everything said before this suggests that is not the best idea for the long-term stability of your development team or the solution itself.
As an example, I often write my business logic as extension methods to the EF Model. I use the EF as the repo in this case, this allows many other server side processes to access the business logic without having to go through the HTTP activation pipeline when it makes sense to do so.
T4 templates are used to generate the service layer, including OData controllers, from the EF model and the extension methods that are specifically decorated with attributes identifying those methods that should be available to external clients.
In this implementation, the DbContext is injected to the controllers, because that is the entry point to the business logic!
The client-side projects use the OData Connected Service to generate client-side proxies, providing a strongly typed and LINQ-enabled context that is similar to the DbContext but with the business logic carefully constrained within it.
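A simplified sketch of the extension-method approach described above. The `Order` type, the business rule, and the `ExposeToClients` attribute are all illustrative stand-ins, not the author's actual code:

```csharp
using System;

public class Order
{
    public decimal Total { get; set; }
    public bool IsShipped { get; set; }
}

// Hypothetical marker attribute that T4 templates could look for when
// deciding which methods to surface through generated OData controllers.
[AttributeUsage(AttributeTargets.Method)]
public class ExposeToClientsAttribute : Attribute { }

public static class OrderBusinessLogic
{
    [ExposeToClients]
    public static bool CanShip(this Order order)
    {
        // The business rule lives with the model, so any server-side process
        // can call it directly without going through the HTTP pipeline.
        return !order.IsShipped && order.Total > 0m;
    }
}
```

Because the logic hangs off the EF model itself, a batch job and an OData controller can invoke exactly the same rule.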
I am currently developing a Windows Forms application that I plan to run on a cloud setup. The application will calculate new data, update it in the database, and act as a sort of control panel for a live data feed RESTful API that I wish to create using ASP.NET MVC 5 Web API.
I am wondering: is it viable to connect these 2 separate applications to a single database? It is unlikely that I'd have database entry clash issues, as each application has a separate task of reading or writing data for certain tables.
If viable, would that mean every time I make table changes I'd have to update both Entity Framework database models? (Not a major chore.)
Is there a better solution to this? Should I scrap the idea of running a Windows Forms application to control certain elements of the backend of the public API?
What would be the future issues with designing something like this, if any?
So you have a bunch of options there, assuming you have a layered architecture:
Share your DB, DAL and also Business Layer
Extend your WEB API and utilize it in your WinForms
Reuse the DAL only (not the best approach, as business systems are not only data but also behavior, which resides in the Business Layer)
Share the DB only - this is the worst option, with numerous drawbacks
See options 1 and 2 in the image:
Create a Data Access Layer as a separate component,
like a DAL.dll
Each application has a Logic layer, where "whatever you do" is handled.
Each logic layer now uses a sort of interface layer that translates objects from either of your applications to the objects of the DAL.
When you change the DB now, you merely have to update the interface layer.
(Of course, if you are adding more features, you will have to update all layers, but that isn't really any different.)
I suggest this approach, as it will make your debugging task much easier. And the slight extra code overhead won't affect performance unless you have a massive communication requirement.
If you want more specifics, I would need examples of classes from either program, and your SQL table design.
But there is nothing wrong with your approach.
I'm working on a project which has basically three layers: Presentation, business and data.
Each layer is in a different project, and all layers use DTOs defined in another project.
The business layer and data layer return DTOs or lists of DTOs when querying the database.
So far so good, but now we have to query views, and those views of course do not match an existing DTO. What we have done until now is just create a special DTO plus business- and data-layer classes, so they were treated like normal entities (minus insert, update, etc.).
But it does not seem correct. Why should they be treated like normal entities when they are clearly not? The DTO seems necessary, but creating a "business logic" and a data-layer class for every view seems rather awkward. So I thought I'd create one generic business- and data-layer class which holds the logic/code for all views (I would still have to create a DTO for every different view; perhaps I could use anonymous types).
What do you think about my idea, and how would you solve this issue?
EDIT: 9. August 2011
Sorry, the post may have been unclear.
By views I meant views in SQL Server.
I feel your pain completely. The fact is that in almost every non-trivial project of decent complexity, you will get to the point where the things you have to show users in the UI overlap, aggregate, or are simply a subset of the data of business entities. The way I tend to approach this is to accept this fact and go even further - separate the query side from the business logic side, both logically and physically. The fact is that you need your entities only for actual business operations and for keeping the business constraints valid, and when does this happen? Only when someone changes the data. So there is no need to even build entities when you display the data.
The way I like to structure the solutions is:

User opens the view -> query is performed only to get the specific data for the view -> returned data is the model (although you could call it a DTO as well; in this case it's the same thing)

User changes something -> controller (or service) builds the full entity from the repo -> business logic action is performed on the entity -> changes are persisted -> result is returned
What I want to say is, it is OK to treat your read side separately from the write side. It is OK to have different infrastructure for this as well. When you start to treat it differently, you will see the benefits - for example, you can tailor your queries to what you need on the UI.
You can even get to the point where your infrastructure will allow you to build your queries with different techniques - for example, using LINQ or plain SQL queries - whatever is best for certain scenarios.
I would advise against using DTOs between layers. I'm not convinced that there's any benefit, but I'll be happy to take instruction if you think you have some.
The harm comes in maintaining multiple parallel hierarchies that express the same idea (business objects plus multiple DTOs between layers). It means lots more code to maintain and greater possibility of errors.
Here's how I'd layer applications:
view <--- controller <--- service <--- + <--- model
                                       + <--- persistence
This design decouples views from services; you can reuse services with different views. The service methods implement use cases, validate inputs according to business rules, own units of work and transactions, and collaborate with model and persistence objects to fulfill requests.
Controller and view are tightly coupled; change the view, change the controller. The view does nothing other than render data provided by the controller. The controller is responsible for validation, binding, choosing the appropriate services, making response data available, and routing to the next view.
Cross cutting concerns such as logging, transactions, security, etc. are applied at the appropriate layer (usually the services).
Services and persistence should be interface-based.
I've dropped most layered architectures like this as they are a pain to manage all the transformations and are over-complicated. It's typical astronaut architecture. I've been using the following:
View models for forms/views in ASP.Net MVC. This is an important decoupling step. The UI will evolve separately to the model typically.
No service layer, instead replacing it with "command handlers" (mutating operations) and "finders" (query operations) which represent small operations and queries respectively (CQS - Command Query Separation).
Model persistence with NHibernate and ALL domain logic inside the model.
Any external services talk to the finders and command handlers as well
This leads to a very flat manageable architecture with low coupling and all these problems go away.
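A minimal sketch of the finder / command-handler split (CQS) described above; all type names are illustrative, and an in-memory dictionary stands in for NHibernate persistence:

```csharp
using System.Collections.Generic;

// Finders: read-only query operations, shaped for whatever the view needs.
public interface IFinder<TCriteria, TResult>
{
    IEnumerable<TResult> Find(TCriteria criteria);
}

// Command handlers: one small mutating operation each.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class RenameCustomer
{
    public int CustomerId;
    public string NewName;
}

public class RenameCustomerHandler : ICommandHandler<RenameCustomer>
{
    private readonly IDictionary<int, string> _customers;

    public RenameCustomerHandler(IDictionary<int, string> customers)
    {
        _customers = customers;
    }

    public void Handle(RenameCustomer command)
    {
        _customers[command.CustomerId] = command.NewName;
    }
}
```

Each handler is small enough to test in isolation, which is a large part of why this stays manageable without a full service layer.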
The past few days I have done a lot of research into the DAL/BLL/UI approach, without a very clear understanding of how it will apply to my project. In the past, I have left out the BLL, connecting my UI directly to the Data Access Layer (LINQ to SQL dbml). But I do not think this is a good idea where I work now (or maybe even in the past), because we have a lot of different applications and I'd like to use the same DAL/BLL as they are built.
My question is, how does the BLL help me when, in most of my applications, all I really do is use the LinqToSqlDataSource/GridView to connect to my DataContext to take care of all the updating/editing, etc.? Also, each new web application will, at some level, require unique changes to the DAL/BLL to get the required data, possibly affecting other apps using the same DAL/BLL. Is this reuse of the DAL/BLL the right way of doing this, or am I missing something?
I think the BLL comes in when I need to build, for example, security classes for the various web applications that will be built. But when I use the LinqToSqlDataSource, why would I bother to connect it to the BLL?
DAL
LinqToSQL dbml DataContext.
Does using LinqToSQL change how I should use this design?
BLL
Security for the various websites used by the company.
Query the DAL and return what(?) when using the LinqToSqlDataSource; functions that handle various result sets (I am really unsure how this should work with the BLL, sorry if the question is unclear)
UI
Reference only the BLL?
The DAL and BLL are separated by one often subtle but key difference: business logic. Sounds moronically simple, but let me explain further, because the distinctions can be VERY fine and yet impact architecture in huge ways.
Using Linq2SQL, the framework generates very simple objects that each represent one record in one table. These objects are DTOs; they are lightweight POCO (Plain Ol' CLR Object) classes that have only fields. The Linq2SQL framework knows how to instantiate and hydrate these objects from DB data, and similarly it can digest the data contained in one into SQL DML that creates or updates the DB record. However, few or none of the rules governing the relationships between fields of various objects are known at this level.
Your actual domain model should be smarter than this; at least smart enough to know that a property on an Order object named SubTotal should be equal to the sum of all ExtendedCosts of all OrderLines, and similarly, ExtendedCost should be the product of the UnitPrice and the Quantity. In many modern programs, your domain forms part of your BLL, at least to this extent. The objects created by Linq2SQL probably shouldn't have to know all this, especially if you aren't persisting SubTotal or ExtendedCost. If you rely on the Linq2SQL DTOs, you've basically tied yourself to what's called an Anemic Domain Model, which is a known anti-pattern. If the domain object can't keep itself internally consistent at least, then any object that works with the domain object must be trusted to keep it that way, requiring all those objects to know rules they shouldn't have to.
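The SubTotal/ExtendedCost rule above, kept inside the domain objects rather than in an anemic Linq2SQL DTO, might look like this simplified sketch:

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    // The line keeps itself consistent: ExtendedCost is derived, never stored,
    // so no caller can set it to something inconsistent.
    public decimal ExtendedCost
    {
        get { return UnitPrice * Quantity; }
    }
}

public class Order
{
    public List<OrderLine> Lines { get; private set; }

    public Order()
    {
        Lines = new List<OrderLine>();
    }

    // SubTotal is always the sum of the lines' extended costs.
    public decimal SubTotal
    {
        get { return Lines.Sum(l => l.ExtendedCost); }
    }
}
```

Compare this with the anemic version, where `SubTotal` and `ExtendedCost` would be plain settable fields and every consumer would have to be trusted to keep them in sync.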
The UI should know about the domain, or if you prefer, it should know some abstracted way to get the data from the domain for read/write purposes (generally encapsulated in objects called Controllers, which work with the domain layer and/or Linq2SQL). The UI should NOT have to know about the DB in any program of moderate size or larger; either the domain objects can hydrate themselves with a reference to objects in the DAL, or they are produced by custom objects in the DAL that you create to do the hydration, which are then given to the controller. The connected ADO model and interop with GridViews is admirable, but it doesn't allow for abstraction. Say you wanted to insert a web service layer between the domain and UI, to allow the UI to live in a mobile app that works with data in your warehouse. You'd have to rebuild your UI, because you can no longer get objects from Linq2SQL directly; you get them from the web services. If you had a controller layer that talked to Linq2SQL, you could replace that layer with controllers that talk to the web services. It sounds like a minor difference; you always have to change something. But now you're using EXACTLY the same UI in the mobile and desktop apps, so changes at THAT layer don't have to be made twice just because the two apps get their data in different ways.
This is a great question that I have been mulling over with our catalog app for a year now. A specific instance for me might help you with the pattern.
I have a page to display the contents of a shopping cart. In the 'early days' this page had a grid populated by the results of a SQL stored procedure that, given the order number, listed the items in the cart.
Now I have a 'cart' BLL object which contains a collection of 'row' objects. The grid is the same, but the data source is the cart's rows.
Why did I do this? Initially, not because of any fancy design patterns. I had so many special cases to handle based on fields in each row, AND I had other places I needed to show the same cart-content data, that it just made more sense to build the objects. Now a cart loads from a repository, and my pages have no idea what that repository does. Heck, for testing, it's hard-coded cart data.
The cart then uses a repository to load up the rows. Each row has logic to manipulate itself, not knowing where the data came from.
Hopefully that helps?
Should the model just be data structures? Where do the services (data access, business logic) sit in MVC?
Let's assume I have a view that shows a list of customer orders. I have a controller class that handles the clicks on the view controls (buttons, etc.).
Should the controller kick off the data access code? Think button click, reload order query. Or should this go through the model layer at all?
Any example code would be great!
Generally I implement MVC as follows:
View - Receives data from the controller and generates output. Generally only display logic should appear here. For example, if you wanted to take an existing site and produce a mobile/iPhone version of it, you should be able to do that just by replacing the views (assuming you wanted the same functionality).
Model - Wrap access to data in models. In my apps, all SQL lives in the model layer; no direct data access is allowed in the views or controllers. As Elie points out in another answer, the idea here is to (at least partially) insulate your controllers/views from changes in database structure. Models are also a good place to implement logic such as updating a "last modified" date whenever a field changes. If a major data source for your application is an external web service, consider wrapping that in a model class.
Controllers - Used to glue Models and Views together. Implement application logic here, validate forms, transfer data from models to views, etc.
For example, in my apps, when a page is requested, the controller will fetch whatever data is required from the models, and pass it to a view to generate the page the user sees. If that page was a form, the form may then be submitted, the controller handles validation, creates the necessary model and uses it to save the data.
If you follow this method, Models end up being quite generic and reusable. Your controllers are of a manageable size and complexity because data access and display has been removed to Models and Views respectively, and your views should be simple enough that a designer (with a little training) could comprehend them.
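The "last modified" rule mentioned above can live entirely inside the model, so no controller or view can forget to apply it. A minimal sketch, with an illustrative `Article` type:

```csharp
using System;

public class Article
{
    private string _title;

    public string Title
    {
        get { return _title; }
        set
        {
            if (_title != value)
            {
                _title = value;
                LastModified = DateTime.UtcNow; // stamped by the model itself
            }
        }
    }

    // Null until something actually changes.
    public DateTime? LastModified { get; private set; }
}
```

In a real model class the setter would also be where the SQL UPDATE is eventually triggered, keeping all data access behind the model boundary.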
I wouldn't put Data Access Code in the Controller.
To build on what has already been said, it's important to think of layering WITHIN the layers. For example, you will likely have multiple layers within the Model itself - a Data Access Layer which performs any ORM and Database access and a more abstract layer which represents the Business Objects (without any knowledge of HOW to access their data).
This will allow you to test the components of your system more easily, as it supports mocking.
I like to keep the "contracts", or interfaces, for model persistence or service access in the domain (model) layer. I put implementations of data access or service calls in another layer.
The controllers are instantiated with constructors that take interfaces for the services, e.g. ISomeService, as parameters. The controllers themselves don't know how the service layers are implemented, but they can access them. Then I can easily substitute SqlSomeService or InMemorySomeService.
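A minimal sketch of that substitution, using the `ISomeService` name from above (the controller name and the in-memory implementation are illustrative):

```csharp
using System.Collections.Generic;

public interface ISomeService
{
    IEnumerable<string> GetItems();
}

// Test double / development implementation; a SqlSomeService would
// implement the same interface against the database.
public class InMemorySomeService : ISomeService
{
    public IEnumerable<string> GetItems()
    {
        return new[] { "alpha", "beta" };
    }
}

// The controller only knows the interface; the DI container decides
// which implementation it receives, so swapping Sql for InMemory
// never touches this class.
public class CatalogController
{
    private readonly ISomeService _service;

    public CatalogController(ISomeService service)
    {
        _service = service;
    }

    public IEnumerable<string> Index()
    {
        return _service.GetItems();
    }
}
```

Unit tests construct the controller with the in-memory service directly; production wiring hands it the SQL-backed one.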
I've also been fairly happy with a concrete service implementation that takes a domain (model) layer repository as a parameter to its constructor. For example: ICatalogRepository, with SqlServerCatalogRepository : ICatalogRepository, is handed to CatalogService(ICatalogRepository, ISomeOtherDependency).
This kind of separation is easier with dependency injection frameworks.
The view would relay what should happen on a click in the UI to the controller layer, which would contain ALL business logic and would in turn call the model layer, which would only make database calls. Only the model layer should be making database calls, or you will defeat the purpose of the MVC design pattern.
Think about it this way. Let's say you change your database. You would want to limit the amount of code change required, and keep all those changes together without affecting other pieces of your application. So by keeping all data access in the Model layer, even the simple calls, you limit any changes required to the Model layer. If you were to bypass the Model layer for any reason, you would now have to extend any changes needed to any code that knows about the database, making such maintenance more complex than it should be.