Automapper vs Dapper for mapping - c#

This question is to verify whether the current implementation is the right way to go in terms of best practices and performance. So far, in all my previous companies, I have been using AutoMapper to map relational objects to domain model entities, and domain model entities to DTOs. The ORM tool has been Entity Framework.
In my current company they are using Dapper as the ORM tool and do not use AutoMapper, as they say Dapper does the mapping for you internally. The way they have structured the project is to create a separate class library project that contains the DTOs, and reference the DTOs in the data access and business layers.
The query result returned by Dapper is internally mapped to the DTOs. These DTOs are returned to the business layer, and so on.
For example
In the code below, ParticipantFunction is the DTO.
Repository file in DataAccess layer
public List<ParticipantFunction> GetParticipantFunctions(int workflowId)
{
    // Select the participant functions for the workflow's department
    string selectSql = @"SELECT [WFPatFunc_ID] AS WFPatFuncID
                               ,[WFFunction]
                               ,[SubIndustryID]
                               ,[DepartmentID]
                         FROM [dbo].[WF_ParticipantsFunctions]
                         WHERE [DepartmentID] = (SELECT TOP 1 [DepartmentID] FROM [dbo].[WF] WHERE [WF_ID] = @workflowId)";

    return _unitOfWork.GetConnection().Query<ParticipantFunction>(selectSql, new
    {
        workflowId = workflowId
    }).ToList();
}
The reasoning I have been given by the developers is that AutoMapper would just be overhead and reduce speed, and since Dapper does the mapping internally, there is no need for it.
I would like to know whether the practice they are following is fine and has no issues.

There is no right or wrong here. If the current system works and solves all their requirements, then great: use that! If you have an actual need for something where auto-mapper would be useful, then great: use that!
But: if you don't have a need for the thing that auto-mapper does (and it appears that they do not), then... don't use that?
Perhaps one key point / question is: what is your ability to refactor the code if your requirements change later? For many people, the answer is "sure, we can change stuff" - in that case I would say: defer adding an additional layer until you actually have a requirement for an additional layer.
If you will absolutely not be able to change the code later, perhaps due to lots of public-facing APIs (software as a product), then it makes sense to de-couple everything now so there is no coupling / dependency in the public API. But: most people don't have that. Besides which, Dapper makes no demands on your type model whatsoever other than: it must look kinda like the tables. If it does that, then again: why add an additional layer if you don't need it?

This is more of an architecture problem and there is no good or bad.
Pros of DTOs:
Layering - You are not directly exposing your data objects, so you can use attributes for mapping and other stuff not needed in your UI. That way you can have the DTOs in a library that has no dependencies on your data access stuff. (Note: you could do this with fluent mapping, but this way you can use what you like.)
Modification - If your domain model changes, your public interface stays the same. Let's say you add a property to the model: all the stuff you have already built won't be getting the new field in its JSON for no reason.
Security - This is why Microsoft started pushing DTOs, if I remember correctly. I think it was CodePlex (not 100% sure it was them) that was using EF entities directly to add stuff to the database. Someone figured this out and just expanded the JSON with stuff he was not allowed to access; for example, if you have a post that has a reference to a user, you could change the role of the user by adding a new post, because of change tracking (see the sketch after this list). There are ways to protect yourself from this, but security should always be opt-out, not opt-in.
I like to use DTOs when I need to expose the business logic (BL) level through a public interface, for example an API that has system operations like api/Users/AllUsers or api/Users/GetFilteredUsers.
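To make the security bullet concrete, a minimal sketch (hypothetical types) of the over-posting risk DTOs guard against:

// Entity: if bound directly from the request, a client can post Role and
// change tracking will happily persist it.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Role { get; set; }   // not meant to be client-settable
}

// DTO: carries only the fields a client is allowed to send.
public class UserDto
{
    public string Name { get; set; }
}

A posted Role value has nowhere to land on the DTO, so it never reaches the tracked entity.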
No DTOs
Performance - Generally, not using DTOs will run faster: there is no extra mapping step. Projections help with this, and you can really optimize when you know exactly what you need (see the projection sketch after this list).
Speed of development and a smaller code base - Sometimes a large architecture is overkill and you just want to get things done, not spend most of your time copy-pasting properties.
More flexibility - This is the flip side of the security point: sometimes you want to use the same API to do more than one thing, for example letting the UI decide what it wants to see from a big object, like select and expand. (Note: this can be done with DTOs, but if you have ever tried to do expand with DTOs, you know how tricky it can get.)
I skip the DTOs when I need to expose the data access level to the client, e.g. if you need to use Breeze or JayData and/or OData, with an API like api/Users and api/Users?filter=(a=>a.Value > 5)
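As a rough sketch of the projection point above (the context and property names are hypothetical, EF-style LINQ):

// Without a DTO layer you can project exactly what a given caller needs and
// let the database do the work; only the selected columns come over the wire.
var users = context.Users
    .Where(u => u.Value > 5)
    .Select(u => new { u.Id, u.Name })
    .ToList();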

Related

Abstracting Data Access Layer with Entity Framework code first

I have two separate databases. Database A is small and contains 6 tables and some stored procedures, database B is larger and contains 52 tables, 159 views, and some stored procedures. I have been tasked with creating an application that does operations on both of these databases. There is already code that has been written that accesses these databases but it is not testable and I would like to create the DAL code from scratch. So I made a code first Entity Framework connection and let it create all of the POCOs for me as well as the Context classes. Now I have 3 DatabaseContext classes. 1 for database A which is just the 6 separate tables. 1 for the tables from database B and 1 for the views from database B. I split the Context class for database B into 2 separate context classes because the tables and the views are used in 2 different scenarios.
I have created 2 projects in a single solution. DomainModel and DataAccessLayer. The DomainModel project holds all of the POCOs which the Entity Framework code first generated for me. DataAccessLayer is where the DatabaseContexts are and where I would like to create the code which other projects can use to access the database. My goal being to make a reusable and testable data access layer code.
The QUESTION is: What do I create in the DataAccessLayer project to make the DAL reusable in other projects that want to be able to talk to the database in an abstract way, and what is the best way to structure it?
What I have done:
In my search on the internet I have seen that people recommend the repository pattern (sometimes with UnitOfWork) and it seems like that is the way to go, so I created an interface called IGenericRepository and an implementation called GenericEntityFrameworkRepository. The interface has methods such as GetByID, Insert, Delete, and Update. The implementation takes a DbContext as a parameter in the constructor and implements all of the methods. The next step might be to create specific implementations for the different tables and views but that is going to be a very tedious thing to do since there are so many tables and views.
I have also read that Entity Framework IS the DAL. Then it might be enough to just create another project that holds the business logic and returns IEnumerables. But that seems like it will be hard to test.
What I am trying to achieve is a well structured and testable foundation to access our database that can be tested thoroughly and expanded as other projects start to require other functionality to the database.
Design and architecture is all about trade-offs. Be careful when you want to consider abstracting your data access for purposes of reuse. Within an application this leads to overly complex and poorly performing solutions to what can be a very simple and fast solution.
For instance, if I want to build a business domain layer as a set of services to be consumed by a number of different applications (web applications and services/APIs that will serve mobile apps or third-party interests), then that domain layer becomes a boundary that defines what information will come in and what information will go out. Everything in and out of that abstraction should be DTOs that reflect the domain I am making available via the actions I expose. Within that boundary, EF should be leveraged to perform the projection needed to produce the relevant DTOs, or to update entities based on the specific details passed in (after verification). The trade-off is that every consumer will need to make do with this common access point, so there is little wiggle room to customize what comes out unless you build it in. This is preferable to having everything just do its own thing, accessing the database and running custom queries or calling stored procedures.
What I see most often, though, is that within a single solution, developers want to abstract the system so it doesn't "know" that the data domain is provided by EF. The application passes around POCO entities or DTOs, and they implement things like generic repositories with GetById, GetAll, Update, Upsert, etc. This is a very, very poor design decision, because it means that your data domain cannot take advantage of things like projection, pagination, custom filtering, ordering, eager loading related data, etc. Abstracting the EF DbContext is certainly a valid objective for enabling unit testing, but I strongly recommend avoiding the performance pitfall of a generic repository pattern. IMO, attempting to abstract away EF just for the sake of hiding it or making it swappable leads to either a poorly performing and/or an overly complex mess.
Take for example a typical GetAll() method.
public IEnumerable<T> GetAll()
{
    return _context.Set<T>().ToList();
}
What happens if you only want orders from the last 3 months?
var orders = orderRepository.GetAll()
    .Where(x => x.OrderDate >= startDate).ToList();
The issue here is that GetAll() will always return all rows. If you have 4 million orders you can appreciate that is not desirable.
You could make an OrderRepository that extends Repository to implement a method like GetOrdersAfter(DateTime), but soon, what is the point of the Generic Repository? The next option would be to pass something like an Expression<Func<TEntity, bool>> where clause into the GetAll() method. There are plenty of examples of that out there. However, doing that leaks EF-isms into the consuming code. The where-clause expression has to conform to EF rules; for instance, passing order => order.SomeUnmappedProperty == someValue would cause the query to fail, as EF won't be able to convert that down to SQL. Even if you're OK with that, the GetAll method will need to start looking like:
IEnumerable<TEntity> GetAll(Expression<Func<TEntity, bool>> whereClause = null,
    IEnumerable<OrderByClause> orderByClauses = null,
    IEnumerable<string> includes = null,
    int? pageNumber = null, int? pageSize = null)
or some similar monstrosity to handle filtering, ordering, eager loading, and pagination, and this doesn't even cover projection. For that you end up with additional methods like:
IEnumerable<TDTO> GetAll<TDTO>(Expression<Func<TEntity, bool>> whereClause = null,
    IEnumerable<OrderByClause> orderByClauses = null,
    IEnumerable<string> includes = null,
    int? pageNumber = null, int? pageSize = null)
which will call the other GetAll and then perform a Mapper.Map to return a desired DTO/ViewModel that is hopefully registered in a mapping dependency. Then you need to consider async vs. synchronous flavours of the methods (or forcing all calls to be one or the other).
Overall my advice would be to avoid Generic Repository patterns and don't abstract away EF just for the sake of abstraction; instead, look for ways to leverage the most you can out of it. Most of the problems developers run into with performance and odd behaviour with EF stem from efforts to abstract it. They are so worried about needing to abstract it to be able to replace it if they need to, that they end up with those performance and complexity problems entirely because of the limitations imposed by the abstraction. (A self-fulfilling prophecy.) "Architecture" can be a dirty word when you try to prematurely optimize a solution to solve a problem you don't currently face, and probably never will. Always keep YAGNI and KISS at the forefront of all design considerations.
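To illustrate what "leveraging EF" can look like instead of a generic repository, a sketch (OrderSummary, _context and the property names are illustrative; ToListAsync assumes EF6 or EF Core):

// Filter, order, project, and paginate in one expression; EF translates the
// whole chain into a single SQL query that returns only the needed columns.
public async Task<List<OrderSummary>> GetRecentOrderSummaries(
    DateTime startDate, int page, int pageSize)
{
    return await _context.Orders
        .Where(o => o.OrderDate >= startDate)
        .OrderByDescending(o => o.OrderDate)
        .Select(o => new OrderSummary
        {
            Id = o.Id,
            OrderDate = o.OrderDate,
            CustomerName = o.Customer.Name
        })
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToListAsync();
}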
I guess you are torn between selecting the right architecture for a given solution. Software must be created to meet the business needs, so depending on the business and the complexity of the domain, we have to choose the right architecture.
Microsoft elaborates on this in a detailed guide, which you will find here:
https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/
If you want some level of abstraction between your data and the BL, then "N-Layer" architecture is not the best solution. I am not saying N-Layer is entirely outdated, but there are more suitable solutions to this problem.
And if you are using EF or EF Core, then there is no need to implement a Repository, because EF itself contains a Unit of Work and Repositories.
Clean Architecture introduces a higher level of abstraction between layers. It abstracts away your data layer and makes it reusable across different projects or even different solutions. Clean Architecture is a big topic that I can't describe here in more detail, but I learned the subject from him:
Jason Taylor- Clean Architecture
https://www.youtube.com/watch?v=dK4Yb6-LxAk
https://github.com/jasontaylordev/CleanArchitecture

Using View-Models with Repository pattern

I'm using a domain-driven, N-layered application architecture with EF code first in my recent project. I defined my Repository contracts in the Domain layer.
A basic contract to make other Repositories less verbose:
public interface IRepository<TEntity, in TKey> where TEntity : class
{
    TEntity GetById(TKey id);
    void Create(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
}
And specialized Repositories for each aggregate root, e.g.:
public interface IOrderRepository : IRepository<Order, int>
{
    IEnumerable<Order> FindAllOrders();
    IEnumerable<Order> Find(string text);
    //other methods that return the Order aggregate root
}
As you see, all of these methods depend on Domain entities.
But in some cases an application's UI needs some data that isn't an entity; that data may be made from the data of two or more entities (View-Models). In these cases, I define the View-Models in the Application layer, because they depend closely on the application's needs, not on the domain.
So, I think I have two ways to show data as View-Models in the UI:
Leave the specialized Repositories depending on entities only, and map the results of the Repositories' methods to View-Models when I want to show them to the user (in the Application layer, usually).
Add some methods to my specialized Repositories that return their results as View-Models directly, and use these returned values in the Application layer and then the UI (I call these specialized Repository contracts Readonly Repository Contracts; they go in the Application layer, unlike the other Repository contracts, which go in the Domain).
Suppose my UI needs a View-Model with 3 or 4 properties (from 3 or 4 big entities).
Its data could be generated with a simple projection, but in case 1, because my methods cannot access the View-Models, I have to fetch all the fields of all 3 or 4 tables, with sometimes huge joins, and then map the results to the View-Models.
But in case 2, I could simply use a projection and fill the View-Models directly.
So I think that, from a performance point of view, case 2 is better than case 1. But I have read that, from a design point of view, Repositories should depend on entities, not View-Models.
Is there any better way that does not make the Domain layer depend on the Application layer and also doesn't hurt performance? Or is it acceptable that, for read queries, my Repositories depend on View-Models (case 2)?
Perhaps using the command-query separation (at the application level) might help a bit.
You should make your repositories dependent on entities only, and keep only the trivial retrieve method - that is, GetOrderById() - on your repository (along with create / update / merge / delete, of course). Imagine that the entities, the repositories, the domain services, the user-interface commands, and the application services that handle those commands (for example, a certain web controller that handles POST requests in a web application, etc.) represent your write model, the write-side of your application.
Then build a separate read model that can be as dirty as you wish - put there the joins of 5 tables, the code that reads from a file the number of stars in the Universe, multiplies it with the number of books starting with A (after doing a query against Amazon) and builds up an n-dimensional structure, etc. - you get the idea :) But on the read model, do not add any code that deals with modifying your entities. You are free to return any View-Models you want from this read model, but do not trigger any data changes from here.
The separation of reads and writes should decrease the complexity of the program and make everything a bit more manageable. And you may also see that it won't break the design rules you have mentioned in your question (hopefully).
From a performance point of view, using a read model - that is, writing the code that reads data separately from the code that writes / changes data - is about as good as it gets :) This is because you can even mangle some SQL code there without sleeping badly at night - and SQL queries, if written well, will give your application a considerable speed boost.
Nota bene: I was joking a bit on what and how you can code your read side - the read-side code should be as clean and simple as the write-side code, of course :)
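For illustration, a minimal sketch of such a read-side query object (all names are hypothetical; an EF context is shown, but hand-written SQL fits the same slot):

// Read side: returns view models only; never exposes or modifies entities.
public class OrderSummaryQuery
{
    private readonly MyDbContext _context;  // assumed EF context

    public OrderSummaryQuery(MyDbContext context)
    {
        _context = context;
    }

    public List<OrderSummaryViewModel> ForCustomer(int customerId)
    {
        return _context.Orders
            .Where(o => o.CustomerId == customerId)
            .Select(o => new OrderSummaryViewModel
            {
                OrderId = o.Id,
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .ToList();
    }
}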
Furthermore, you may get rid of the generic repository interface if you want, as it just clutters the domain you are modeling and forces every concrete repository to expose methods that are not necessary :) See this. For example, it is highly probable that the Delete() method would never be used for the OrderRepository - as, perhaps, Orders should never be deleted (of course, as always, it depends). Of course you can keep the database-row-managing primitives in a single module and reuse those primitives in your concrete repositories, but to not expose those primitives to anyone else but the implementation of the repositories - simply because they are not needed anywhere else and may confuse a drunk programmer if publicly exposed.
Finally, perhaps it would also be beneficial not to think about the Domain Layer, Application Layer, Data Layer or View-Models Layer in too strict a manner. Please read this. Packaging your software modules by their real-world meaning / purpose (or feature) is a bit better than packaging them based on an unnatural, hard-to-understand, hard-to-explain-to-a-5-year-old criterion - that is, packaging them by layer.
Well, me, I would map the ViewModel into Model objects and use those in my repositories for reading / writing purposes. As you may know, there are several tools you can use for this; in my personal case I use AutoMapper, which I found very easy to implement (see the sketch after these options).
Try to keep the dependency between the Web layer and the Repository layer as loose as possible: repositories should talk only to the model, and your web layer should talk only to your view models.
One option is to use DTOs in a service and automap those objects in the web layer (it may well be a one-to-one mapping); the disadvantage is that you can end up with a lot of boilerplate code, and the DTOs and view models can feel duplicated.
Another option is to return partially hydrated objects from your model, expose those objects as DTOs, and map those objects to your view models. This solution can be a little obscure, but you can make exactly the projections you want and return only the information you need.
Or you can get rid of the view models, expose the DTOs in your web layer, and use them as view models: less code, but a more coupled approach.
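Since AutoMapper keeps coming up, here is roughly what that mapping looks like (AutoMapper 4.2+ instance API; Order and OrderViewModel are illustrative):

// One-time configuration: properties with matching names map automatically.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Order, OrderViewModel>();
});
var mapper = config.CreateMapper();

// Per-request usage: map a model (or DTO) to its view model.
OrderViewModel vm = mapper.Map<OrderViewModel>(order);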
I kind of agree with Pedro here: using an application service layer could be beneficial. If you are aiming for an MVVM type of implementation, I would advise creating a Model class that is responsible for holding the data retrieved using the service layer. Mapping the data with AutoMapper is a really good idea if your entities, DTOs and models are named consistently (so you don't have to write a lot of manual mappings).
In my experience, using your entities/POCOs in view models to display data will result in big balls of mud. Different views have different needs, and each will add pressure to add more properties to an entity, slowly making your queries more complex and slower.
If your data is not changing that often, you might want to consider introducing (SQL/database) views that transfer some of the heavy lifting to the database (where it is highly optimized). EF handles database views fairly well. Then retrieving the entity and mapping the data (from the views) to the model or DTO becomes fairly straightforward.
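A sketch of that database-view idea in EF Core terms (EF Core 3+; the view name and entity type are hypothetical):

// Map a keyless entity to an existing SQL view; EF queries it like a table
// while the heavy joins stay in the database.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<ProductSummary>()
        .HasNoKey()
        .ToView("vw_ProductSummaries");
}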

EF with POCO + WCF + WPF. Reuse POCO classes on client or use DTOs?

We are developing a 3-tier application with a WPF client, which communicates through WCF with the BLL. We use EF to access our database.
We have been using the default EntityObject code generator of EF, but had lots of problems and serialization issues when sending those objects over the wire, and when processing and reattaching them in the BLL.
We are about to switch to the POCO template and rewrite the data access and the communication parts of our app (we are hoping to get a cleaner architecture and less "magic code" that way).
My question is whether it is a good idea to reuse the POCO classes on the client side? Or should we create separate DTO classes? Even if they would be identical to the POCO entity classes? What are the pros/cons of the two approaches?
Definitely use DTOs + AutoMapper. Otherwise you'll have tons of problems with the DataContractSerializer while using WCF, due to circular dependencies (especially problematic with navigation properties). Even if you omit DTOs initially, you'll be forced to use them later on due to the problems mentioned above. So I would advise using proper DTOs for each tier.
Also your tier specific models will carry different attributes. You may also need to modify (i.e. specialize) the data that you carry up in each tier. So if your project is large enough (or have the prospect to be so), use DTOs with proper naming and place them in proper locations (i.e. not all in the same assembly).
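A minimal sketch of the kind of flattening this implies (hypothetical types):

// Entity graph: Order and Customer reference each other, which trips up
// DataContractSerializer unless cyclic-reference handling is configured.
public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Order> Orders { get; set; }
}

// Wire DTO: flat, no cycles, carries only what this tier needs.
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}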
I am now working on a similar issue. Here is what I have done.
I created the following assemblies:
SF.Contracts - just defines ServiceContracts and DataContracts. Obviously, all DataContracts may be used like POCO classes in EF (but I don't use T4 or another generator - all POCO classes and the DataContext are written manually, because I have to work against a very bad database).
SF.
SF.DataAccessObjects - in this assembly I implement my EDMX and DataContext.
SF.Services - implementation of the WCF services.
So a large number of the select WCF methods have the following signature and implementation:
public List<Vulner> VulnerSelect(int[] idList = null, string[] navigationPropertiesList = null)
{
    IQueryable<Vulner> query = from vulner in _businessModel.DataModel.VulnerSet
                               select vulner;

    // Eager-load each requested navigation property.
    if (navigationPropertiesList != null)
    {
        foreach (var navigationProperty in navigationPropertiesList)
            query = ((ObjectQuery<Vulner>)query).Include(navigationProperty);
    }

    if (idList != null)
        query = query.Where(p => idList.Contains(p.Id));

    return query.ToList();
}
and you can use this method like this:
WCFproxy.VulnerSelect(new[]{ 1, 2, 3 }, new[]{ "SecurityObjects", "SecurityObjects.Problem" });
This way you have no problems with serialization, navigation properties, etc., and you can clearly indicate which navigation properties must be loaded.
P.S.: sorry for my bad English :)
I'd say use DTOs.
Circular dependencies and large object graphs can be a serious problem causing either errors or far too much serialised traffic. There's just way too much noise on an ORM controlled object to send it down the line.
I use a service layer to access my domain objects and use LINQ extensively, but I always return DTO objects back to the client.

nHibernate, an n-Tier solution + request for advice

I'm getting the chance to develop a mildly complex project and have been investigating the various approaches that I can use to tackle it. Typically I would have run with the traditional 3-tier approach, but after spending some time looking around at the various options, I've got an inkling that some kind of ORM might be a better fit, and I'm considering nHibernate. However, I'm looking for some guidance on implementing nHibernate and, more specifically, how I would structure my BL and DAL in conjunction with it.
With nHibernate I would create my objects (or DTOs?) and use nHibernate methods for my CRUD interactions, all in my DAL. But what I can't get my head around is that the objects defined in the DAL would probably be better situated within the BL, i.e. where validation and other stuff can be performed easily, and I would just use the DAL from the various ObjectFactories / ObjectRepositories. Unfortunately, through the many articles I've read this seems to be either unmentioned or skirted over, and I'm a tad confused.
What is the more accepted or easier method of implementation when using nHibernate in a 3 Tier system? Alternatively, what is the conventional method of exposing objects through the business layer from the data layer to the presentation?
My personal experience with nHibernate has led me to decide that the data access layer becomes so thin it never has made any sense to me to separate it from the business logic. Much of your data access code is already separated into xml files (or various other distinctive methods like Fluent nHibernate) and since joins are handled almost transparently your queries using criteria objects are rarely more than a few lines.
I suspect you're overthinking this. nHibernate is basically a pretty simple tool; what it basically does is manage the serialization of your records in your database to and from similarly structured objects in your data model. That's basically it. Nothing says you can't encapsulate your Hibernate objects in Business Layer objects for validation; that's perfectly fine. But understand that the operations of validation and serialization are fundamentally different; Hibernate manages the serialization component, and does it quite nicely. You can consider the Hibernate-serializable objects as effectively "atomic".
Basically, what you want is this: nHibernate IS your Data Access Layer. (You can, of course, have other methods of Data Access in your Data Access Layer, but if you're going to use Hibernate, you should keep to the basic Hibernate data design, i.e. simple objects that perform a relatively straightforward mapping of record to object.) If your design requires that you use a different design (deeply composited objects dependent upon multiple overlapping tables) that doesn't map well into Hibernate, you might have to abandon using Hibernate; otherwise, just go with a simple POCO approach as implied by nHibernate.
I'm a fan of letting the architecture emerge, but this is what my starting architecture would look like on a typical n-tier ASP.NET MVC project if I were starting it today using NHibernate.
First off, I would try to keep as much domain code out of the controller as possible. Therefore, I would create a service layer / facade over the business layer that the controller (or code-behind) makes calls to. I would split my objects into two types: 1) objects with business behavior that are used on the write side, and 2) ViewModel / DTO objects that are used for displaying data and taking the initial data entry. These DTOs would have all of the view-specific concerns like simple validation attributes, etc. The DTOs could have their own NHibernate mappings, or they could be projected using NHibernate's AliasToBean feature (sketched below). They would be mapped to business objects once they get past the controller in operations.
As far as the Data Access layer goes, I would probably would use NHibernate directly in the service layer. I would not use the repository pattern unless I knew that I had to be able to swap out the ORM. NHibernate is already a persistence abstraction. Putting a repository over it makes you give up a lot of features.
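For reference, the AliasToBean projection mentioned above looks roughly like this (NHibernate criteria API; OrderDto and the property names are illustrative):

// Project two mapped properties straight into a DTO; the transformer matches
// the projection aliases to OrderDto property names.
var summaries = session.CreateCriteria<Order>()
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("Id"), "Id")
        .Add(Projections.Property("OrderDate"), "OrderDate"))
    .SetResultTransformer(Transformers.AliasToBean<OrderDto>())
    .List<OrderDto>();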
My 2 cents (since there is no one-answer-fits-all):
The DAL should ONLY be responsible for data retrieval and persistence.
You can have a library containing your Model (objects), which are filled by the DAL but loosely coupled (in theory, you should be able to write a new DAL using another technology and plug it in instead of the NHibernate one, even if you are never going to).
For client <-> BL talk, I would seriously advise views/DTOs to avoid coupling the model to the client (trust me, I'm cleaning up an application like this and it's hell).
Anyway, I'm talking about the setup we are using here, which follows this structure:
client (winforms and web) <-> View/Presenter <-> WCF Services using messages <-> BL <-> DAL
I have NHibernate-based apps in production, and while it's better than most DALs, I'm at the point where I could no longer recommend that anyone use NHibernate. The sophistication required to work with the session in any advanced application is just absurd. For simple apps NHibernate is trivial; for enterprise applications the complexity is off the charts.
At this point I've come to the decision to solve data access with 3 different choices depending on scope: a document database (specifically Raven, currently) for full-scale applications, LinqToSql for medium amounts of data access, and for trivial access I'm actually using raw ADO.NET connections, with great success.
For reference, these statements come after I've spent 2+ years of development time using NHibernate, and every time I've felt like I understood NHibernate fully, I would run into some new limitation or giant monkey wrench I had to deal with. It also led me to realize I had started designing applications around NHibernate, when one of my number one reasons for using an ORM is to not have my applications' design be dictated by the database.
Not having to deal with session management with the complexity of NHibernate has been one of the largest boons to me for moving to RavenDB. With Raven you have very little need to manage the session except when you're doing extreme performance optimization or working with batch actions.

What's the best way to set up data access for an ASP.NET MVC project?

I am starting a new ASP.NET MVC project to learn with, and am wondering what the optimal way is to set up the project(s) to connect to a SQL Server for the data. For example, let's pretend we have a Product table and a Product object I want to use to populate data in my view.
I know somewhere in here I should have an interface that gets implemented, etc but I can't wrap my mind around it today :-(
EDIT: Right now (i.e. the current, poorly coded version of this app) I am just using plain old SQL Server (2000, even) with only stored procedures for data access, but I would not be averse to adding an extra layer of flexibility for using LINQ to SQL or something.
EDIT #2: One thing I wanted to add: I will be writing this against a V1 of the database, and I will need to be able to let our DBA re-work the database and give me a V2 later. So it would be nice to only have to change a few small things for what the database doesn't provide now but will later, rather than having to re-write a whole new DAL.
It really depends on which data access technology you're using. If you're using Linq To Sql, you might want to abstract away the data access behind some sort of "repository" interface, such as an IProductRepository. The main appeal for this is that you can change out the specific data access implementation at any time (such as when writing unit tests).
I've tried to cover some of this here:
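A minimal sketch of that IProductRepository idea (the member names are illustrative):

// The rest of the application codes against the interface; the LINQ to SQL
// implementation (or an in-memory fake for unit tests) is swapped in behind it.
public interface IProductRepository
{
    Product GetById(int id);
    IList<Product> GetAll();
    void Save(Product product);
}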
I would check out Rob Conery's videos on his creation of an MVC store front. The series can be found here: MVC Store Front Series
This series dives into all sorts of design-related subjects as well as coding/testing practices to use with MVC and other projects.
In my site's solution, I have the MVC web application project and a "common" project that contains my POCOs (plain ol' C# objects), business managers and data access layers.
The DAL classes are tied to SQL Server (I didn't abstract them out) and return POCOs to the business managers that I call from my controllers in the MVC project.
I think that Billy McCafferty's S#arp Architecture is a quite nice example of using ASP.NET MVC with a data access layer (using NHibernate as default), dependency injection (Ninject atm, but there are plans to support the CommonServiceLocator) and test-driven development. The framework is still in development, but I consider it quite good and stable. As of the current release, there should be few breaking changes until there is a final release, so coding against it should be okay.
I have done a few MVC applications and I have found a structure that works very nicely for me. It is based upon Rob Conery's MVC Storefront Series that JPrescottSanders mentioned (although the link he posted is wrong).
So here goes - I usually try to restrict my controllers to only contain view logic. This includes retrieving data to pass on to the views and mapping from data passed back from the view to the domain model. The key is to try and keep business logic out of this layer.
To this end I usually end up with 3 layers in my application. The first is the presentation layer - the controllers. The second is the service layer - this layer is responsible for executing complex queries as well as things like validation. The third layer is the repository layer - this layer is responsible for all access to the database.
So in your products example, this would mean that you would have a ProductRepository with methods such as GetProducts() and SaveProduct(Product product). You would also have a ProductService (which depends on the ProductRepository) with methods such as GetProductsForUser(User user), GetProductsWithCategory(Category category) and SaveProduct(Product product). Things like validation would also happen here. Finally your controller would depend on your service layer for retrieving and storing products.
You can get away with skipping the service layer but you will usually find that your controllers get very fat and tend to do too much. I have tried this architecture quite a few times and it tends to work quite nicely, especially since it supports TDD and automated testing very well.
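As a sketch of that controller / service / repository split (assuming the GetProducts() repository method named above; all types are illustrative):

// Service layer: complex queries and validation live here, keeping controllers thin.
public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public IList<Product> GetProductsWithCategory(Category category)
    {
        // Filtering logic sits in the service, not the controller.
        return _repository.GetProducts()
                          .Where(p => p.CategoryId == category.Id)
                          .ToList();
    }
}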
For our application I plan on using LINQ to Entities, but as it's new to me there is the possibility that I will want to replace it in the future if it doesn't perform as I would like, and use something else like LINQ to SQL or NHibernate, so I'll be abstracting the data access objects into an abstract factory so that the implementation is hidden from the application.
How you do it is up to you; as long as you choose a proven and well-known design pattern for the implementation, I think your final product will be well supported and robust.
Use LINQ. Create a LINQ to SQL file and drag and drop all the tables and views you need. Then when you call your model all of your CRUD level stuff is created for you automagically.
LINQ is the best thing I have seen in a long long time. Here are some simple samples for grabbing data from Scott Gu's blog.
LINQ Tutorial
I just did my first MVC project and I used a Service-Repository design pattern. There is a good bit of information about it on the net right now. It made my transition from Linq->Sql to Entity Framework effortless. If you think you're going to be changing a lot put in the little extra effort to use Interfaces.
I recommend Entity Framework for your DAL/Repository.
Check out the Code Camp Server for a good reference application that does this very thing, and as @haacked stated, abstract that goo away to keep them separated.
I think you need an ORM - for example, Entity Framework (code first).
You can create some classes for your model, use those models for your logic and views, and map them to db (v1).
When the DBA gives you the new db (v2), you only change the mapping configuration (assuming v1 and v2 are both relational databases: SQL Server, MySQL, Oracle...). If db (v1) is relational and db (v2) is NoSQL (Mongo, Redis, Couchbase...), that won't work.
You may also need to do some find and replace.
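A sketch of that idea with EF6-style fluent configuration (the class, table and column names are illustrative):

// The Product class used by logic and views stays stable; only this mapping
// is edited when the DBA ships the v2 schema.
public class ProductMap : EntityTypeConfiguration<Product>
{
    public ProductMap()
    {
        ToTable("Products");                                  // v2: table renamed? change here
        Property(p => p.Name).HasColumnName("ProductName");   // v2: column renamed? change here
    }
}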
