As everybody knows, object-oriented languages provide reusability features.
I have a simple three tier application:
presentation layer
business layer is designed to reap the benefits of reusability
data layer is a dumb ADO.NET library (by dumb I mean that it has no business logic in it)
I have been struggling to achieve code reusability in this data layer. Here is the pseudo pattern used in one of my data layer methods:
create connection object
open connection to database
create a transaction object
begin transaction
create command object
execute it
.
.
.
.
create nth command object
execute it
commit transaction
close connection
In reality this code swells to around 300 to 400 lines and becomes impossible to read.
In this series of command executions we are selecting from, inserting into, and updating different tables. If it were not inside a transaction, I would have separated this code into the respective classes.
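Concretely, such a method looks roughly like this in plain ADO.NET (the table, column, and type names here are made up purely for illustration):

using System.Data.SqlClient;

public void SaveOrderWithItems(Order order)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            try
            {
                // 1st command: insert the order header
                using (var insertOrder = new SqlCommand(
                    "INSERT INTO Orders (CustomerId, OrderDate) VALUES (@customerId, @orderDate)",
                    connection, transaction))
                {
                    insertOrder.Parameters.AddWithValue("@customerId", order.CustomerId);
                    insertOrder.Parameters.AddWithValue("@orderDate", order.OrderDate);
                    insertOrder.ExecuteNonQuery();
                }

                // ... 2nd to nth commands: selects/inserts/updates on other tables ...

                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}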
There is another spaghetti pattern I recently encountered:
business layer method1 calls datalayer method to update column1
business layer method2 calls datalayer method to update column2
business layer method3 calls datalayer method to save the entire result in the table by updating it.
This pattern emerged when I was trying to reap the benefits of reusability: these methods are called from different locations, so they were reused. However, if I were to write a simple SQL query without keeping reusability in mind, there would have been a single call to the database.
So, is there any pattern or technique by which reusability can be achieved in the data layer?
Note:
I don't want to use any stored procedures, despite the precompilation benefits they offer, as they tend to tie the data layer to a particular database.
I am also not currently considering any ORM solutions here, only plain ADO.NET.
My reasons for not considering any ORMs:
Learning curve
Avoiding tight coupling to a specific ORM, which I think can be mitigated by restricting the ORM code to the data layer itself.
I checked the internet some 6 months ago and there were only two popular or widely used ORM solutions available then: Entity Framework and NHibernate. I chose Entity Framework to start learning (for some reasons I will link later: link1, link2; besides that, I had a feeling that working with EF would be easy as it is provided by Microsoft).
I used this Microsoft-recommended book. In this book there were three inheritance mapping techniques, as I understood them: TPT, TPH, and TPC (TPC I never tried).
When I checked the SQL generated by Entity Framework, it was very ugly and was creating some extra columns (ids, some ugly CASE statements, etc.); it seemed that for a highly transactional system the ORM solution could not be applied. By a highly transactional system I mean on the order of 1000 insertions happening every single minute, where the database continues to swell in size and reaches somewhere near 500 to 600 GB at some point in the future.
I agree with the comments to your question; you should really avoid re-inventing the wheel here and go with an ORM, if at all possible. Speaking from experience, you're going to end up writing code and solving problems that have long ago been solved and it will probably take you more time in the long run. However, I understand that sometimes there are constraints that don't permit the use of an ORM.
Here are some articles that I have found helpful:
This first article is an old one but it explains the different options that you have for data access design patterns. It has a few different patterns and only you can really decide which one will be best for you but it sounds like you might want to look at the Repository Pattern:
http://msdn.microsoft.com/en-us/magazine/dd569757.aspx
This next article is the first in a series that talks about how to implement a repository pattern with a data mapper which, based on your example above, will probably help to reduce some of your redundant code.
http://blogsprajeesh.blogspot.com/2010/02/data-access-layer-in-c-using-repository.html
Finally, depending on how you implement your data access pattern, you may find the template pattern and generics helpful. The following article talks about that a little bit and you can glean some helpful information from it:
http://www.c-sharpcorner.com/UploadFile/rmcochran/elegant_dal05212006130957PM/elegant_dal.aspx
Without knowing more about your project, it's hard to say, exactly, which pattern will best suit your needs. However, using a combination of the Unit of Work pattern with repositories and data mappers will probably help you to reuse some code and manage your data access.
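As a rough illustration of the Unit of Work plus repository idea in plain ADO.NET (since you have ruled out ORMs), the unit of work can own the connection and transaction while small per-table repositories borrow them. All class, table, and column names below are invented for the example, not taken from the articles:

using System;
using System.Data.SqlClient;

// The unit of work owns the connection and transaction; repositories only borrow them,
// so each table's SQL can live in its own small class and still share one transaction.
public class AdoUnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public AdoUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public SqlConnection Connection { get { return _connection; } }
    public SqlTransaction Transaction { get { return _transaction; } }

    public void Commit() { _transaction.Commit(); }

    public void Dispose()
    {
        _transaction.Dispose();
        _connection.Dispose();
    }
}

public class CustomerRepository
{
    private readonly AdoUnitOfWork _uow;

    public CustomerRepository(AdoUnitOfWork uow) { _uow = uow; }

    public void UpdateName(int customerId, string name)
    {
        using (var cmd = new SqlCommand(
            "UPDATE Customers SET Name = @name WHERE Id = @id",
            _uow.Connection, _uow.Transaction))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@id", customerId);
            cmd.ExecuteNonQuery();
        }
    }
}

// Usage: several repositories participate in the same transaction.
// using (var uow = new AdoUnitOfWork(connectionString))
// {
//     new CustomerRepository(uow).UpdateName(1, "Acme");
//     new OrderRepository(uow).MarkShipped(42);   // another hypothetical repository
//     uow.Commit();
// }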
What I'm not seeing is your model layer.
You have a business layer, and a DAO layer, but no model.
business layer method1 calls datalayer method to update column1
business layer method2 calls datalayer method to update column2
business layer method3 calls datalayer method to save the entire result in the table by updating it.
Why isn't this:
business layer updates model/domain object A
business layer updates model/domain object A in a different way
business layer persists model/domain to database through data layer.
This way you get re-use, and avoid repeated loops back and forth to the database.
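In code, that looks roughly like this (class and method names are purely illustrative):

// Business layer works on the domain object in memory...
Order order = orderRepository.GetById(orderId);   // one read
order.ApplyDiscount(0.10m);                       // "update column1" becomes behaviour on the object
order.MarkAsPriority();                           // "update column2" becomes behaviour on the object

// ...and persists the whole thing once through the data layer.
orderRepository.Save(order);                      // one write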
Ultimately it sounds like your business layer knows FAR too much of the database data model. You need business objects, not just business methods.
Related
I have two separate databases. Database A is small and contains 6 tables and some stored procedures, database B is larger and contains 52 tables, 159 views, and some stored procedures. I have been tasked with creating an application that does operations on both of these databases. There is already code that has been written that accesses these databases but it is not testable and I would like to create the DAL code from scratch. So I made a code first Entity Framework connection and let it create all of the POCOs for me as well as the Context classes. Now I have 3 DatabaseContext classes. 1 for database A which is just the 6 separate tables. 1 for the tables from database B and 1 for the views from database B. I split the Context class for database B into 2 separate context classes because the tables and the views are used in 2 different scenarios.
I have created 2 projects in a single solution. DomainModel and DataAccessLayer. The DomainModel project holds all of the POCOs which the Entity Framework code first generated for me. DataAccessLayer is where the DatabaseContexts are and where I would like to create the code which other projects can use to access the database. My goal being to make a reusable and testable data access layer code.
The QUESTION is: What do I create in the DataAccessLayer project to make the DAL reusable in other projects that want to be able to talk to the database in an abstract way, and what is the best way to structure it?
What I have done:
In my search on the internet I have seen that people recommend the repository pattern (sometimes with UnitOfWork) and it seems like that is the way to go, so I created an interface called IGenericRepository and an implementation called GenericEntityFrameworkRepository. The interface has methods such as GetByID, Insert, Delete, and Update. The implementation takes a DbContext as a parameter in the constructor and implements all of the methods. The next step might be to create specific implementations for the different tables and views but that is going to be a very tedious thing to do since there are so many tables and views.
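For reference, a stripped-down version of the interface and implementation described above might look like this (a sketch only; the real code would add SaveChanges handling, filtering overloads, and so on):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IGenericRepository<TEntity> where TEntity : class
{
    TEntity GetByID(object id);
    IEnumerable<TEntity> GetAll();
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(object id);
}

public class GenericEntityFrameworkRepository<TEntity> : IGenericRepository<TEntity>
    where TEntity : class
{
    private readonly DbContext _context;
    private readonly DbSet<TEntity> _set;

    public GenericEntityFrameworkRepository(DbContext context)
    {
        _context = context;
        _set = context.Set<TEntity>();
    }

    public TEntity GetByID(object id) { return _set.Find(id); }
    public IEnumerable<TEntity> GetAll() { return _set.ToList(); }
    public void Insert(TEntity entity) { _set.Add(entity); }
    public void Update(TEntity entity) { _context.Entry(entity).State = EntityState.Modified; }
    public void Delete(object id) { _set.Remove(_set.Find(id)); }
}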
I have also read that Entity Framework IS the DAL. Then it might be enough to just create another project that holds the business logic and returns IEnumerables. But that seems like it will be hard to test.
What I am trying to achieve is a well structured and testable foundation to access our database that can be tested thoroughly and expanded as other projects start to require other functionality to the database.
Design and architecture is all about trade-offs. Be careful when you want to consider abstracting your data access for purposes of reuse. Within an application this leads to overly complex and poorly performing solutions to what can be a very simple and fast solution.
For instance, if I want to build a business domain layer as a set of services to be consumed by a number of different applications (web applications and services/APIs that will serve mobile apps or third-party interests), then that domain layer becomes a boundary that defines what information will come in and what information will go out. Everything in and out of that abstraction should be DTOs that reflect the domain I am making available via the actions I expose. Within that boundary, EF should be leveraged to perform the projection needed to produce the relevant DTOs, or to update entities based on the specific details passed in (after verification). The trade-off is that every consumer will need to make do with this common access point, so there is little wiggle room to customize what comes out unless you build it in. This is preferable to having everything do its own thing, accessing the database directly and running custom queries or calling stored procedures.
What I see most often, though, is that within a single solution, developers want to abstract the system so it doesn't "know" that the data domain is provided by EF. The application passes around POCO entities or DTOs and they implement things like generic repositories with GetById, GetAll, Update, Upsert, etc. This is a very, very poor design decision because it means that your data domain cannot take advantage of things like projection, pagination, custom filtering, ordering, eager loading of related data, etc. Abstracting the EF DbContext is certainly a valid objective for enabling unit testing, but I strongly recommend avoiding the performance pitfall of a Generic Repository pattern. IMO attempting to abstract away EF just for the sake of hiding it or making it swappable results in a poorly performing and/or overly complex mess.
Take for example a typical GetAll() method.
public IEnumerable<T> GetAll()
{
return _context.Set<T>().ToList();
}
What happens if you only want orders from the last 3 months?
var orders = orderRepository.GetAll()
.Where(x => x.OrderDate >= startDate).ToList();
The issue here is that GetAll() will always return all rows. If you have 4 million orders you can appreciate that is not desirable.
You could make an OrderRepository that extends Repository to implement a method like GetOrdersAfter(DateTime), but soon, what is the point of the Generic Repository? The next option would be to pass something like an Expression<Func<TEntity, bool>> where clause into the GetAll() method. There are plenty of examples of that out there. However, doing that leaks EF-isms into the consuming code. The Where clause expression has to conform to EF rules. For instance, passing order => order.SomeUnmappedProperty == someValue would cause the query to fail, as EF won't be able to convert that down to SQL. Even if you're OK with that, the GetAll method will need to start looking like:
IEnumerable<TEntity> GetAll(Expression<Func<TEntity, bool>> whereClause = null,
    IEnumerable<OrderByClause> orderByClauses = null,
    IEnumerable<string> includes = null,
    int? pageNumber = null, int? pageSize = null)
or some similar monstrosity to handle filtering, ordering, eager loading, and pagination, and this doesn't even cover projection. For that you end up with additional methods like:
IEnumerable<TDTO> GetAll<TDTO>(Expression<Func<TEntity, bool>> whereClause = null,
    IEnumerable<OrderByClause> orderByClauses = null,
    IEnumerable<string> includes = null,
    int? pageNumber = null, int? pageSize = null)
which will call the other GetAll then perform a Mapper.Map to return a desired DTO/ViewModel that is hopefully registered in a mapping dependency. Then you need to consider async vs. synchronous flavours of the methods. (or forcing all calls to be synchronous or async)
Overall my advice would be to avoid Generic Repository patterns and don't abstract away EF just for the sake of abstraction; instead, look for ways to leverage the most you can out of it. Most of the problems developers run into with performance and odd behaviour with EF stem from efforts to abstract it. They are so worried about needing to abstract it to be able to replace it if they need to, that they end up with those performance and complexity problems entirely because of the limitations imposed by the abstraction (a self-fulfilling prophecy). "Architecture" can be a dirty word when you try to prematurely optimize a solution to solve a problem you don't currently face, and probably never will. Always keep YAGNI and KISS at the forefront of all design considerations.
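To make "leverage the most you can out of it" concrete, a service method talking to EF directly can filter, project, and page in a single round trip (the DTO, property names, and paging variables here are invented):

var recentOrders = context.Orders
    .Where(o => o.OrderDate >= startDate)          // filtering happens in SQL, not in memory
    .OrderByDescending(o => o.OrderDate)           // ordering in SQL
    .Select(o => new OrderSummaryDto               // projection: only the columns you need
    {
        Id = o.Id,
        CustomerName = o.Customer.Name,            // related data without explicit eager loading
        Total = o.Lines.Sum(l => l.Quantity * l.UnitPrice)
    })
    .Skip(pageIndex * pageSize)                    // pagination in SQL
    .Take(pageSize)
    .ToList();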
I guess you are torn between selecting the right architecture for a given solution. Software must be created to meet the business needs, so depending on the business and the complexity of the domain, we have to choose the right architecture.
Microsoft elaborates on this in a detailed guide; you will find it here:
https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/
If you want some level of abstraction between your data and the BL, then an "N-Layer" architecture is not the best solution. I am not saying N-Layer is entirely outdated, but there are more suitable solutions for this problem.
And if you are using EF or EF Core, then there is no need to implement a Repository, because EF itself already contains a unit of work and repositories.
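That is, the DbContext already acts as the unit of work and each DbSet<T> as a repository, so in many cases you can use them directly (context and entity names below are illustrative):

using (var context = new ShopContext())        // DbContext = unit of work
{
    var customer = context.Customers.Find(id); // DbSet<Customer> = repository
    customer.Name = "New name";

    context.Orders.Add(new Order { CustomerId = id });

    context.SaveChanges();                     // both changes are committed together
}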
Clean Architecture introduces a higher level of abstraction between layers. It abstracts away your data layer and makes it reusable across different projects or solutions. Clean Architecture is a big topic that I can't describe here in detail, but I learned the subject from this:
Jason Taylor- Clean Architecture
https://www.youtube.com/watch?v=dK4Yb6-LxAk
https://github.com/jasontaylordev/CleanArchitecture
We are using .net C# 4.0, VS 2010, EF 4.1 and legacy code in this project we are working on.
I'm working on a WinForms project where I have made the decision to start using Entity Framework 4.1 for accessing an MS SQL db. The code base is quite old and we have an existing data layer that uses data adapters. These data adapters are used all over the place (in web apps and WinForms apps). My plan is to replace the old db access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
So my idea is to more or less combine EF with the legacy data access layer and slowly replace the legacy data layer with a more modern take on things using EF. So for now we need to use both EF and the legacy db access code.
What I have done so far is add a project containing the edmx file and context. The edmx is generated using the database-first approach. I have also added another project that contains the POCO classes (by using the ADO.NET POCO Entity Generator). I have more or less followed Julia Lerman's approach in her book "Programming Entity Framework" on how to split the model and the generated POCO classes. The database model has been set for years and it's not an option to change the tables and the relationships, triggers, stored procedures, etc., so I'm basically stuck with the db model as it is.
I have read about the Repository pattern and Unit of Work and I kind of like the patterns, but I struggle to implement them when I have both EF and the legacy db access code to deal with, especially when I don't have the time to replace all of the legacy db access code with a pure EF implementation. In a perfect world I would start all over again with a fresh take on the data model, but that is not an option here.
Are the Repository and Unit of Work patterns the way to go here? In order to use the POCO classes in my business layer, I sometimes need to use both EF and the legacy db code to populate my POCO classes. In other words, I can sometimes use EF to retrieve part of the data I need and then use the old db access layer to retrieve the rest, and then map the data to my POCO classes. When I want to update some data I need to pick data from the POCO classes and use the legacy data access code to store the data in the database. So I need to map the data retrieved from the legacy data access layer to my POCO classes when I want to display the data in the UI, and vice versa when I want to save data to the database.
To complicate things we store some data in tables that we don't know the name of before runtime (Please don't ask me why:-) ). So in the old db access layer, we had to create sql statements on the fly where we inserted the table and column names based on information from other tables.
I also find that the relationships between the POCO classes are somewhat too database-centric. In other words, I feel that I need a more simplified domain model to work with. Perhaps I should create a domain model that fits the bill and then use the POCO classes as "DAOs" to populate the domain model classes?
How would you implement this using the Repository pattern and Unit of Work pattern? (if that is the way to go)
Alarm bells are ringing for me! We tried to do something similar a while ago (only with NHibernate, not EF4). We had several problems running ADO.NET alongside an ORM - database concurrency being a big one.
The database model has been set for years and it's not an option to change the tables and the relationships, triggers, stored procedures, etc., so I'm basically stuck with the db model as it is.
Yep. Same thing! The problem was that our stored procs contained a lot of business logic and weren't simple CRUD procs so keeping the ORM updated with the various updates performed by a stored procedure was not easy at all - Single Responsibility Principle - not a good one to break!
My plan is to replace the old db access code with EF over time and get rid of the tight coupling between UI layers and data layer.
Maybe you could decouple without the need for an ORM - how about putting a service/facade layer in front of your UI layer to coordinate all interactions with the underlying domain and hide it from the UI?
If your database is 'king' and your app is highly data driven I think you will always be fighting an uphill battle implementing the patterns you mention.
Embrace ado.net for this project - use EF4 and DDD patterns on your next green field proj :)
EDMX + POCO class generator results in EFv4 code, not EFv4.1 code, but you don't have to bother with these details. EFv4.1 just offers a different API which does exactly the same thing (it is only a wrapper around the EFv4 API).
Depending on the way you use datasets, you can run into some very hard problems. Datasets are a representation of the change set pattern. They know what changes were made to the data and they are able to store just those changes. EF entities know this only if they are attached to the context which loaded them from the database. Once you work with detached entities, you must make a big effort to tell EF what has changed - especially when modifying relations (detached entities are a common scenario in web applications and web services). For those purposes EF offers another template called self-tracking entities, but they have other problems and limitations (for example missing lazy loading, you cannot apply changes when an entity with the same key is attached to the context, etc.).
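For example, with the DbContext API a detached entity coming back from a client has to be attached and explicitly marked as modified before SaveChanges will persist anything (the context and entity names here are made up):

public void UpdateDetachedOrder(Order detachedOrder)
{
    using (var context = new ShopContext())
    {
        // EF has no idea what changed while the entity was detached,
        // so we attach it and mark the whole entity as modified.
        context.Orders.Attach(detachedOrder);
        context.Entry(detachedOrder).State = EntityState.Modified;

        // Changed relationships (added/removed children) still have to be
        // reconciled by hand - this is the "big effort" mentioned above.
        context.SaveChanges();
    }
}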
EF also doesn't support several features used by datasets - for example unique keys and batch updates. It's funny that newer MS APIs usually solve some pains of the previous APIs but at the same time provide far fewer features than the previous APIs, which introduces new pains.
Another problem can be performance - EF is slower than direct data access with datasets and has higher memory consumption (and yes, there are some memory leaks reported).
You can forget about using EF for accessing tables which you don't know at design time. EF doesn't allow any dynamic behavior. Table names and the type of database server are fixed in the mapping. Other problems can come from the way you use triggers - ORM tools don't like triggers, and EF has limited features when working with database-computed values (the possibility to fill a value in the database or in the application is mutually exclusive).
Filling POCOs from EF + datasets will most likely not be possible when using only EF. EF has some allowed mapping patterns, but the possibilities for mapping several tables to a single POCO class are extremely limited and constrained (if you want those tables to be editable). If you mean just loading one entity from EF and another entity from a data adapter and making a reference between them, you should be OK - in this scenario the repository sounds like a reasonable pattern, because the purpose of the repository is exactly this: load or persist data. Unit of Work can also be useful, because you will most probably want to reuse a single database connection between EF and the data adapters to avoid a distributed transaction when saving changes. The UoW will be the place responsible for handling this connection.
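A rough sketch of the "repository hides both sources" idea (every name here is invented; the shared connection/transaction handling discussed above would live in the unit of work that creates both the context and the adapter):

public class CustomerRepository
{
    private readonly ShopEntities _context;            // EF context (hypothetical)
    private readonly LegacyCustomerAdapter _adapter;    // existing data-adapter based code (hypothetical)

    public CustomerRepository(ShopEntities context, LegacyCustomerAdapter adapter)
    {
        _context = context;
        _adapter = adapter;
    }

    // Callers get a single domain object and never know which parts
    // came from EF and which came from the legacy DAL.
    public CustomerModel GetCustomer(int id)
    {
        var entity = _context.Customers.Single(c => c.Id == id);  // loaded via EF
        var contactRows = _adapter.GetContactRows(id);            // loaded via the legacy adapter

        var model = new CustomerModel { Id = entity.Id, Name = entity.Name };
        // map contactRows onto model.Contacts here (mapping code elided)
        return model;
    }
}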
EF mapping is related to database design - you can introduce some object-oriented modifications, but EF is still closely dependent on the database. If you want to use a more advanced domain model, you will probably need separate domain classes filled from EF and datasets. Again, it will be the responsibility of the repository to hide these details.
From what we have implemented so far, I have learned the following things.
POCO and self-tracking objects are difficult to deal with; if you do not have a clear understanding of what goes on inside, there will be a number of unexpected behaviors in code that may have worked well in your previous project.
Changing patterns is not easy. So far we had been managing simple CRUD without the Unit of Work and Identity Map patterns; a lot of the legacy code we wrote in the past does not take these new patterns into account, and its logic will not work correctly.
In our previous code, we were simply using transactions and single insert/update/delete statements sent directly to the database, assuming server-side transactions would take care of all operations.
In such conditions, we were dealing directly with IDs all the time; newly generated IDs were immediately available after a single insert statement. However, this is not the case with EF.
In EF, we are not dealing with IDs, we are dealing with navigation properties, which is a huge change from earlier ADO.NET programming methods.
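A small illustration of the difference (the types are only examples): where the old code inserted the parent, read back SCOPE_IDENTITY() and used it in the child insert, in EF you relate the objects and let SaveChanges sort out the keys:

var order = new Order { OrderDate = DateTime.Now };
order.Lines.Add(new OrderLine { ProductId = 5, Quantity = 2 });  // no OrderId assigned by hand

context.Orders.Add(order);
context.SaveChanges();        // EF inserts the order, then the lines, wiring up the foreign key

int generatedId = order.Id;   // the identity value is only available after SaveChanges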
From our experience we found that simply swapping EF in for the earlier data access code will result in chaos. But EF + RIA Services offer you a completely new solution where you will probably get everything you need and your UI will bind to it very easily. So if you are thinking about a complete rewrite using UI + RIA Services + EF, then it is worth it, because a lot of the dependency on query management goes away automatically. You will be focusing only on business logic. But this is a big decision, and the amount of man-hours required for a complete rewrite versus just replacing EF is almost the same.
So we went the UI + RIA Services + EF way, and we started replacing one module at a time. Mostly EF will easily co-exist with your existing infrastructure, so there is no harm.
I'm getting the chance to develop a mildly complex project and have been investigating the various approaches that I can use to tackle it. Typically I would have run with the traditional 3-tier approach, but after spending some time looking at the various options I've got an inkling that some kind of ORM might be a better fit, and I'm considering NHibernate. However, I'm looking for some guidance on implementing NHibernate and, more specifically, how I would structure my BL and DAL in conjunction with it.
With NHibernate I would create my objects (or DTOs?) and use NHibernate methods for my CRUD interactions, all in my DAL. But what I can't get my head around is that the objects defined in the DAL would probably be better situated within the BL, i.e. where validation and other stuff can be performed easily, and I would just use the DAL from the various ObjectFactorys / ObjectRepositories. Unfortunately, through the many articles I've read this isn't mentioned, or is skirted over, and I'm a tad confused.
What is the more accepted or easier method of implementation when using nHibernate in a 3 Tier system? Alternatively, what is the conventional method of exposing objects through the business layer from the data layer to the presentation?
My personal experience with nHibernate has led me to decide that the data access layer becomes so thin it never has made any sense to me to separate it from the business logic. Much of your data access code is already separated into xml files (or various other distinctive methods like Fluent nHibernate) and since joins are handled almost transparently your queries using criteria objects are rarely more than a few lines.
I suspect you're overthinking this. nHibernate is basically a pretty simple tool; what it basically does is manage the serialization of your records in your database to and from similarly structured objects in your data model. That's basically it. Nothing says you can't encapsulate your Hibernate objects in Business Layer objects for validation; that's perfectly fine. But understand that the operations of validation and serialization are fundamentally different; Hibernate manages the serialization component, and does it quite nicely. You can consider the Hibernate-serializable objects as effectively "atomic".
Basically, what you want is this: nHibernate IS your Data Access Layer. (You can, of course, have other methods of Data Access in your Data Access Layer, but if you're going to use Hibernate, you should keep to the basic Hibernate data design, i.e. simple objects that perform a relatively straightforward mapping of record to object.) If your design requires that you use a different design (deeply composited objects dependent upon multiple overlapping tables) that doesn't map well into Hibernate, you might have to abandon using Hibernate; otherwise, just go with a simple POCO approach as implied by nHibernate.
I'm a fan of letting the architecture emerge, but this is what my starting architecture would look like on typical ntier asp.net mvc project if I were starting it today using NHibernate.
First off, I would try to keep as much domain code out of the controller as possible. Therefore, I would create a service layer / facade over the business layer that the controller (or code-behind) makes calls to. I would split my objects into two types: 1) objects with business behavior that are used on the write side, and 2) ViewModel / DTO objects that are used for displaying data and taking the initial data entry. These DTOs would have all of the view-specific concerns like simple validation attributes etc. The DTOs could have their own NHibernate mappings, or they could be projected using NHibernate's AliasToBean feature. They would be mapped to business objects once they get past the controller in operations.
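For reference, an AliasToBean projection straight into one of those DTOs looks roughly like this (the DTO and property names are invented):

// using NHibernate.Criterion; using NHibernate.Transform;
var summaries = session.CreateCriteria<Order>()
    .Add(Restrictions.Ge("OrderDate", startDate))
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("Id"), "Id")
        .Add(Projections.Property("OrderDate"), "OrderDate")
        .Add(Projections.Property("Total"), "Total"))
    .SetResultTransformer(Transformers.AliasToBean<OrderSummaryDto>())
    .List<OrderSummaryDto>();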
As far as the Data Access layer goes, I would probably would use NHibernate directly in the service layer. I would not use the repository pattern unless I knew that I had to be able to swap out the ORM. NHibernate is already a persistence abstraction. Putting a repository over it makes you give up a lot of features.
My 2 cents (since there is no one-answer-fits-all):
The DAL should ONLY be responsible for data retrieval and persistence.
You can have a library containing your model (objects) which are filled by the DAL but are loosely coupled (in theory, you should be able to write a new DAL using another technology and plug it in instead of the NHibernate one, even if you are not going to).
For client <-> BL communication, I would seriously advise views/DTOs to avoid coupling the model to the client (trust me, I'm cleaning up an application like this and it's hell).
Anyway, I'm talking about the setup we are using here, which follows this structure:
client (winforms and web) <-> View/Presenter <-> WCF Services using messages <-> BL <-> DAL
I have NHibernate-based apps in production and while it's better than most DALs, I'm at the point where I could never recommend anyone use NHibernate any longer. The sophistication required to work with the session to do any advanced application is just absurd. For simple apps NHibernate is trivial; for enterprise applications the complexity is off the charts.
At this point I've come to the decision to solve data access with 3 different choices depending on scope: a document database (specifically Raven currently) for full-scale applications, LinqToSql for medium amounts of data access, and raw ADO.NET connections for trivial access, which I'm using with great success.
For reference, these statements come after I've spent 2+ years of development time using NHibernate, and every time I've felt like I understood NHibernate fully I would run into some new limitation or giant monkey wrench I had to deal with. It also led me to realize I had started designing applications around NHibernate, when not having my applications' design dictated by the database is one of my biggest reasons for using an ORM in the first place.
Not having to deal with session management with the complexity of NHibernate has been one of the largest boons to me for moving to RavenDB. With Raven you have very little need to manage the session except when you're doing extreme performance optimization or working with batch actions.
I was avoiding writing what may seem like another thread on .net arch/n-tier architecture, but bear with me.
I, hopefully like others, am still not 100% satisfied with or clear on the best approach to take, given today's trends and new emerging technologies, when it comes to selecting an architecture to use for enterprise applications.
I suppose I am seeking mass community opinion on the direction and architectural implementation you would choose when building an enterprise application utilising most facets of today's .NET technology, and what direction you would take. I had better make this about me and my question, for fear of it being too vague otherwise; I would really like to hear what you guys think about improving my architecture, given the list of technologies I am about to write.
Any and all best practices and architectural patterns you would suggest are welcome and if you have created a solution before for a similar type setup, any pitfalls or caveats you may have hit or overcome.
Here is a list of technologies adopted in my latest project, yep pretty much everything except WPF :)
Smart Client (WinForms)
WCF
Used by Smart Client
ASP.NET MVC
Admin tool
Client tool
LINQ to SQL
Used by WCF
Used by ASP.NET MVC
Microsoft SQL Server 2008
Utilities and additional components to consider:
Dependency Injection - StructureMap
Exception Management - Custom
Unit Testing - MBUnit
I currently have this running in an n-tier arch. adopting a service-based design pattern utilising Request/Response (not sure of its formal name) and the Repository pattern; most of my structure was adopted from Rob Conery's Storefront.
I suppose I am more or less happy with most of my tiers (It's really just the DAL which I am a bit uneasy on).
Before I finish, these are the real questions I have faced with my current architecture:
I have a big question mark over whether I should have a custom data access layer given the use of LINQ to SQL. Should I perform LINQ to SQL directly in my service/business layer, or in a DAL in a repository method? Should you create a new instance of your DB context in each repository method call (with using())? Or one in the class constructor / through DI?
Do you believe we can truly use POCO (plain old CLR objects) when using LINQ to SQL? I would love to see some examples as I encountered problems and it would have been particularly handy with the WCF work as I can't obviously be carrying L2S objects across the wire.
Creating an ASP.NET MVC project by itself quite clearly displays the design pattern you should adopt: keep view logic in the view, have the controller call service/business methods, and of course keep your data access in the model. But would you drop the 'model' facet for larger projects, particularly where the data access is shared? What approach would you take to get your data?
Thanks for hearing me out and would love to see sample code-bases on architectures and how it is split. As said I have seen Storefront, I am yet to really go through Oxite but just thought it would benefit myself and everyone.
Added additional question in DAL bullet point. / 15:42 GMT+10
To answer your questions:
Should I perform LINQ to SQL directly in my service/business layer or in a DAL in a repository method? LINQ to SQL specifically only makes sense if your database maps 1-to-1 with your business objects. In most enterprise situations that's not the case and Entities is more appropriate.
That having been said, LINQ in general is highly appropriate to use directly in your business layer, because the LINQ provider (whether that is LINQ to SQL or something else) is your DAL. The benefit of LINQ is that it allows you to be much more flexible and expressive in your business layer than DAL.GetBusinessEntityById(id), but the close-to-the-metal code which makes both LINQ and the traditional DAL code work are encapsulated away from you, having the same positive effect.
Do you believe we can truly use POCO (plain old CLR objects) when using LINQ to SQL? Without more specific info on your POCO problems regarding LINQ to SQL, it's difficult to say.
would you drop the 'model' facet for larger projects The MVC pattern in general is far more broad than a superficial look at ASP.NET MVC might imply. By definition, whatever you choose to use to connect to your data backing in your application becomes your model. If that is utilizing WCF or MQ to connect to an enterprise data cloud, so be it.
When I looked at Rob Conery's Storefront, it looked like he is using POCOs and LINQ to SQL; however, he is doing it by translating from the LINQ to SQL created entities to POCOs (and back), which seems a little silly to me - essentially we have a DAL for a DAL.
However, this appears to be the only way to use POCOs with Linq to SQL.
I would say you should use a Repository pattern, let it hide your Linq to SQL layer(or whatever you end up using for data access). That doesn't mean you can't use Linq in the other tiers, just make sure your repository returns IQueryable<T>.
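Something along these lines (the names are illustrative):

public class ProductRepository
{
    private readonly ShopDataContext _db;   // LINQ to SQL DataContext (hypothetical)

    public ProductRepository(ShopDataContext db) { _db = db; }

    // Returning IQueryable<T> lets the caller compose further filtering/paging,
    // and the final SQL is generated only when the query is enumerated.
    public IQueryable<Product> GetProducts()
    {
        return _db.Products;
    }
}

// Caller, e.g. in a service or controller:
// var cheapProducts = repository.GetProducts().Where(p => p.Price < 10).ToList();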
Whether or not you use LINQ to SQL, it is quite common to use a separate DTO object for things like WCF. I have cited a few thoughts on this subject here: Pragmatic LINQ - but for me, the biggest is: don't expose IQueryable<T> / Expression<...> on the repository interface. If you do, your repository is no longer a black box and cannot be tested in isolation, since it is at the whim of the caller. Likewise, you can't profile/optimise the DAL in isolation.
A bigger problem is the fact that IQueryable<T> leaks (LOLA). For example, Entity Framework doesn't like Single(), or Take() without an explicit OrderBy() - but L2S is fine with that. L2S should be an implementation detail of the DAL - it shouldn't define the repository.
For similar reasons, I mark the L2S association properties as internal - I can use them in the DAL to create interesting queries, but...
I am starting a new ASP.NET MVC project to learn with, and am wondering what's the optimal way to set up the project(s) to connect to a SQL Server for the data. For example, let's pretend we have a Product table and a Product object I want to use to populate data in my view.
I know somewhere in here I should have an interface that gets implemented, etc but I can't wrap my mind around it today :-(
EDIT: Right now (i.e. the current, poorly coded version of this app) I am just using plain old SQL Server (2000 even) with only stored procedures for data access, but I would not be averse to adding in an extra layer of flexibility for using LINQ to SQL or something.
EDIT #2: One thing I wanted to add: I will be writing this against a V1 of the database, and I will need to be able to let our DBA rework the database and give me a V2 later, so it would be nice to only have to change a few small things that are not provided via the database now but will be later, rather than having to rewrite a whole new DAL.
It really depends on which data access technology you're using. If you're using Linq To Sql, you might want to abstract away the data access behind some sort of "repository" interface, such as an IProductRepository. The main appeal for this is that you can change out the specific data access implementation at any time (such as when writing unit tests).
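A minimal sketch of that idea, assuming LINQ to SQL underneath (all names here are made up):

using System.Collections.Generic;
using System.Linq;

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
    Product GetById(int id);
    void Save(Product product);
}

// Production implementation backed by LINQ to SQL.
public class LinqToSqlProductRepository : IProductRepository
{
    private readonly ShopDataContext _db;
    public LinqToSqlProductRepository(ShopDataContext db) { _db = db; }

    public IEnumerable<Product> GetAll() { return _db.Products.ToList(); }
    public Product GetById(int id) { return _db.Products.SingleOrDefault(p => p.Id == id); }
    public void Save(Product product) { /* InsertOnSubmit / SubmitChanges elided */ }
}

// In unit tests you swap in an in-memory fake:
public class FakeProductRepository : IProductRepository
{
    private readonly List<Product> _products = new List<Product>();
    public IEnumerable<Product> GetAll() { return _products; }
    public Product GetById(int id) { return _products.FirstOrDefault(p => p.Id == id); }
    public void Save(Product product) { _products.Add(product); }
}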
I've tried to cover some of this here:
I would check out Rob Conery's videos on his creation of an MVC store front. The series can be found here: MVC Store Front Series
This series dives into all sorts of design-related subjects as well as coding/testing practices to use with MVC and other projects.
In my site's solution, I have the MVC web application project and a "common" project that contains my POCOs (plain ol' C# objects), business managers and data access layers.
The DAL classes are tied to SQL Server (I didn't abstract them out) and return POCOs to the business managers that I call from my controllers in the MVC project.
I think that Billy McCafferty's S#arp Architecture is a quite nice example of using ASP.NET MVC with a data access layer (using NHibernate as default), dependency injection (Ninject atm, but there are plans to support the CommonServiceLocator) and test-driven development. The framework is still in development, but I consider it quite good and stable. As of the current release, there should be few breaking changes until there is a final release, so coding against it should be okay.
I have done a few MVC applications and I have found a structure that works very nicely for me. It is based upon Rob Conery's MVC Storefront Series that JPrescottSanders mentioned (although the link he posted is wrong).
So here goes - I usually try to restrict my controllers to only contain view logic. This includes retrieving data to pass on to the views and mapping from data passed back from the view to the domain model. The key is to try and keep business logic out of this layer.
To this end I usually end up with 3 layers in my application. The first is the presentation layer - the controllers. The second is the service layer - this layer is responsible for executing complex queries as well as things like validation. The third layer is the repository layer - this layer is responsible for all access to the database.
So in your products example, this would mean that you would have a ProductRepository with methods such as GetProducts() and SaveProduct(Product product). You would also have a ProductService (which depends on the ProductRepository) with methods such as GetProductsForUser(User user), GetProductsWithCategory(Category category) and SaveProduct(Product product). Things like validation would also happen here. Finally your controller would depend on your service layer for retrieving and storing products.
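Expressed as interfaces for brevity, the layering described above might look something like this (the method signatures come from the description; everything else is illustrative):

using System.Collections.Generic;
using System.Web.Mvc;

public interface IProductRepository
{
    IEnumerable<Product> GetProducts();
    void SaveProduct(Product product);
}

public interface IProductService
{
    IEnumerable<Product> GetProductsForUser(User user);
    IEnumerable<Product> GetProductsWithCategory(Category category);
    void SaveProduct(Product product);   // validation lives in the implementation
}

public class ProductsController : Controller
{
    private readonly IProductService _productService;

    public ProductsController(IProductService productService)
    {
        _productService = productService;
    }

    // Actions only map between view models and the service layer,
    // so the controller stays thin and easy to test.
}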
You can get away with skipping the service layer but you will usually find that your controllers get very fat and tend to do too much. I have tried this architecture quite a few times and it tends to work quite nicely, especially since it supports TDD and automated testing very well.
For our application I plan on using LINQ to Entities, but as it's new to me there is the possibility that I will want to replace it in the future if it doesn't perform as I would like, and use something else like LINQ to SQL or NHibernate, so I'll be abstracting the data access objects into an abstract factory so that the implementation is hidden from the application.
How you do it is up to you, as long as you choose a proven and well know design pattern for implementation I think your final product will be well supported and robust.
Use LINQ. Create a LINQ to SQL file and drag and drop all the tables and views you need. Then when you call your model all of your CRUD level stuff is created for you automagically.
LINQ is the best thing I have seen in a long long time. Here are some simple samples for grabbing data from Scott Gu's blog.
LINQ Tutorial
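For instance, once the .dbml designer has generated the DataContext and entity classes, a query is only a few lines (the context and table names below are whatever you dragged onto the designer):

using (var db = new NorthwindDataContext())
{
    var discontinued = from p in db.Products
                       where p.Discontinued
                       orderby p.ProductName
                       select p;

    foreach (var product in discontinued)
    {
        Console.WriteLine(product.ProductName);
    }
}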
I just did my first MVC project and I used a Service-Repository design pattern. There is a good bit of information about it on the net right now. It made my transition from Linq->Sql to Entity Framework effortless. If you think you're going to be changing a lot put in the little extra effort to use Interfaces.
I recommend Entity Framework for your DAL/Repository.
Check out the Code Camp Server for a good reference application that does this very thing and as #haacked stated abstract that goo away to keep them separated.
I think you need an ORM, for example Entity Framework (Code First).
You can create some classes for the model, use these models for your logic and views, and map them to db (v1).
When the DBA gives you the new db (v2), you only change the mapping config. This works when v1 and v2 are both relational databases (SQL Server, MySQL, Oracle...); if db (v1) is a relational database and db (v2) is a NoSQL store (Mongo, Redis, Couchbase...), that won't work and you may need to do some find and replace.
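For example, with EF Code First the difference between db (v1) and db (v2) can often be absorbed entirely in the mapping configuration, so the model classes and the rest of the application stay untouched (the table and column names here are invented):

using System.Data.Entity;

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // v1 mapping:
        // modelBuilder.Entity<Product>().ToTable("Product");

        // v2 mapping: the DBA renamed the table and a column; only this config changes.
        modelBuilder.Entity<Product>().ToTable("tblProducts");
        modelBuilder.Entity<Product>()
                    .Property(p => p.Name)
                    .HasColumnName("ProductName");
    }
}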