LINQ to Entities and business logic - C#

Whenever I read an article on LINQ to Entities, it sounds like I should embed my business logic in the generated entity classes.
This would mean that the "user" of the business objects (controller, service layer, ...) would have to know about the data context object that's needed to use LINQ.
It would also mean that the DAL logic and the business logic get mixed together.
A lot of Microsoft examples use some kind of DTO approach. I'm not a big fan of the DTO pattern.
So should I let the business object encapsulate the LINQ entity and provide access to it through properties, or should I stick with the DTO pattern?
What are your suggestions?
Thanks

The entity model generates partial classes. In a project of mine I'm using an entity model inside a class library and added a few .cs files to it that add functionality to the default entity classes. (Mostly functions related to error and message logging to a database table and a method to import/export all data to XML.)
But true business logic is located in a second class library that refers to this entity class library.
Let me explain. First I create an entity model from the database; it contains a list of company names and addresses. I do this by choosing "New project|Class library" and then adding an ADO.NET Entity Data Model to this library. The entity model is linked to my SQL Server database, imports the tables I need, and auto-creates classes for the tables I want to access.
I then add a second .cs file for every table that I want to expand. These files contain a few raw methods that are strongly linked to the database. (Mostly import/export methods and error logging.) This project I call Entity.Model, and it compiles to Entity.Model.dll.
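As a rough sketch of what one of those expansion files might look like (the Company entity and its fields are assumptions for illustration):

```csharp
using System.Xml.Linq;

// Company.Extensions.cs - hypothetical second .cs file expanding the
// EF-generated Company entity with a raw, database-oriented helper.
// Id, Name and Address are assumed to exist on the generated half.
public partial class Company
{
    // Export this entity's scalar fields to XML.
    public XElement ToXml()
    {
        return new XElement("Company",
            new XElement("Id", Id),
            new XElement("Name", Name),
            new XElement("Address", Address));
    }
}
```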
Then I add a second project which will contain the business logic. Again, I use "New project|Class library" to create it and add Entity.Model.dll to its references. I then start writing classes which translate the database-specific classes into more logical classes. In general, I don't have to make many changes, except that I protect or hide certain fields and add a few calculated fields. The business logic only exposes the functionality that I want to access from my client application and not a single method more. Thus, if I don't allow the client to delete companies, then the "Delete" function in the entity model will not be exposed in the business layer. And maybe I want to send a notification when a company changes its address, so I'll add an event that gets triggered when the address field of the company is changed. (Which will write a message log or whatever.) I call this project Business.Logic and it compiles to Business.Logic.dll.
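A minimal sketch of such a business class, assuming a generated Company entity with Name and Address fields; note there is deliberately no Delete method, and changing the address raises an event:

```csharp
using System;

// Hypothetical business object wrapping the generated entity.
public class CompanyBO
{
    private readonly Entity.Model.Company _entity;

    // Raised whenever the address changes, so a subscriber can log it.
    public event EventHandler AddressChanged;

    public CompanyBO(Entity.Model.Company entity)
    {
        _entity = entity;
    }

    public string Name
    {
        get { return _entity.Name; }
    }

    public string Address
    {
        get { return _entity.Address; }
        set
        {
            if (_entity.Address == value) return;
            _entity.Address = value;
            var handler = AddressChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }
}
```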
Finally, I create the client application and add a reference to Business.Logic.dll. (But NOT to the entity model.) I can now start writing my application and access the database through the business layer. The business layer does validations, triggers a few events and whatever else is needed, in addition to modifying the database through the entity model. And the entity model just serves to keep the database relations simple, allowing me to "walk through" the data across all the foreign-key links in the database.

I wouldn't edit the generated files as they are likely to change.
What you could do is wrap them in some query object and pass that around.
Ayende makes a very good point about where the DAL should really live.
Also, you should be a fan of viewmodels/DTOs ;)

I prefer to wrap calls to the entity classes in business classes. (Call them BCs for short.) Each BC has several constructors, one or more of which allows the context to be passed to it. This allows one BC to call another BC and load related data in the same context. This also makes working with collections of BCs that need a shared context simpler.
In the constructors that don't take a context, I create a new context in the constructor itself. I make the context available through a read-only property that other BCs can access.
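A minimal sketch of that constructor pattern, with hypothetical names for the BC and the generated context:

```csharp
// CustomerBC and MyEntities are illustrative names.
public class CustomerBC
{
    private readonly MyEntities _context;

    // Standalone use: the BC creates its own context.
    public CustomerBC() : this(new MyEntities()) { }

    // Composed use: an existing context is passed in, so several BCs
    // can share one context and load related data together.
    public CustomerBC(MyEntities context)
    {
        _context = context;
    }

    // Other BCs reach the shared context through this read-only property.
    public MyEntities Context
    {
        get { return _context; }
    }
}
```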
I write using the MVVM pattern. The nice thing about this approach is that so far I have been able to write my viewmodels without ever referring to the data context, just the BCs that form my models. Hence if I need to modify my data access (replace Entity Framework, or upgrade to version 4 when it is out of beta) my views and viewmodels are isolated from the changes required in the BCs.
Not sure this is the best approach, but so far I have liked the results. It does take tweaking to get the BCs designed correctly, but I end up with a flexible, extensible, and maintainable set of classes.

Related

Using EF and Passing Data Through Application Layers

All,
We are using EF as our primary data access technology. Like many apps out there, we have a business objects/domain layer. This layer talks to our repository, which, in turn, talks to EF.
My question is: What is the best mechanism for passing the data back and forth to/from EF? Should we use the EF-generated entity classes (we did DB-first development, so we have entity classes that EF generated), create our own DTOs, use JSON or something else?
Of course, I could make an argument for each of these, as well as a counter-argument against them. I'm looking for opinions based on experience building a non-trivial application using a layered architecture and EF.
Thanks,
John
I would use POCOs and use them with EF. You can still do that with the DB first approach.
The main benefit is that your business objects will not be tied to any data access technology.
Your underlying storage mechanism can, and will, change but your POCOs remain. All that business logic is easily re-used and tested.
As you're looking for cons, I would say it might take longer. However, that cost is well worth it.
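To illustrate, here is a hypothetical POCO; nothing in it references EF, so the business rule can be unit-tested without a database:

```csharp
// Plain business object: no base class, no EF attributes, no context.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal CreditLimit { get; set; }

    // The rule travels with the object, whatever the storage layer is.
    public bool CanPlaceOrder(decimal orderTotal)
    {
        return orderTotal <= CreditLimit;
    }
}
```

Because the class has no EF dependency, replacing the storage mechanism later means changing the mapping, not the business objects.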
With T4 templates I put the actual EF-generated entities in a common project that is referenced by all other projects. I use the EF database-first models throughout the entire application (including as view models). If I need to add properties to an entity that are not in the database, I just extend the partial class of the entity in the common project. I have written dozens of large n-tier applications using this model and it has worked great.
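For example, extending a generated entity with a property that has no backing column might look like this (all names are illustrative):

```csharp
// Order.Extensions.cs in the common project. The generated partial
// half maps to the database; this half adds a derived property that
// the EDMX mapping simply ignores.
public partial class Order
{
    // OrderNumber and OrderDate are assumed to be database-mapped
    // properties on the generated half.
    public string DisplayLabel
    {
        get { return OrderNumber + " (" + OrderDate.ToShortDateString() + ")"; }
    }
}
```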

Using Entity Framework with Existing Models

I am working on an MVC4 project which will need to use a number of different databases, each with a few stored procedures for searching. The site is an asset search tool which needs to query various existing systems. If I allow EF to generate models on its own, I will end up with a model for each procedure I use in each database.
What I would prefer is to have my own POCO model already defined and have EF map its results to that model. So regardless of which database the data is taken from, it maps back to that same model. The column names in each database differ slightly, so it would really need to map columns to model properties.
There is no writing back to the database; it purely selects data out.
On the 'Edit Function Import' form I can create a model based on the results. There is also an option to view 'Function Import Mapping', but it does not appear to do what I am looking for.
Has anyone else tried this?
I've added an image to help explain the issue.
The closest I have managed so far is to have EDMX1 query two databases. This only works because they are on the same DB server; I had to fully qualify the DB names in the stored procedure. I could then use one EF model as a return type for the two queries. That model still is not usable in another EDMX, though, so if I need to connect to a different DB server, I still cannot share the model. So the problem is not solved.
Here is an image of current progress.
Function Import Mapping is for mapping stored procedure / function calls to EF code. It's not really relevant here, unless you're using stored procs (which is not the way to go 90% of the time with EF - only use stored procs for more complex procedures).
An EF context, by its very nature, can only have a single database associated with it. You need to create multiple contexts in order to access multiple databases at once.
What I would do in your case is create a database-first schema (.edmx) file for each database, then write a service-layer abstraction above them that allows you to flatten the data into your expected model. This is the kind of thing I do all the time, regardless of how many databases I'm working with at once. You've almost outlined this in your first diagram. The service layer may have multiple classes (for example, for a blog website you might have BlogService, UserService, CommentService, etc.), each of which contains methods that you call from your application layer.
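A sketch of what one such service method could look like, with two hypothetical database-first contexts whose function imports are flattened into a single shared POCO:

```csharp
using System.Collections.Generic;
using System.Linq;

// AssetSearchService, both contexts and both function imports are
// assumptions; the point is the flattening into one model.
public class AssetSearchService
{
    public List<AssetResult> Search(string term)
    {
        var results = new List<AssetResult>();

        using (var db1 = new InventoryEntities())
        {
            results.AddRange(
                db1.SearchAssets(term)          // function import for the SP
                   .Select(r => new AssetResult { Id = r.AssetId, Name = r.AssetName }));
        }

        using (var db2 = new FacilitiesEntities())
        {
            results.AddRange(
                db2.FindAssets(term)            // different SP, different columns
                   .Select(r => new AssetResult { Id = r.Id, Name = r.Description }));
        }

        return results;
    }
}

// The one model the application layer sees, regardless of source DB.
public class AssetResult
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```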
I've put a quick diagram together that might help to explain:
http://www.gliffy.com/go/publish/image/4818386/L.png
The service layer does all of your EF work, and your application layer (or business layer, whatever you want to call it) will do all of your business logic.
This setup lends itself well to TDD and Dependency Injection / IoC. Everything is neat and nicely separated.

Strategies for replacing legacy data layer with Entity framework and POCO classes

We are using C# 4.0 on .NET, VS 2010, EF 4.1 and legacy code in the project we are working on.
I'm working on a WinForms project where I have made the decision to start using Entity Framework 4.1 for accessing an MS SQL database. The code base is quite old and we have an existing data layer that uses data adapters. These data adapters are used all over the place (in web apps and WinForms apps). My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
So my idea is to more or less combine EF with the legacy data access layer and slowly replace the legacy data layer with a more modern take on things using EF. So for now we need to use both EF and the legacy db access code.
What I have done so far is add a project containing the edmx file and context. The edmx is generated using the database-first approach. I have also added another project that contains the POCO classes (by using the ADO.NET POCO Entity Generator). I have more or less followed Julia Lerman's approach in her book "Programming Entity Framework" on how to split the model and the generated POCO classes. The database model has been set for years and it's not an option to change the tables and the relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is.
I have read about the repository pattern and unit of work, and I kind of like the patterns, but I struggle to implement them when I have both EF and the legacy DB access code to deal with. Especially since I don't have the time to replace all of the legacy DB access code with a pure EF implementation. In a perfect world I would start all over again with a fresh take on the data model, but that is not an option here.
Are the repository and unit of work patterns the way to go here? In order to use the POCO classes in my business layer, I sometimes need to use both EF and the legacy DB code to populate my POCO classes. In other words, I can sometimes use EF to retrieve part of the data I need, then use the old DB access layer to retrieve the rest, and then map the data onto my POCO classes. When I want to update some data, I need to pick the data from the POCO classes and use the legacy data access code to store it in the database. So I need to map the data retrieved from the legacy data access layer onto my POCO classes when I want to display the data in the UI, and vice versa when I want to save data to the database.
To complicate things, we store some data in tables whose names we don't know until runtime (please don't ask me why :-)). So in the old DB access layer, we had to create SQL statements on the fly, inserting the table and column names based on information from other tables.
I also find that the relationships between the POCO classes are somewhat too database-centric. In other words, I feel that I need a more simplified domain model to work with. Perhaps I should create a domain model that fits the bill and then use the POCO classes as "DAOs" to populate the domain model classes?
How would you implement this using the Repository pattern and Unit of Work pattern? (if that is the way to go)
Alarm bells are ringing for me! We tried to do something similar a while ago (only with NHibernate, not EF4). We had several problems running ADO.NET alongside an ORM - database concurrency being a big one.
"The database model has been set for years and it's not an option to change the tables and the relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is."
Yep. Same thing! The problem was that our stored procs contained a lot of business logic and weren't simple CRUD procs, so keeping the ORM updated with the various updates performed by a stored procedure was not easy at all. Single Responsibility Principle - not a good one to break!
"My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer."
Maybe you could decouple without the need for an ORM - how about putting a service/facade layer in front of your UI layer to coordinate all interactions with the underlying domain and hide it from the UI?
If your database is 'king' and your app is highly data driven I think you will always be fighting an uphill battle implementing the patterns you mention.
Embrace ADO.NET for this project - use EF4 and DDD patterns on your next greenfield project :)
EDMX + the POCO class generator results in EFv4 code, not EFv4.1 code, but you don't have to bother with these details. EFv4.1 just offers a different API which does exactly the same thing (it is only a wrapper around the EFv4 API).
Depending on the way you use datasets, you can run into some very hard problems. Datasets are a representation of the change-set pattern: they know what changes were made to the data and they are able to store just those changes. EF entities know this only if they are attached to the context which loaded them from the database. Once you work with detached entities, you must make a big effort to tell EF what has changed - especially when modifying relations (detached entities are a common scenario in web applications and web services). For those purposes EF offers another template called self-tracking entities, but they have other problems and limitations (for example missing lazy loading, you cannot apply changes when an entity with the same key is attached to the context, etc.).
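For example, re-attaching a detached entity with the EF4 ObjectContext API takes roughly this ceremony (entity and context names are assumptions):

```csharp
using System.Data;

public static class DetachedUpdateExample
{
    public static void UpdateDetached(MyEntities context, Customer detached)
    {
        // Attach registers the object with the context, but as Unchanged...
        context.Customers.Attach(detached);

        // ...so it must be explicitly flagged as modified before saving.
        context.ObjectStateManager.ChangeObjectState(detached, EntityState.Modified);

        context.SaveChanges();
    }
}
```

With a dataset, the change tracking would have travelled with the data itself.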
EF also doesn't support several features used in datasets - for example unique keys and batch updates. It's funny that newer MS APIs usually solve some pains of the previous APIs but at the same time provide far fewer features than the previous APIs, which introduces new pains.
Another problem can be performance - EF is slower than direct data access with datasets and has higher memory consumption (and yes, there are some memory leaks reported).
You can forget about using EF to access tables which you don't know at design time. EF doesn't allow any dynamic behavior: table names and the type of database server are fixed in the mapping. Another problem can be the way you use triggers - ORM tools don't like triggers, and EF has limited features when working with database-computed values (a value can be filled either in the database or in the application, not both).
Filling POCOs from EF + datasets the way you describe sounds like it will not be possible using EF alone. EF has some allowed mapping patterns, but the possibilities for mapping several tables to a single POCO class are extremely limited and constrained (if you want those tables to be editable). If you mean just loading one entity from EF and another entity from a data adapter and making a reference between them, you should be OK - in this scenario the repository sounds like a reasonable pattern, because the purpose of the repository is exactly this: load or persist data. Unit of work can also be useful, because you will most probably want to reuse a single database connection between EF and the data adapters to avoid a distributed transaction when saving changes. The UoW will be the place responsible for handling this connection.
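A rough sketch of such a unit of work, sharing one open connection between the EDMX-generated ObjectContext and the legacy adapters (all names, and the EntityConnection wiring, are assumptions):

```csharp
using System;
using System.Data.EntityClient;
using System.Data.Metadata.Edm;
using System.Data.SqlClient;
using System.Reflection;

public class UnitOfWork : IDisposable
{
    private readonly SqlConnection _storeConnection;
    private readonly MyEntities _context;   // hypothetical EDMX-generated context

    public UnitOfWork(string connectionString)
    {
        _storeConnection = new SqlConnection(connectionString);
        _storeConnection.Open();

        // Wrap the already-open store connection in an EntityConnection
        // so the EDMX model can reuse it instead of opening its own.
        var workspace = new MetadataWorkspace(
            new[] { "res://*/Model.csdl", "res://*/Model.ssdl", "res://*/Model.msl" },
            new[] { Assembly.GetExecutingAssembly() });
        _context = new MyEntities(new EntityConnection(workspace, _storeConnection));
    }

    public MyEntities Context { get { return _context; } }

    // Legacy data adapters are handed this same connection, so one
    // local SqlTransaction can cover both and DTC is never involved.
    public SqlConnection Connection { get { return _storeConnection; } }

    public void Dispose()
    {
        _context.Dispose();
        _storeConnection.Dispose();
    }
}
```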
EF mapping is related to database design - you can introduce some object-oriented modifications, but EF is still closely dependent on the database. If you want to use a more advanced domain model, you will probably need separate domain classes filled from EF and datasets. Again, it will be the responsibility of the repository to hide these details.
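And a sketch of a repository hiding that split: one object filled partly from EF and partly from a legacy data adapter (all names are assumptions):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;

public class OrderRepository
{
    public OrderPoco GetOrder(int id)
    {
        var poco = new OrderPoco();

        // Part of the data comes from EF...
        using (var context = new LegacyEntities())   // hypothetical context
        {
            var entity = context.Orders.Single(o => o.OrderId == id);
            poco.Id = entity.OrderId;
            poco.Status = entity.Status;
        }

        // ...and the rest from the old data-adapter layer.
        DataTable details = LegacyOrderAdapter.GetOrderDetails(id);  // hypothetical helper
        foreach (DataRow row in details.Rows)
        {
            poco.Lines.Add(new OrderLinePoco
            {
                Product = (string)row["ProductName"],
                Quantity = (int)row["Qty"]
            });
        }

        return poco;
    }
}

public class OrderPoco
{
    public OrderPoco() { Lines = new List<OrderLinePoco>(); }
    public int Id { get; set; }
    public string Status { get; set; }
    public List<OrderLinePoco> Lines { get; private set; }
}

public class OrderLinePoco
{
    public string Product { get; set; }
    public int Quantity { get; set; }
}
```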
From what we have implemented so far, I have learned the following things.
POCO and self-tracking objects are difficult to deal with: if you do not have a solid understanding of what goes on inside, you will hit a number of unexpected behaviors in code that may have worked well in your previous project.
Changing patterns is not easy. So far we had been managing simple CRUD without the unit of work and identity map patterns, and a lot of the legacy code we wrote in the past does not account for these new patterns, so its logic will not work correctly.
In our previous code, we were simply using transactions and single insert/update/delete statements sent directly to the database, assuming transactions on the server side would take care of all operations.
In such conditions, we were dealing directly with IDs all the time; newly generated IDs were immediately available after a single insert statement. However, this is not the case with EF.
In EF, we are not dealing with IDs; we are dealing with navigation properties, which is a huge change from earlier ADO.NET programming methods.
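The shift looks roughly like this with the EF4 ObjectContext API (entity and context names are assumptions):

```csharp
using System;

public static class OrderCreationExample
{
    public static int CreateOrder(MyEntities context, Customer existingCustomer)
    {
        var order = new Order { OrderDate = DateTime.Now };

        // Relate through the navigation property instead of assigning
        // a CustomerId foreign key by hand.
        order.Customer = existingCustomer;

        context.Orders.AddObject(order);   // EF4 ObjectContext API
        context.SaveChanges();

        // The store-generated ID only exists after SaveChanges, unlike
        // the immediately available ID from a hand-written INSERT.
        return order.OrderId;
    }
}
```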
From our experience we found that simply replacing the earlier data access code with EF will result in chaos. But EF + RIA Services offer a completely new solution where you will probably get everything you need, and your UI will bind to it very easily. So if you are thinking about a complete rewrite using UI + RIA Services + EF, then it is worth it, because a lot of the dependency on query management disappears automatically. You will be focusing only on business logic. But this is a big decision, and the number of man-hours required for a complete rewrite versus just swapping in EF is almost the same.
So we went the UI + RIA Services + EF way, and we started replacing one module at a time. Mostly, EF will easily co-exist with your existing infrastructure, so there is no harm.

Design Patterns: Need help understanding this concept and how it applies to my project

The past few days I have done a lot of research on the DAL/BLL/UI approach without a very clear understanding of how it will apply to my project. In the past, I have left out the BLL, connecting my UI directly to the Data Access Layer (LINQ to SQL dbml). But I do not think this is a good idea where I work now (or maybe even in the past), because we have a lot of different applications and I'd like to reuse the same DAL/BLL as they are built.
My question is: how does the BLL help me when, in most of my applications, all I really do is use the LinqToSqlDataSource/GridView to connect to my data context to take care of all the updating/editing, etc.? Also, each new web application will, at some level, require unique changes to the DAL/BLL to get the required data, possibly affecting other apps using the same DAL/BLL. Is this reuse of the DAL/BLL the right way of doing this, or am I missing something?
I think the BLL comes in when I need to build, for example, security classes for the various web applications that will be built. But when I use the LinqToSqlDataSource, why would I bother to connect it to the BLL?
DAL
- LinqToSQL dbml DataContext.
- Does using LinqToSQL change how I should use this design?

BLL
- Security for the various websites used by the company.
- When using a LinqToSqlDataSource, what should queries against the DAL return? Functions that handle various result sets? (I am really unsure how this should work with the BLL, sorry if the question is unclear.)

UI
- Reference only the BLL?
The DAL and BLL are separated by one often subtle but key difference: business logic. Sounds moronically simple, but let me explain further, because the distinctions can be VERY fine and yet impact architecture in huge ways.
Using Linq2SQL, the framework generates very simple objects that each represent one record in one table. These objects are DTOs; they are lightweight POCO (Plain Ol' CLR Object) classes that have only fields. The Linq2SQL framework knows how to instantiate and hydrate these objects from DB data, and similarly it can digest the data contained in one into SQL DML that creates or updates the DB record. However, few or none of the rules governing the relationships between fields of various objects are known at this level.
Your actual domain model should be smarter than this; at least smart enough to know that a property on an Order object named SubTotal should equal the sum of the ExtendedCosts of all its OrderLines, and similarly that ExtendedCost should be the product of UnitPrice and Quantity. In many modern programs, your domain forms part of your BLL, at least to this extent. The objects created by Linq2SQL probably shouldn't have to know all this, especially if you aren't persisting SubTotal or ExtendedCost. If you rely on the Linq2SQL DTOs, you've basically tied yourself to what's called an Anemic Domain Model, which is a known anti-pattern. If the domain object can't keep itself internally consistent, then any object that works with the domain object must be trusted to keep it that way, requiring all those objects to know rules they shouldn't have to.
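A minimal sketch of the non-anemic version of that example, where the objects keep themselves consistent:

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    // Derived, never stored or set independently.
    public decimal ExtendedCost
    {
        get { return UnitPrice * Quantity; }
    }
}

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public IEnumerable<OrderLine> Lines
    {
        get { return _lines; }
    }

    public void AddLine(OrderLine line)
    {
        _lines.Add(line);
    }

    // SubTotal is always the sum of the lines' ExtendedCosts; no
    // caller can ever make the two drift apart.
    public decimal SubTotal
    {
        get { return _lines.Sum(l => l.ExtendedCost); }
    }
}
```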
The UI should know about the domain, or if you prefer, it should know some abstracted way to get the data from the domain for read/write purposes (generally encapsulated in objects called Controllers, which work with the domain layer and/or Linq2SQL). The UI should NOT have to know about the DB in any program of moderate size or larger; either the domain objects can hydrate themselves with a reference to objects in the DAL, or they are produced by custom objects in the DAL that you create to do the hydration, which are then given to the controller. The connected ADO model and its interop with GridViews is admirable, but it doesn't allow for abstraction. Say you wanted to insert a web service layer between the domain and the UI, to allow the UI to live in a mobile app that worked with data in your warehouse. You'd have to rebuild your UI, because you can no longer get objects from Linq2SQL directly; you get them from the web services. If you had a Controller layer that talked to Linq2SQL, you could replace that layer with controllers that talk to the web services. It sounds like a minor difference; you always have to change something. But now you're using EXACTLY the same UI on the mobile and desktop apps, so changes at THAT layer don't have to be made twice just because the two apps get their data in different ways.
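A sketch of the controller abstraction that makes that swap cheap; every name here is hypothetical:

```csharp
using System.Linq;

// The UI depends only on this interface.
public interface ICustomerController
{
    Customer GetCustomer(int id);
}

// Desktop build: implemented against Linq2SQL.
public class LinqCustomerController : ICustomerController
{
    public Customer GetCustomer(int id)
    {
        using (var db = new WarehouseDataContext())   // hypothetical .dbml context
        {
            return db.Customers.Single(c => c.Id == id);
        }
    }
}

// Mobile build: same interface, implemented against the web services.
public class ServiceCustomerController : ICustomerController
{
    public Customer GetCustomer(int id)
    {
        return new WarehouseServiceClient().GetCustomer(id);   // hypothetical proxy
    }
}
```

The UI code is identical in both builds; only the controller registration changes.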
This is a great question that I have been mulling over with our catalog app for a year now. A specific instance for me might help you with the pattern.
I have a page to display the contents of a shopping cart. In the 'early days' this page had a grid populated by the results of a SQL stored procedure that, given the order number, listed the items in the cart.
Now I have a 'cart' BLL object which contains a collection of 'row' objects. The grid is the same, but the data source is the cart's rows.
Why did I do this? Initially, not because of any fancy design patterns. I had so many special cases to handle based on fields in each row, AND I had other places where I needed to show the same cart-content data, that it just made more sense to build the objects. Now a cart loads from a repository and my pages have no idea what that repository does. Heck, for testing, it's hard-coded cart data.
The cart then uses a repository to load up the rows. Each row has logic to manipulate itself, without knowing where the data came from.
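The shape of that setup might look something like this (the interface and names are assumptions); for testing, a fake ICartRepository returning hard-coded rows is all the pages need:

```csharp
using System.Collections.Generic;

public interface ICartRepository
{
    List<CartRow> LoadRows(int orderNumber);
}

public class Cart
{
    private readonly ICartRepository _repository;

    public Cart(ICartRepository repository)
    {
        _repository = repository;
    }

    public List<CartRow> Rows { get; private set; }

    public void Load(int orderNumber)
    {
        // The page binds to Rows without knowing where they came from.
        Rows = _repository.LoadRows(orderNumber);
    }
}

public class CartRow
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }

    // Each row manipulates itself, independent of the data source.
    public decimal LineTotal
    {
        get { return Price * Quantity; }
    }
}
```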
Hopefully that helps?

Adding custom navigation properties using SPs in .NET Entity Framework 4

My company uses stored procs for all SELECT operations, which makes it rather difficult for me to create sensible navigation properties. I'm not too concerned at this point whether they're lazy-loaded or not.
So, for example, I created an entity for Customer, then created a FunctionImport to map GetAllCustomersSP to return a collection of Customer entities. But I want a navigation property "Orders" on each Customer entity.
But if I use the Customer entity's partial class to just add this property, the problem is that I don't have access to the original context, so I can't call GetCustomerOrdersSP either explicitly or deferred.
The only option I can see is to modify my repository to add these properties explicitly, which seems lame because it puts entity logic into the repository.
Is there something I'm missing here? I can see in the entity model designer that I can specify custom insert, update, delete SPs but I don't see any way to use select SPs to actually retrieve the data.
I agree with Tim here... any solution you come up with isn't going to fully leverage the ORM and will be a potential nightmare to maintain. I would suggest creating a model, in code, that is framed in the manner in which you want to develop.
In the data access layer of your app, you can map the data objects that use SPs to hydrate your model objects (have a look at AutoMapper). Your app will only know about your model objects.
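A sketch of that mapping step using AutoMapper's classic static API (the SP result and model type names are assumptions based on the question):

```csharp
using System.Collections.Generic;
using System.Linq;
using AutoMapper;

public class CustomerDataAccess
{
    static CustomerDataAccess()
    {
        // Map the function-import result row to the app's model type.
        Mapper.CreateMap<GetAllCustomersSP_Result, CustomerModel>();
    }

    public List<CustomerModel> GetAllCustomers(MyEntities context)
    {
        // Callers only ever see CustomerModel, never the SP row type.
        return context.GetAllCustomersSP()
                      .Select(row => Mapper.Map<CustomerModel>(row))
                      .ToList();
    }
}
```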
Doing this will give you consistency in how you interact with the objects, and you can begin to apply pressure to the powers that be to allow more fine-grained access to the tables, at which point you can adjust your data access layer to support EF and remove the SPs. At that point you would be able to consider migrating the objects you created to POCO objects that are persisted via EF.
We had a similar issue in that granting raw DB access was "forbidden". We overcame this problem by using a model in which we only grant access to tables as they are used, not to the entire database, and by assuring the DBA that EF uses parameterized SQL, eliminating the concern of SQL injection.
