It's probably a simple 3-tier problem. I just want to make sure we use best practice for this, and I am not that familiar with the structures yet.
We have the 3 tiers:
GUI: ASP.NET for Presentation-layer (first platform)
BAL: Business layer handling the logic on a web server in C#, so we can use it both for WebForms/MVC and web services
DAL: LINQ to SQL in the data layer, returning business objects, not LINQ entities.
DB: The database will be Microsoft SQL Server/Express (haven't decided yet).
Let's think of a setup where we have a database of [Person]s. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s with their corresponding city names etc.
The point is that a lot of the details have to be joined in from other tables.
{Relations}/[tables]
[Person]:1 --- N:{PersonAddress}:M --- 1:[Address]
[Address]:N --- 1:[PostalCode]
Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur?
Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?
class PersonBO
{
    public int ID { get; set; }
    public string Name { get; set; }
    public List<AddressBO> Addresses { get; set; } // Question #1
}
// Q1: do we retrieve the address objects before returning the PersonBO, and should it be an array instead? Or is this totally wrong for n-tier/3-tier?
class AddressBO
{
    public int ID { get; set; }
    public string StreetName { get; set; }
    public int PostalCode { get; set; } // Question #2
}
// Q2: do we make the lookup or just leave the PostalCode for later lookup?
Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)
You're kind of reinventing the wheel; ORMs already solve most of this problem for you and you're going to find it a little tricky to do yourself.
The way ORMs like LINQ to SQL, Entity Framework and NHibernate do this is with a technique called lazy loading of associations (which can optionally be overridden with an eager load).
When you pull up a Person, it does not load the Address until you specifically ask for it, at which point another round-trip to the database occurs (lazy load). You can also specify on a per-query basis that you want the Address to be loaded for every person (eager load).
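For illustration, here is roughly what that choice looks like in LINQ to SQL; the PersonDataContext name and the Persons/Addresses members are assumptions based on your model, not generated code:

// Lazy load (the default): Addresses is only queried on first access,
// which costs an extra round-trip per person.
using (var db = new PersonDataContext())
{
    var person = db.Persons.First(p => p.ID == 1);
    int count = person.Addresses.Count;   // the second query runs here
}

// Eager load: tell the context up front to fetch Addresses with every Person.
using (var db = new PersonDataContext())
{
    var options = new System.Data.Linq.DataLoadOptions();
    options.LoadWith<Person>(p => p.Addresses);
    db.LoadOptions = options;              // must be set before the first query
    var person = db.Persons.First(p => p.ID == 1);   // Addresses already loaded
}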
In a sense, with this question you are basically asking whether or not you should perform lazy or eager loads of the AddressBO for the PersonBO, and the answer is: neither. There isn't one single approach that universally works. By default you should probably lazy load, so that you don't do a whole lot of unnecessary joins; in order to pull this off, you'll have to build your PersonBO with a lazy-loading mechanism that maintains some reference to the DAL. But you'll still want to have the option to eager-load, which you'll need to build into your "business-access" logic.
Another option, if you need to return a highly-customized data set with specific properties populated from many different tables, is to not return a PersonBO at all, but instead use a Data Transfer Object (DTO). If you implement a default lazy-loading mechanism, you can sometimes substitute this as the eager-loading version.
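As a sketch of the DTO option (all names here are invented for the example): a flat object shaped for one specific screen or service call, assembled by the BAL/DAL from whatever joins it needs, with no lazy loading and no reference back to the DAL:

public class PersonOverviewDto
{
    public int PersonID { get; set; }
    public string Name { get; set; }
    public int AddressCount { get; set; }
    public string HomeCity { get; set; }   // resolved via Address -> PostalCode
}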
FYI, lazy loaders in data access frameworks are usually built with the loading logic in the association itself:
public class PersonBO
{
public int ID { get; set; }
public string Name { get; set; }
public IList<AddressBO> Addresses { get; set; }
}
This is just a POCO, the magic happens in the actual list implementation:
// NOT A PRODUCTION-READY IMPLEMENTATION - DO NOT USE
internal class LazyLoadList<T> : IList<T>
{
private IQueryable<T> query;
private List<T> items;
public LazyLoadList(IQueryable<T> query)
{
if (query == null)
throw new ArgumentNullException("query");
this.query = query;
}
private void Materialize()
{
if (items == null)
items = query.ToList();
}
public void Add(T item)
{
Materialize();
items.Add(item);
}
// Etc.
}
(This obviously isn't production-grade, it's just to demonstrate the technique; you start with a query and don't materialize the actual list until you have to.)
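Tying it back to your PersonBO: the DAL could map the LINQ to SQL entities to business objects and hand the address query to such a list, so the join only runs if Addresses is actually used. A rough sketch, assuming a hypothetical PersonDataContext and the mapping shown inline:

public PersonBO GetPerson(int id)
{
    // Note: no using block here on purpose - the context has to stay alive
    // until the lazy list materializes, which is part of what makes
    // hand-rolled lazy loading tricky.
    var db = new PersonDataContext();

    var person = db.Persons.Single(p => p.ID == id);

    IQueryable<AddressBO> addressQuery = db.PersonAddresses
        .Where(pa => pa.PersonID == id)
        .Select(pa => new AddressBO
        {
            ID = pa.Address.ID,
            StreetName = pa.Address.StreetName,
            PostalCode = pa.Address.PostalCode
        });

    return new PersonBO
    {
        ID = person.ID,
        Name = person.Name,
        Addresses = new LazyLoadList<AddressBO>(addressQuery)
    };
}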
Related
I'm in the process of learning C# & .NET and EF (with aspnetboilerplate), and I came up with the idea of creating a dummy project so I can practice. But for the last 4 hours I've been stuck on this error and hope someone here can help me.
What I created (well, at least I think I created it correctly) are 2 classes called "Ingredient" and "Master".
I want to use the "Master" class to categorize Ingredients.
For example, ingredients like
Chicken breast
chicken drumstick
both belong to Meat (which is a row in the "Master" table), and here is my code:
Ingredient.cs
public class Ingrident : Entity
{
public string Name { get; set; }
public Master Master { get; set; }
public int MasterId { get; set; }
}
Master.cs
public class Master : Entity
{
public string Name { get; set; }
public List<Ingrident> Ingridents { get; set; } = new();
}
IngridientAppService.cs
public List<IngridientDto> GetIngWithParent()
{
var result = _ingRepository.GetAllIncluding(x => x.Master);
// I also tried this, but it doesn't work:
// var result = _ingRepository.GetAll().Where(x => x.MasterId == x.Master.Id);
return ObjectMapper.Map<List<IngridientDto>>(result);
}
IngridientDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Ingrident))]
public class IngridientDto : EntityDto
{
public string Name { get; set; }
public List<MasterDto> Master { get; set; }
public int MasterId { get; set; }
}
MasterDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Master))]
public class MasterDto : EntityDto
{
public string Name { get; set; }
}
When I created an M -> M relationship (in my last practice project), this approach with .GetAllIncluding worked, but now that I have a One -> Many it doesn't.
I hope someone will be able to help me, or at least give me a good hint.
Have a nice day!
Straight up, the examples you are probably referring to (regarding the repository etc.) are overcomplicated and, for most cases, not what you'd want to implement.
The first issue I see is that while your entities are set up for a 1-to-many relationship from Master to Ingredients, your DTOs are set up from Ingredient to Masters which definitely won't map properly.
Start with the simplest thing. Get rid of the Repository and get rid of the DTOs. I'm not sure what the base class "Entity" does, but I'm guessing it exposes a common key property called "Id". For starters I'd probably ditch that as well. When it comes to primary keys there are typically two naming approaches, every table uses a PK called "Id", or each table uses a PK with the TableName suffixed with "Id". I.e. "Id" vs. "IngredientId". Personally I find the second option makes it very clear when pairing FKs and PKs given they'd have the same name.
When it comes to representing relationships through navigation properties one important detail is ensuring navigation properties are linked to their respective FK properties if present, or better, use shadow properties for the FKs.
For example with your Ingredient table, getting rid of the Entity base class:
[Table("Ingredients")]
public class Ingredient
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int IngredientId { get; set; }
public string Name { get; set; }
public int MasterId { get; set; }
[ForeignKey("MasterId")]
public virtual Master Master { get; set; }
}
This example uses EF attributes to aid in telling EF how to resolve the entity properties to respective tables and columns, as well as the relationship between Ingredient and Master. EF can work much of this out by convention, but it's good to understand and apply it explicitly because eventually you will come across situations where convention doesn't work as you expect.
Identifying the (Primary)Key and indicating it is an Identity column also tells EF to expect that the database will populate the PK automatically. (Highly recommended)
On the Master side we do something similar:
[Table("Masters")]
public class Master
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int MasterId { get; set; }
public string Name { get; set; }
[InverseProperty("Master")]
public virtual ICollection<Ingredient> Ingredients { get; set; } = new List<Ingredient>();
}
Again we denote the Primary Key, and for our Ingredients collection, we tell EF what property on the other side (Ingredient) it should use to associate to this Master's list of Ingredients using the InverseProperty attribute.
Attributes are just one option for setting up the relationships etc. The other options are to use configuration classes that implement IEntityTypeConfiguration<TEntity> (EF Core), or to configure them in the OnModelCreating override in the DbContext. That last option I would only recommend for very small projects, as it can quickly become a bit of a God method. You can split it up into calls to various private methods, but at that point you may as well just use IEntityTypeConfiguration classes.
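For reference, a minimal sketch of the configuration-class option in EF Core, expressing the same model as the attributes above:

public class IngredientConfiguration : IEntityTypeConfiguration<Ingredient>
{
    public void Configure(EntityTypeBuilder<Ingredient> builder)
    {
        builder.ToTable("Ingredients");
        builder.HasKey(x => x.IngredientId);
        builder.Property(x => x.IngredientId).ValueGeneratedOnAdd();

        builder.HasOne(x => x.Master)
            .WithMany(x => x.Ingredients)
            .HasForeignKey(x => x.MasterId);
    }
}

// In the DbContext, register all configurations in the assembly:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfigurationsFromAssembly(typeof(IngredientConfiguration).Assembly);
}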
Now when you go to fetch Ingredients with their Master, or a Master with its Ingredients:
using (var context = new AppDbContext())
{
var ingredients = context.Ingredients
.Include(x => x.Master)
.Where(x => x.Master.Name.Contains("chicken"))
.ToList();
// or
var masters = context.Master
.Include(x => x.Ingredients)
.Where(x => x.Name.Contains("chicken"))
.ToList();
// ...
}
Repository patterns are a more advanced concept that have a few good reasons to implement, but for the most part they are not necessary and an anti-pattern within EF implementations. I consider Generic repositories to always be an anti-pattern for EF implementations. I.e. Repository<Ingredient> The main reason not to use repositories, especially Generic repositories with EF is that you are automatically increasing the complexity of your implementation and/or crippling the capabilities that EF can bring to your solution. As you see from working with your example, simply getting across an eager load through to the repository means writing in complex Expression<Func<TEntity>> parameters, and that just covers eager loading. Supporting projection, pagination, sorting, etc. adds even more boiler-plate complexity or limits your solution and performance without these capabilities that EF can provide out of the box.
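To make that concrete, here is roughly the shape a generic repository ends up needing just to pass filtering and eager loading through to EF (a sketch of the boilerplate, not a recommendation):

// Every capability EF already exposes on IQueryable has to be re-plumbed
// through parameters like these - and this still only covers filtering
// and eager loading, not projection, sorting or paging.
public interface IRepository<TEntity> where TEntity : class
{
    IReadOnlyList<TEntity> GetAllIncluding(
        Expression<Func<TEntity, bool>> filter,
        params Expression<Func<TEntity, object>>[] includes);
}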
Some good reasons to consider studying up on repository implementations /w EF:
Facilitate unit testing. (Repositories are easier to mock than DbContexts/DbSets)
Centralizing low-level data rules such as tenancy, soft deletes, and authorization.
Some bad (albeit very common) reasons to consider repositories:
Abstracting code from references or knowledge of the dependency on EF.
Abstracting the code so that EF could be substituted out.
Projecting to DTOs or ViewModels is an important aspect to building efficient and secure solutions with EF. It's not clear what "ObjectMapper" is, whether it is an Automapper Mapper instance or something else. I would highly recommend starting to grasp projection by using Linq's Select syntax to fill in a desired DTO from the models. The first key difference when using Projection properly is that when you project an object graph, you do not need to worry about eager loading related entities. Any related entity / property referenced in your projection (Select) will automatically be loaded as necessary. Later, if you want to leverage a tool like Automapper to help remove the clutter of Select statements, you will want to configure your mapping configuration then use Automapper's ProjectTo method rather than Map. ProjectTo works with EF's IQueryable implementation to resolve your mapping down to the SQL just like Select does, where Map would need to return everything eager loaded in order to populate related data. ProjectTo and Select can result in more efficient queries that can better take advantage of indexing than Eager Loading entire object graphs. (Less data over the wire between database and server/app) Map is still very useful such as scenarios where you want to copy values back from a DTO into a loaded entity.
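For illustration, a rough sketch of both flavours against the Ingredient model; the flattened MasterName property and the mapper instance are assumptions for this example:

// Plain Linq projection: only the selected columns end up in the SQL,
// and the Master data comes along without any explicit Include.
var dtos = context.Ingredients
    .Select(i => new IngridientDto
    {
        Id = i.IngredientId,
        Name = i.Name,
        MasterId = i.MasterId,
        MasterName = i.Master.Name   // hypothetical flattened property
    })
    .ToList();

// With Automapper, ProjectTo resolves the configured mapping into the query
// the same way, instead of loading entities and calling Map afterwards.
var mapped = context.Ingredients
    .ProjectTo<IngridientDto>(mapper.ConfigurationProvider)
    .ToList();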
Do it like this
public class Ingrident : Entity
{
public string Name { get; set; }
[ForeignKey(nameof(MasterId))]
public Master Master { get; set; }
public int MasterId { get; set; }
}
I have 2 tables that store family members. When I use Include to retrieve the family members, the generated T-SQL is what I expected, but when I inspect the result in VS it looks like it never ends.
My questions:
Is this normal?
Should I avoid Include when the relationship becomes complex?
If it is normal, will this consume a lot of memory?
POCO
public class Cust_ProfileTbl
{
[Key]
public long bintAccountNo { get; set; }
public string nvarCardName { get; set; }
public string varEmail { get; set; }
public virtual ICollection<Cust_ProfileFamilyTbl> profileFamilyParents { get; set; }
public virtual ICollection<Cust_ProfileFamilyTbl> profileFamilyChildren { get; set; }
}
public class Cust_ProfileFamilyTbl
{
[Key]
public int intProfileFamily { get; set; }
public long bintAccountNo { get; set; }
public long bintAccountNoMember { get; set; }
public virtual Cust_ProfileTbl custProfileParent { get; set; }
public virtual Cust_ProfileTbl custProfileChild { get; set; }
}
LINQ
var rs = from family in context.member.Include("profileFamilyParents.custProfileChild")
select family;
rs = rs.Where(x => x.bintAccountNo.Equals(1));
var result = rs.ToList();
In OnModelCreating:
modelBuilder.Entity<Cust_ProfileFamilyTbl>()
.HasRequired(m => m.custProfileParent)
.WithMany(t => t.profileFamilyParents)
.HasForeignKey(m => m.bintAccountNo)
.WillCascadeOnDelete(false);
modelBuilder.Entity<Cust_ProfileFamilyTbl>()
.HasRequired(m => m.custProfileChild)
.WithMany(t => t.profileFamilyChildren)
.HasForeignKey(m => m.bintAccountNoMember)
.WillCascadeOnDelete(false);
When people use an ORM like EF in their application, the application design often gets driven by the ORM and the entities defined in its model. When the app is a simple "CRUD" application, that's not a problem but an advantage, because it saves a lot of time.
However when things start to get more complicated, an "ORM guided design" becomes a problem. This looks to be the case.
There are at least two problems, recovered from the comments:
the data retrieved from the DB is more than needed
in this case, because of some particular relationships between entities, there is a circular reference, which creates an endless loop and a stack overflow when trying to show the model in the view
When this kind of situation shows up, the most advisable thing is to break the tight tie between the ORM and the rest of the app, which can be done by defining a new class and projecting the data into it. Let's give it the generic name ProfileDto.
public class ProfileDto { ... }
DTO is the generic name for this kind of class: Data Transfer Object. When they have a specific purpose they can get other names, such as "view model" when they're going to be used as the model sent to an MVC view.
And then, what you need to do is to project the result of the query into the DTO:
var model = theQuery.Select(i => new ProfileDto { a = i.a, b = i.b...}).ToList();
With a good design of the Dto you'll only recover the needed data from the DB, and you'll avoid the loop problem (by not including the navigation property that creates the loop).
NOTE: many times people use mappers, like AutoMapper or ValueInjecter, to make the mapping, or part of the mapping, automatic.
Code standardization is a very good idea until it becomes a source of problems. The main purpose of writing code is implementing the business logic. If code standardization, technology, or whatever, makes it harder to implement business logic, instead of contributing to the solution, they become a problem, so you need to avoid them.
The mapping you created is normal, but whether to use Include depends on the usage.
For example, if you want to cache the whole graph in memory then you may use Include, whereas if you are only showing properties of the Cust_ProfileTbl class in a grid and only want to show the Cust_ProfileFamilyTbl details on click, then you probably don't want to use Include. But be careful if you are using AutoMapper or something similar, because when it tries to map the related properties it will query the database.
It will consume memory when you execute ToList(), because doing so loads the query result into a List collection. If you want to compose further queries over the result, keep it as an IQueryable instead, or just iterate it without loading it into a List.
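A small sketch of that difference, using the table and DbSet names from the question:

// Still just a query definition - nothing has hit the database yet.
IQueryable<Cust_ProfileTbl> query = context.member
    .Where(x => x.bintAccountNo == 1);

// Further composition is still translated to SQL, not done in memory.
var firstPage = query.OrderBy(x => x.bintAccountNo).Take(20);

// Only here does the SQL execute and the results get loaded into memory.
List<Cust_ProfileTbl> loaded = firstPage.ToList();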
I'm using Code First in EF. Let's say I have two entities:
public class Farm
{
....
public virtual ICollection<Fruit> Fruits {get; set;}
}
public class Fruit
{
...
}
My DbContext is something like this:
public class MyDbContext : DbContext
{
....
private DbSet<Farm> FarmSet { get; set; }
public IQueryable<Farm> Farms
{
get
{
return (from farm in FarmSet where farm.owner == myowner select farm);
}
}
}
I do this so that each user can only see his farms, and I don't have to call the Where on each query to the db.
Now, I want to filter all the fruits from one farm, I tried this (in Farm class):
from fruit in Fruits where fruit .... select fruit
but the generated query doesn't include the where clause, which is very important because I have tens of thousands of rows and it is not efficient to load them all and filter them once they're objects.
I read that lazy-loaded properties get filled the first time they're accessed, but they read ALL the data; no filters can be applied UNLESS you do something like this:
from fruits in db.Fruits where fruit .... select fruit
But I can't do that, because Farm has no knowledge of DbContext (I don't think it should(?)) but also to me it just loses the whole purpose of using navigation properties if I have to work with all the data and not just the one that belongs to my Farm.
So,
am I doing anything wrong / making wrong assumptions?
Is there any way I can apply a filter to a navigation property so that it ends up in the actual query? (I'm working with a lot of data.)
Thank you for reading!
Unfortunately, I think any approach you might take would have to involve fiddling with the context, not just the entity. As you've seen, you can't filter a navigation property directly, since it's an ICollection<T> and not an IQueryable<T>, so it gets loaded all at once before you have a chance to apply any filters.
One thing you could possibly do is to create an unmapped property in your Farm entity to hold the filtered fruit list:
public class Farm
{
....
public virtual ICollection<Fruit> Fruits { get; set; }
[NotMapped]
public IList<Fruit> FilteredFruits { get; set; }
}
And then, in your context/repository, add a method to load a Farm entity and populate FilteredFruits with the data you want:
public class MyDbContext : DbContext
{
....
public Farm LoadFarmById(int id)
{
Farm farm = this.Farms.Where(f => f.Id == id).Single(); // or whatever
farm.FilteredFruits = this.Entry(farm)
.Collection(f => f.Fruits)
.Query()
.Where(....)
.ToList();
return farm;
}
}
...
var myFarm = myContext.LoadFarmById(1234);
That should populate myFarm.FilteredFruits with only the filtered collection, so you could use it the way you want within your entity. However, I haven't ever tried this approach myself, so there may be pitfalls I'm not thinking of. One major downside is that it would only work with Farms you load using that method, and not with any general LINQ queries you perform on the MyDbContext.Farms dataset.
All that said, I think the fact that you're trying to do this might be a sign that you're putting too much business logic into your entity class, when really it might belong better in a different layer. A lot of the time, it's better to treat entities basically as just receptacles for the contents of a database record, and leave all the filtering/processing to the repository or wherever your business/display logic lives. I'm not sure what kind of application you're working on, so I can't really offer any specific advice, but it's something to think about.
A very common approach if you decide to move things out the Farm entity is to use projection:
var results = (from farm in myContext.Farms
where ....
select new {
Farm = farm,
FilteredFruits = myContext.Fruits.Where(f => f.FarmId == farm.Id && ...).ToList()
}).ToList();
...and then use the generated anonymous objects for whatever you want to do, rather than trying to add extra data to the Farm entities themselves.
Just figured I'd add another solution to this, having spent some time trying to apply DDD principles to code-first models. After searching around for some time I found a solution like the one below, which works for me.
public class FruitFarmContext : DbContext
{
public DbSet<Farm> Farms { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<Farm>().HasMany(Farm.FruitsExpression).WithMany();
}
}
public class Farm
{
public int Id { get; set; }
protected virtual ICollection<Fruit> Fruits { get; set; }
public static Expression<Func<Farm, ICollection<Fruit>>> FruitsExpression = x => x.Fruits;
public IEnumerable<Fruit> FilteredFruits
{
get
{
//Apply any filter you want here on the fruits collection
return Fruits.Where(x => true);
}
}
}
public class Fruit
{
public int Id { get; set; }
}
The idea is that the farm's fruit collection is not directly accessible but is instead exposed through a property that pre-filters it.
The compromise here is the static expression that is required to be able to address the fruit collection when setting up mapping.
I've started to use this approach on a number of projects where I want to control the access to an objects child collections.
Lazy loading doesn't support filtering; use filtered explicit loading instead:
Farm farm = dbContext.Farms.Where(farm => farm.Owner == someOwner).Single();
dbContext.Entry(farm).Collection(farm => farm.Fruits).Query()
.Where(fruit => fruit.IsRipe).Load();
The explicit loading approach requires two round trips to the database, one for the master and one for the detail. If it is important to stick to a single query, use a projection instead:
Farm farm = (
from farm in dbContext.Farms
where farm.Owner == someOwner
select new {
Farm = farm,
Fruit = dbContext.Fruit.Where(fruit => fruit.IsRipe) // Causes Farm.Fruit to be eager loaded
}).Single().Farm;
EF always binds navigation properties to their loaded entities. This means that farm.Fruit will contain the same filtered collection as the Fruit property in the anonymous type. (Just make sure you haven't loaded into the context any Fruit entities that should be filtered out, as described in Use Projections and a Repository to Fake a Filtered Eager Load.)
My application has an entity model as below and uses Dapper.
public class Goal
{
public string Text { get; set; }
public List<SubGoal> SubGoals { get; set; }
}
public class SubGoal
{
public string Text { get; set; }
public List<Practise> Practices { get; set; }
public List<Measure> Measures { get; set; }
}
and has a repository as below
public interface IGoalPlannerRepository
{
IEnumerable<Goal> FindAll();
Goal Get(int id);
void Save(Goal goal);
}
I came across two scenarios as below
While retrieving data (goal entity), it needs to retrieve all the related objects in hierarchy (all subgoals along with practices and measures)
When a goal is saved all the related data need to be inserted and/or updated
Please suggest: is there a better way to handle these scenarios other than "looping through" the collections and writing lots and lots of SQL queries?
The best way to do large batch data updates in SQL using Dapper is with compound queries.
You can retrieve all your objects in one query as a multiple resultset, like this:
CREATE PROCEDURE get_GoalAndAllChildObjects
    @goal_id int
AS
SELECT * FROM goal WHERE goal_id = @goal_id
SELECT * FROM subgoals WHERE goal_id = @goal_id
Then, you write a dapper function that retrieves the objects like this:
using (var multi = connection.QueryMultiple("get_GoalAndAllChildObjects", new { goal_id = m_goal_id }, commandType: CommandType.StoredProcedure))
{
    var goal = multi.Read<Goal>().Single();
    goal.SubGoals = multi.Read<SubGoal>().ToList();
}
Next comes updating large data in batches. You do that through table-valued parameter inserts (I wrote an article on this here: http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/ ). Basically, you create one table type for each kind of data you are going to insert, then write a procedure that takes those tables as parameters and writes them to the database.
This is super high performance and about as optimized as you can get, plus the code isn't too complex.
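A rough sketch of that table-valued parameter route with Dapper; the dbo.SubGoalTableType type, the save_SubGoals procedure, and the goalId variable are assumptions for this example:

// Assumes a SQL table type:  CREATE TYPE dbo.SubGoalTableType AS TABLE (goal_id int, text nvarchar(max))
// and a procedure save_SubGoals (@subgoals dbo.SubGoalTableType READONLY) that inserts from it.
var table = new DataTable();
table.Columns.Add("goal_id", typeof(int));
table.Columns.Add("text", typeof(string));
foreach (var subGoal in goal.SubGoals)
    table.Rows.Add(goalId, subGoal.Text);

connection.Execute(
    "save_SubGoals",
    new { subgoals = table.AsTableValuedParameter("dbo.SubGoalTableType") },
    commandType: CommandType.StoredProcedure);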
However, I need to ask: is there any point in keeping "subgoals" and all the other objects relational? One easy alternative is to create an XML or JSON document that contains your goal and all its child objects serialized into text, and just save that object to the file system. It's unbelievably high performance, very simple, very extensible, and takes very little code. The only downside is that you can't write a SQL statement to browse across all subgoals without a bit of work. Consider it - it might be worth a thought ;)
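If you did go the document route, the sketch is tiny; this uses Json.NET as one option, with a hypothetical goalId used for the file name:

// Save the whole aggregate as one document (Newtonsoft.Json).
string json = JsonConvert.SerializeObject(goal);
File.WriteAllText("goal-" + goalId + ".json", json);

// Reading it back restores the full Goal -> SubGoal -> Practice/Measure graph.
Goal loaded = JsonConvert.DeserializeObject<Goal>(
    File.ReadAllText("goal-" + goalId + ".json"));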
I have the following repository. I have a mapping between LINQ to SQL generated classes and domain objects using a factory.
The following code will work; but I am seeing two potential issues
1) It is using a SELECT query before the UPDATE statement.
2) It needs to update all the columns (not only the changed columns), because we don't know which columns were changed in the domain object.
How do I overcome these shortcomings?
Note: there can be scenarios (like triggers) which get executed based on a specific column update, so I cannot update a column unnecessarily.
REFERENCE:
LINQ to SQL: Updating without Refresh when “UpdateCheck = Never”
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=113917
CODE
namespace RepositoryLayer
{
public interface ILijosBankRepository
{
void SubmitChangesForEntity(DomainEntitiesForBank.IBankAccount iBankAcc);
}
public class LijosSimpleBankRepository : ILijosBankRepository
{
private IBankAccountFactory bankFactory = new MySimpleBankAccountFactory();
public System.Data.Linq.DataContext Context
{
get;
set;
}
public virtual void SubmitChangesForEntity(DomainEntitiesForBank.IBankAccount iBankAcc)
{
//Does not get help from automated change tracking (due to mapping)
//Selecting the required entity
DBML_Project.BankAccount tableEntity = Context.GetTable<DBML_Project.BankAccount>().SingleOrDefault(p => p.BankAccountID == iBankAcc.BankAccountID);
if (tableEntity != null)
{
//Setting all the values to updates (except primary key)
tableEntity.Status = iBankAcc.AccountStatus;
//Type Checking
if (iBankAcc is DomainEntitiesForBank.FixedBankAccount)
{
tableEntity.AccountType = "Fixed";
}
if (iBankAcc is DomainEntitiesForBank.SavingsBankAccount)
{
tableEntity.AccountType = "Savings";
}
Context.SubmitChanges();
}
}
}
}
namespace DomainEntitiesForBank
{
public interface IBankAccount
{
int BankAccountID { get; set; }
double Balance { get; set; }
string AccountStatus { get; set; }
void FreezeAccount();
}
public class FixedBankAccount : IBankAccount
{
public int BankAccountID { get; set; }
public string AccountStatus { get; set; }
public double Balance { get; set; }
public void FreezeAccount()
{
AccountStatus = "Frozen";
}
}
}
If I understand your question, you are being passed an entity that you need to save to the database without knowing what the original values were, or which of the columns have actually changed.
If that is the case, then you have four options
You go back to the database to see the original values, i.e. perform the select, as your code is doing. This allows you to set all your entity values, and LINQ to SQL will take care of which columns have actually changed. So if none of your columns have actually changed, no update statement is issued.
You avoid the select and just update the columns. You already know how to do this (but for others, see this question and answer). Since you don't know which columns have changed, you have no option but to set them all. This will produce an update statement even if no columns have actually changed, and that can fire any database triggers. Apart from disabling the triggers, about the only thing you can do here is make sure that the triggers are written to check the old and new column values to avoid any further unnecessary updates.
You change your requirements/program so that you are given both the old and new entity values, so you can determine which columns have changed without going back to the database.
Don't use LINQ for your updates. LINQ stands for Language Integrated QUERY and it is (IMHO) brilliant at query, but I always looked on the updating/deleting features as an extra bonus, but not something which it was designed for. Also, if timing/performance is critical, then there is no way that LINQ will match properly hand-crafted SQL.
This isn't really a DDD question; from what I can tell you are asking:
Use linq to generate direct update without select
where the accepted answer was no, it's not possible, but there's a higher-voted answer suggesting that you can attach an object to your context to initiate the data context's change tracking.
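For completeness, a minimal sketch of that attach-based update in LINQ to SQL; it only skips the SELECT if the mapped columns have UpdateCheck = Never (per the reference you linked), and it still sets every column:

public virtual void SubmitChangesForEntity(DomainEntitiesForBank.IBankAccount iBankAcc)
{
    var tableEntity = new DBML_Project.BankAccount
    {
        BankAccountID = iBankAcc.BankAccountID,
        Status = iBankAcc.AccountStatus,
        AccountType = iBankAcc is DomainEntitiesForBank.FixedBankAccount ? "Fixed" : "Savings"
    };

    // Attach as modified: LINQ to SQL issues an UPDATE without a prior SELECT.
    Context.GetTable<DBML_Project.BankAccount>().Attach(tableEntity, true);
    Context.SubmitChanges();
}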
Your second point about disabling triggers has been answered here and here. But as others have commented, do you really need the triggers? Should you not be controlling these updates in code?
In general I think you're looking at premature optimization. You're using an ORM, and as part of that you're trusting L2S to make the database plumbing decisions for you. But remember, where appropriate you can use stored procedures to execute your specific SQL.