C#: On property change, run logic

I have a C# Entity which is auto generated via database first:
public partial class Zone
{
    public Guid LocationId { get; set; }
    ...
}
What I need to do is run a function, Process() whenever LocationId is changed. Now normally I would alter the setter and job done, however because this is auto generated via database first, any "manual changes to the file will be overwritten if the code is regenerated."
What would be the best approach here?
The current thinking is to create a new partial class to do something like this:
public partial class Zone
{
    private Guid _pendingLocationId;

    public Guid PendingLocationId
    {
        get { return _pendingLocationId; }
        set
        {
            Guid updatedLocation = Process(value) ?? value;
            _pendingLocationId = updatedLocation;
            LocationId = updatedLocation;
        }
    }
}
Just a note; the unfortunate reality is that there is probably zero chance of us integrating a new framework or library into the application at this stage.
In response to the possible duplicate flag: unless I have misread, this would require us re-mapping/encapsulating all of our Zone references into a new class, not only pushing this out to all the views, but also editing many LINQ queries etc. If someone can identify why this would be the preferred solution over my own suggested solution, then please let me know.

The least intrusive way to do this might be using AOP patterns, for instance using the PostSharp framework: less than two lines of code!
[NotifyPropertyChanged] // <-- Add this attribute to the class
public class Zone
{
    public Guid LocationId { get; set; }
    ...
}
To hook the changed event and add your own handler:
//Zone instance;
((INotifyPropertyChanged) instance).PropertyChanged += ZoneOnPropertyChanged;
More details can be found here.
Update: the OP mentioned there is zero chance of integrating another library into the app. I am just curious: don't you use NuGet? And what is the reason for this zero chance? In my personal view, you should use an existing library rather than reinvent the wheel if it already does the required features.
If licensing cost is the issue, or it is overkill or too heavy to introduce this bulky library just for the sake of the problem, I think Fody, a free open-source alternative to PostSharp, can be considered. More specifically the PropertyChanged.Fody package, which is standalone, compact and lightweight.
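For illustration, a rough sketch of what the PropertyChanged.Fody route could look like; the attribute name below is from recent versions of the package (older versions used a different attribute), so treat the details as an assumption to verify against the version you install:

```csharp
// PropertyChanged.Fody weaves INotifyPropertyChanged into the class at build
// time, so the generated partial entity class itself stays untouched.
[AddINotifyPropertyChangedInterface] // attribute from PropertyChanged.Fody
public partial class Zone
{
    public Guid LocationId { get; set; }
}

// Elsewhere, after the entity is materialized (the cast is needed because the
// interface is only added during IL weaving):
((INotifyPropertyChanged)zone).PropertyChanged += (s, e) =>
{
    if (e.PropertyName == nameof(Zone.LocationId))
        Process(); // call your Process logic from the question here
};
```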

I would suggest using AutoMapper.
You can write another class with the same name and properties (with INPC), but in a different namespace. Then, every time you fetch from the database you use AutoMapper to map the data into your notifying class, and every time you save data to the database you map it back.
That way you only need to change namespaces in code using your class and add code like this to your repository:
var dtos = args.Select(x => Mapper.Map<Zone>(x)).ToList();
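For completeness, a sketch of the two-way mapping setup this answer assumes, using the static `Mapper.Initialize` API from older AutoMapper versions; the namespaces here are illustrative, not from the question:

```csharp
// Two classes named Zone in different namespaces: the generated DB entity and
// the notifying copy. Configure both directions once at startup.
Mapper.Initialize(cfg =>
{
    cfg.CreateMap<Data.Zone, Ui.Zone>();  // DB entity -> INPC class
    cfg.CreateMap<Ui.Zone, Data.Zone>();  // INPC class -> DB entity
});

// Fetch: var zones = dbZones.Select(z => Mapper.Map<Ui.Zone>(z)).ToList();
// Save:  var entity = Mapper.Map<Data.Zone>(uiZone);
```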

Have a business entity mapped to your database entity (via AutoMapper) and then, in your business entity, implement the INotifyPropertyChanged interface.
Pseudo code below. This will de-couple your database from business entity and allow independent changes.
namespace DBEntity {
    public class Customer {
        public int Id { get; set; } ...
    }
}
namespace BizEntity {
    public class Customer : INotifyPropertyChanged {
        private int id;
        public int Id {
            get { return this.id; }
            set {
                this.id = value;
                NotifyPropertyChanged(nameof(Id));
            }
        }
        private void NotifyPropertyChanged(string propertyName) {
            ....
        }
    }
}

var dbCustomer = GetCustomerFromDB();
var cust = AutoMapper.Mapper.Map<DBEntity.Customer, BizEntity.Customer>(dbCustomer);
// Update the property as per biz requirement
cust.Id = 53656; // Fires the notification
Let me know if this helps.
Regarding AutoMapper as a new library, this will be a minimal change, and there's no licensing cost or steep learning curve here, allowing fast integration.
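To tie this back to the original question, a minimal hand-rolled sketch of the notifying business-side Zone, with a handler invoking the Process method from the question when LocationId changes (assuming Process is accessible from the subscribing code):

```csharp
using System;
using System.ComponentModel;

public class Zone : INotifyPropertyChanged
{
    private Guid locationId;

    public Guid LocationId
    {
        get { return locationId; }
        set
        {
            if (locationId == value) return; // avoid redundant notifications
            locationId = value;
            OnPropertyChanged(nameof(LocationId));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

// Wiring up:
// zone.PropertyChanged += (s, e) =>
// {
//     if (e.PropertyName == nameof(Zone.LocationId))
//         Process(); // your logic from the question
// };
```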


One-to-many relationship doesn't retrieve data in Entity Framework

I'm in the process of learning C# & .NET and EF (with aspnetboilerplate) and I came up with the idea to create a dummy project so I can practice. But for the last 4 hours I've been stuck on this error and hope someone here can help me.
What I created (well, at least I think I created it correctly) is 2 classes called "Ingredient" and "Master".
I want to use the "Master" class to categorize ingredients.
For example, ingredients like
Chicken breast
chicken drumstick
both belong to Meat (which is a row in the "Master" table), and here is my code
Ingredient.cs
public class Ingrident : Entity
{
    public string Name { get; set; }
    public Master Master { get; set; }
    public int MasterId { get; set; }
}
Master.cs
public class Master : Entity
{
    public string Name { get; set; }
    public List<Ingrident> Ingridents { get; set; } = new();
}
IngridientAppService.cs
public List<IngridientDto> GetIngWithParent()
{
    var result = _ingRepository.GetAllIncluding(x => x.Master);
    // I also tried this but it doesn't work:
    // var result = _ingRepository.GetAll().Where(x => x.MasterId == x.Master.Id);
    return ObjectMapper.Map<List<IngridientDto>>(result);
}
IngridientDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Ingrident))]
public class IngridientDto : EntityDto
{
    public string Name { get; set; }
    public List<MasterDto> Master { get; set; }
    public int MasterId { get; set; }
}
MasterDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Master))]
public class MasterDto : EntityDto
{
    public string Name { get; set; }
}
When I created (for an earlier practice project) an M -> M relationship, this approach with GetAllIncluding worked, but now that I have a One -> Many it won't work.
Hope someone will be able to help me or at least give me a good hint.
Have a nice day!
Straight up the examples you are probably referring to (regarding the repository etc.) are overcomplicated and for most cases, not what you'd want to implement.
The first issue I see is that while your entities are set up for a 1-to-many relationship from Master to Ingredients, your DTOs are set up from Ingredient to Masters which definitely won't map properly.
Start with the simplest thing. Get rid of the Repository and get rid of the DTOs. I'm not sure what the base class "Entity" does, but I'm guessing it exposes a common key property called "Id". For starters I'd probably ditch that as well. When it comes to primary keys there are typically two naming approaches, every table uses a PK called "Id", or each table uses a PK with the TableName suffixed with "Id". I.e. "Id" vs. "IngredientId". Personally I find the second option makes it very clear when pairing FKs and PKs given they'd have the same name.
When it comes to representing relationships through navigation properties one important detail is ensuring navigation properties are linked to their respective FK properties if present, or better, use shadow properties for the FKs.
For example with your Ingredient table, getting rid of the Entity base class:
[Table("Ingredients")]
public class Ingredient
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int IngredientId { get; set; }
    public string Name { get; set; }
    public int MasterId { get; set; }
    [ForeignKey("MasterId")]
    public virtual Master Master { get; set; }
}
This example uses EF attributes to aid in telling EF how to resolve the entity properties to respective tables and columns, as well as the relationship between Ingredient and Master. EF can work much of this out by convention, but it's good to understand and apply it explicitly because eventually you will come across situations where convention doesn't work as you expect.
Identifying the (Primary)Key and indicating it is an Identity column also tells EF to expect that the database will populate the PK automatically. (Highly recommended)
On the Master side we do something similar:
[Table("Masters")]
public class Master
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int MasterId { get; set; }
    public string Name { get; set; }
    [InverseProperty("Master")]
    public virtual ICollection<Ingredient> Ingredients { get; set; } = new List<Ingredient>();
}
Again we denote the Primary Key, and for our Ingredients collection, we tell EF what property on the other side (Ingredient) it should use to associate to this Master's list of Ingredients using the InverseProperty attribute.
Attributes are just one option to set up the relationships etc. The other options are to use configuration classes that implement IEntityTypeConfiguration<TEntity> (EF Core), or to configure them in the OnModelCreating override in the DbContext. That last option I would only recommend for very small projects as it can quickly become a bit of a God method. You can split it up into calls to various private methods, but you may as well just use IEntityTypeConfiguration classes then.
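As a sketch, the configuration-class option for the Ingredient mapping might look like this in EF Core, mirroring the attributes above (names are illustrative):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class IngredientConfiguration : IEntityTypeConfiguration<Ingredient>
{
    public void Configure(EntityTypeBuilder<Ingredient> builder)
    {
        builder.ToTable("Ingredients");
        builder.HasKey(i => i.IngredientId);
        builder.HasOne(i => i.Master)          // same relationship the
               .WithMany(m => m.Ingredients)   // attributes expressed
               .HasForeignKey(i => i.MasterId);
    }
}

// Registered once in the DbContext:
// protected override void OnModelCreating(ModelBuilder modelBuilder)
//     => modelBuilder.ApplyConfiguration(new IngredientConfiguration());
```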
Now when you go to fetch Ingredients with their Master, or a Master with its Ingredients:
using (var context = new AppDbContext())
{
    var ingredients = context.Ingredients
        .Include(x => x.Master)
        .Where(x => x.Master.Name.Contains("chicken"))
        .ToList();
    // or
    var masters = context.Masters
        .Include(x => x.Ingredients)
        .Where(x => x.Name.Contains("chicken"))
        .ToList();
    // ...
}
Repository patterns are a more advanced concept that have a few good reasons to implement, but for the most part they are not necessary and an anti-pattern within EF implementations. I consider Generic repositories to always be an anti-pattern for EF implementations. I.e. Repository<Ingredient> The main reason not to use repositories, especially Generic repositories with EF is that you are automatically increasing the complexity of your implementation and/or crippling the capabilities that EF can bring to your solution. As you see from working with your example, simply getting across an eager load through to the repository means writing in complex Expression<Func<TEntity>> parameters, and that just covers eager loading. Supporting projection, pagination, sorting, etc. adds even more boiler-plate complexity or limits your solution and performance without these capabilities that EF can provide out of the box.
Some good reasons to consider studying up on repository implementations /w EF:
Facilitate unit testing. (Repositories are easier to mock than DbContexts/DbSets)
Centralizing low-level data rules such as tenancy, soft deletes, and authorization.
Some bad (albeit very common) reasons to consider repositories:
Abstracting code from references or knowledge of the dependency on EF.
Abstracting the code so that EF could be substituted out.
Projecting to DTOs or ViewModels is an important aspect of building efficient and secure solutions with EF. It's not clear what "ObjectMapper" is, whether it is an Automapper Mapper instance or something else. I would highly recommend starting to grasp projection by using Linq's Select syntax to fill in a desired DTO from the models.
The first key difference when using projection properly is that when you project an object graph, you do not need to worry about eager loading related entities. Any related entity / property referenced in your projection (Select) will automatically be loaded as necessary.
Later, if you want to leverage a tool like Automapper to help remove the clutter of Select statements, you will want to set up your mapping configuration and then use Automapper's ProjectTo method rather than Map. ProjectTo works with EF's IQueryable implementation to resolve your mapping down to SQL just like Select does, where Map would need everything eager loaded in order to populate related data. ProjectTo and Select can result in more efficient queries that can better take advantage of indexing than eager loading entire object graphs. (Less data over the wire between database and server/app.) Map is still very useful in scenarios where you want to copy values back from a DTO into a loaded entity.
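For example, a sketch of projecting Ingredients straight into the DTO with Select, using the entities and DTO from the question (the filter value is illustrative):

```csharp
// No Include needed: EF composes the join into SQL from the projection itself.
var dtos = context.Ingredients
    .Where(i => i.Master.Name == "Meat")   // navigate freely inside the query
    .Select(i => new IngridientDto
    {
        Name = i.Name,
        MasterId = i.MasterId
    })
    .ToList();
```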
Do it like this
public class Ingrident : Entity
{
    public string Name { get; set; }
    [ForeignKey(nameof(MasterId))]
    public Master Master { get; set; }
    public int MasterId { get; set; }
}

Telling EF 6 to Ignore a Private Property

I'm using Entity Framework 6.0.2 to map some simple hand-coded models to an existing database structure. The primary model at the moment is:
public class Occurrence
{
    public int ID { get; set; }
    public Guid LegacyID { get; set; }
    public string Note { get; set; }
    public virtual ICollection<OccurrenceHistory> History { get; set; }
}
(The OccurrenceHistory model isn't really relevant to this, but that part is working fine whereby EF loads up the child records for this model.)
The mapping is simple, and I try to be as explicit as I can be (since as the application grows there will be some less-intuitive mapping):
public class OccurrenceMap : EntityTypeConfiguration<Occurrence>
{
    public OccurrenceMap()
    {
        ToTable("Occurrence");
        HasKey(o => o.ID);
        Property(o => o.ID).IsRequired().HasColumnName("ID");
        Property(o => o.LegacyID).IsRequired().HasColumnName("LegacyID");
        Property(o => o.Note).IsUnicode().IsOptional().HasColumnName("Note");
    }
}
But if I add a private property to the model, EF tries to map it to the database. Something like this:
private OccurrenceHistory CurrentHistory { get; set; }
(Internal to the model I would have some logic for maintaining that field, for other private operations.) When EF generates a SELECT statement it ends up looking for a column called CurrentHistory_ID which of course doesn't exist.
I can make the property public and set the mapping to ignore it:
Ignore(o => o.CurrentHistory);
But I don't want the property to be public. The model is going to internally track some information which the application code shouldn't see.
Is there a way to tell EF to just ignore any and all private members? Even if it's on a per-map basis? I'd particularly like to do this without having to add EF data annotations to the models themselves, since that would not only be a bit of a leaky abstraction (persistence-ignorant models would then have persistence information on them) but it would also mean that the domain core assembly which holds the models would carry a reference to EntityFramework.dll everywhere it goes, which isn't ideal.
A colleague pointed me to a blog post that led to a very practical approach.
So what I have is a private property:
private OccurrenceHistory CurrentHistory { get; set; }
The core of the problem is that I can't use that in my mapping:
Ignore(o => o.CurrentHistory);
Because, clearly, the property is private and can't be accessed in this context. What the blog post suggests is exposing a public static expression which references the private property:
private OccurrenceHistory CurrentHistory { get; set; }
public static readonly Expression<Func<Occurrence, OccurrenceHistory>> CurrentHistoryExpression = o => o.CurrentHistory;
I can then reference that in the mapping:
Ignore(Occurrence.CurrentHistoryExpression);
As with anything, it's a mix of pros and cons. But in this case I think the pros far outweigh the cons.
Pros:
The domain core assembly doesn't need to carry a reference to EntityFramework.dll.
The persistence mapping is entirely encapsulated within the DAL assembly.
Cons:
Models need to expose a little information about their inner workings.
The con breaks encapsulation, but only slightly. Consuming code still can't access that property or its value on instances, it can only see that the property exists statically. Which, really, isn't a big deal, since developers can see it anyway. I feel that the spirit of encapsulation is still preserved on any given instance of the model.

Overriding SaveChanges in Entity Framework 5 Code First to replicate the behavior of an old legacy library

Our company ships a suite of various applications that manipulate data in a database. Each application has its specific business logic, but all applications share a common subset of business rules. The common stuff is encapsulated in a bunch of legacy COM DLLs, written in C++, which use "classic ADO" (they usually call stored procedures, sometimes they use dynamic SQL). Most of these DLLs have XML-based methods (not to mention the proprietary-format-based methods!) to create, edit, delete and retrieve objects, and also extra actions such as methods which copy and transform many entities quickly.
The middleware DLLs are now very old; our application developers want a new object-oriented (not XML-oriented) middleware that can be easily used by C# applications.
Many people in the company say that we should forget old paradigms and move to new cool stuff such as Entity Framework. They are intrigued by the simplicity of POCOs and they would like to use LINQ to retrieve data (the XML-based query methods of the DLLs are not so easy to use and will never be as flexible as LINQ).
So I'm trying to create a mock-up for a simplified scenario (the real scenario is much more complex, and here I'll post just a simplified subset of the simplified scenario!). I'm using Visual Studio 2010, Entity Framework 5 Code First, SQL Server 2008 R2.
Please have mercy if I make stupid mistakes, I'm a newbie to Entity Framework.
Since I have many different doubts, I'll post them in separate threads.
This is the first one. Legacy XML methods have a signature like this:
bool Edit(string xmlstring, out string errorMessage)
With a format like this:
<ORDER>
  <ID>234</ID>
  <NAME>SuperFastCar</NAME>
  <QUANTITY>3</QUANTITY>
  <LABEL>abc</LABEL>
</ORDER>
The Edit method implemented the following business logic: when a Quantity is changed, an "automatic scaling" must be applied to all Orders which have the same Label.
E.g. there are three orders: OrderA has quantity = 3, label = X. OrderB has quantity = 4, label = X. OrderC has quantity = 5, label = Y.
I call the Edit method supplying a new quantity = 6 for OrderA, i.e. I'm doubling OrderA's quantity. Then, according to the business logic, OrderB's quantity must be automatically doubled, and must become 8, because OrderB and OrderA have the same label. OrderC must not be changed because it has a different label.
How can I replicate this with POCO classes and Entity Framework? It's a problem because the old Edit method can change only one order at a time, while
Entity Framework can change a lot of Orders when SaveChanges is called. Furthermore, a single call to SaveChanges can also create new Orders.
Temporary assumptions, just for this test: 1) if many Order Quantities are changed at the same time, and the scaling factor is not the same for all of them, NO scaling occurs; 2) newly added Orders are not automatically scaled even if they have the same label of a scaled order.
I tried to implement it by overriding SaveChanges.
POCO class:
using System;

namespace MockOrders
{
    public class Order
    {
        public Int64 Id { get; set; }
        public string Name { get; set; }
        public string Label { get; set; }
        public decimal Quantity { get; set; }
    }
}
Migration file (to create indexes):
namespace MockOrders.Migrations
{
    using System;
    using System.Data.Entity.Migrations;

    public partial class UniqueIndexes : DbMigration
    {
        public override void Up()
        {
            CreateIndex("dbo.Orders", "Name", true /* unique */, "myIndex1_Order_Name_Unique");
            CreateIndex("dbo.Orders", "Label", false /* NOT unique */, "myIndex2_Order_Label");
        }

        public override void Down()
        {
            DropIndex("dbo.Orders", "myIndex2_Order_Label");
            DropIndex("dbo.Orders", "myIndex1_Order_Name_Unique");
        }
    }
}
DbContext:
using System;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;
using System.Linq;

namespace MockOrders
{
    public class MyContext : DbContext
    {
        public MyContext() : base(GenerateConnection())
        {
        }

        private static string GenerateConnection()
        {
            var sqlBuilder = new System.Data.SqlClient.SqlConnectionStringBuilder();
            sqlBuilder.DataSource = @"localhost\aaaaaa";
            sqlBuilder.InitialCatalog = "aaaaaa";
            sqlBuilder.UserID = "aaaaa";
            sqlBuilder.Password = "aaaaaaaaa!";
            return sqlBuilder.ToString();
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations.Add(new OrderConfig());
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();
            var groupByLabel = from changedEntity in ChangeTracker.Entries<Order>()
                               where changedEntity.State == System.Data.EntityState.Modified
                                   && changedEntity.Property(o => o.Quantity).IsModified
                                   && changedEntity.Property(o => o.Quantity).OriginalValue != 0
                                   && !String.IsNullOrEmpty(changedEntity.Property(o => o.Label).CurrentValue)
                               group changedEntity by changedEntity.Property(o => o.Label).CurrentValue into x
                               select new { Label = x.Key, List = x };

            foreach (var labeledGroup in groupByLabel)
            {
                var withScalingFactor = from changedEntity in labeledGroup.List
                                        select new
                                        {
                                            ChangedEntity = changedEntity,
                                            ScalingFactor = changedEntity.Property(o => o.Quantity).CurrentValue
                                                / changedEntity.Property(o => o.Quantity).OriginalValue
                                        };
                var groupByScalingFactor = from t in withScalingFactor
                                           group t by t.ScalingFactor into g
                                           select g;

                // If there are several different scaling factors for this label, skip automatic scaling.
                if (groupByScalingFactor.Count() == 1)
                {
                    decimal scalingFactor = groupByScalingFactor.First().Key;
                    if (scalingFactor != 1)
                    {
                        var query = from oo in this.AllTheOrders where oo.Label == labeledGroup.Label select oo;
                        foreach (Order ord in query)
                        {
                            if (this.Entry(ord).State != System.Data.EntityState.Modified
                                && this.Entry(ord).State != System.Data.EntityState.Added)
                            {
                                ord.Quantity = ord.Quantity * scalingFactor;
                            }
                        }
                    }
                }
            }
            return base.SaveChanges();
        }

        public DbSet<Order> AllTheOrders { get; set; }
    }

    class OrderConfig : EntityTypeConfiguration<Order>
    {
        public OrderConfig()
        {
            Property(o => o.Name).HasMaxLength(200).IsRequired();
            Property(o => o.Label).HasMaxLength(400);
        }
    }
}
It seems to work (barring bugs of course), but this was an example with just 1 class: a real production application may have hundreds of classes!
I'm afraid that in a real scenario, with a lot of constraints and business logic, the override of SaveChanges could quickly become long, cluttered and error-prone.
Some colleagues are also concerned about performance. In our legacy DLLs, a lot of business logic (such as "automatic" actions) lives in stored procedures, some colleagues are worried that the SaveChanges-based approach may introduce too many round-trips and hinder performance.
In the override of SaveChanges we could also invoke stored procedures, but what about transactional integrity? What if I make changes to the database before I call "base.SaveChanges()", and "base.SaveChanges()" fails?
Is there a different approach? Am I missing something?
Thank you very much!
Demetrio
p.s. By the way, is there a difference between overriding SaveChanges and registering to "SavingChanges" event? I read this document but it does not explain whether there's a difference:
http://msdn.microsoft.com/en-us/library/cc716714(v=vs.100).aspx
This post:
Entity Framework SaveChanges - Customize Behavior?
says that "when overriding SaveChanges you can put custom logic before and AFTER calling base.SaveChanges". But are there other caveats/advantages/drawbacks?
I'd say this logic belongs either in your MockOrders.Order class, in a class from a higher layer which uses your Order class (e.g. BusinessLogic.Order) or in a Label class. Sounds like your label acts as a joining attribute so, without knowing the particulars, I'd say pull it out and make it an entity of its own, this will give you navigation properties so you can more naturally access all Orders with the same label.
If modifying the DB to normalise out Labels is not a goer, build a view and bring that into your entity model for this purpose.
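A sketch of what pulling Label out into its own entity could look like under EF Code First conventions (the names and the nullable FK are illustrative, not from the original schema):

```csharp
using System;
using System.Collections.Generic;

public class Label
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public Int64 Id { get; set; }
    public string Name { get; set; }
    public decimal Quantity { get; set; }
    public int? LabelId { get; set; }        // FK, nullable if a label is optional
    public virtual Label Label { get; set; } // navigation property
}

// Scaling all orders sharing a label then becomes a navigation walk:
// foreach (var other in order.Label.Orders) { /* apply scaling factor */ }
```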
I've had to do something similar, but I've created an IPrepForSave interface, and implemented that interface for any entities that need to do some business logic before they're saved.
The interface (pardon the VB.NET):
Public Interface IPrepForSave
    Sub PrepForSave()
End Interface
The dbContext.SaveChanges override:
Public Overloads Overrides Function SaveChanges() As Integer
    ChangeTracker.DetectChanges()

    '** Any entities that implement IPrepForSave should have their PrepForSave method called before saving.
    Dim changedEntitiesToPrep = From br In ChangeTracker.Entries(Of IPrepForSave)()
                                Where br.State = EntityState.Added OrElse br.State = EntityState.Modified
                                Select br.Entity

    For Each br In changedEntitiesToPrep
        br.PrepForSave()
    Next

    Return MyBase.SaveChanges()
End Function
And then I can keep the business logic in the Entity itself, in the implemented PrepForSave() method:
Partial Public Class MyEntity
    Implements IPrepForSave

    Public Sub PrepForSave() Implements IPrepForSave.PrepForSave
        'Do Stuff Here...
    End Sub
End Class
Note that I place some restrictions on what can be done in the PrepForSave() method:
Any changes to the entity cannot make the entity validation logic fail, because this will be called after the validation logic is called.
Database access should be kept to a minimum, and should be read-only.
Any entities that don't need to do business logic before saving should not implement this interface.
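Since the question itself is about C#, here is a rough C# translation of the interface approach above (a sketch under EF 5/6 assumptions, mirroring the VB.NET code rather than adding anything new):

```csharp
using System.Data.Entity;
using System.Linq;

public interface IPrepForSave
{
    void PrepForSave();
}

public class MyContext : DbContext
{
    public override int SaveChanges()
    {
        ChangeTracker.DetectChanges();

        // Entries(Of T) filters tracked entities assignable to the interface.
        var toPrep = ChangeTracker.Entries<IPrepForSave>()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified)
            .Select(e => e.Entity)
            .ToList();

        foreach (var entity in toPrep)
            entity.PrepForSave();

        return base.SaveChanges();
    }
}
```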

How to use Fluent NHibernate Validator when using auto mapping?

I've just modelled a small database using Fluent NHibernate and the auto mapping feature. Now I'm wondering how I work with validation. In the past I've decorated my classes with attributes, but the purpose of this by-convention automapping is to keep things clean.
I do have a couple override files which look like this:
public class EventMappingOverride : IAutoMappingOverride<Event>
{
    public void Override(AutoMapping<Event> mapping)
    {
        mapping.Map(x => x.EventType, "TypeID").CustomType(typeof(EventType));
        mapping.Map(x => x.EventStatus, "StatusID").CustomType(typeof(EventStatus));
        mapping.HasMany(x => x.EventDates).KeyColumn("EventID");
    }
}
Is this where I would put my validation rules? If so, what does that look like and is there really even a point to using the auto mapping (if my override files are going to be elaborate anyway)?
Thanks.
To clarify further:
My entities look like this as of now:
namespace Business.Data
{
    public class Event
    {
        public virtual int Id { get; set; }
        public virtual string Title { get; set; }
        public virtual EventStatus EventStatus { get; set; }
        public virtual EventType EventType { get; set; }
        public virtual IList<EventDate> EventDates { get; set; }
    }
}
I would like to keep them looking like that: just plain objects, so in the future we can potentially switch out or upgrade the ORM and still have these nice clean objects.
However, when it comes to using NHibernate Validator (part of NHContrib) I'm not sure how to incorporate it without littering the properties with attributes. I guess this is more a question of architecture. I could use a different validation framework as well, but I want it to be tied in with NHibernate so that it won't insert/update invalid records. Any opinions appreciated!
My opinion is:
Validation is part of the business layer, and the database schema follows from that need. So if you need an email string column in your DB, you should not rely on a DB framework to validate it, especially since, as you said, you may later switch ORMs and would then lose that work.
Keep validation in the business/higher layers, and let the DB do simple queries/insertions; remember NHibernate is already a bit complicated to get the hang of, so keep it simple.
To answer your question: if you don't want to litter your entities, use the XML validation as described here:
http://nhforge.org/wikis/validator/nhibernate-validator-1-0-0-documentation.aspx

Entity Framework - Multiple Project support

I am looking into migrating a large project to Entity Framework 4.0 but am not sure if it can handle my inheritance scenario.
I have several projects that inherit from an object in the “main” project. Here is a sample base class:
namespace People
{
    public class Person
    {
        public int age { get; set; }
        public String firstName { get; set; }
        public String lastName { get; set; }
    }
}
and one of the sub-classes:
namespace People.LawEnforcement
{
    public class PoliceOfficer : People.Person
    {
        public string badgeNumber { get; set; }
        public string precinct { get; set; }
    }
}
And this is what the project layout looks like:
People, with People.Education and People.LawEnforcement (project layout diagram: http://img51.imageshack.us/img51/7293/efdemo.png)
Some customers of the application will use classes from People.LawEnforcement, other users will use People.Education, and some will use both. I only ship the assemblies that the users will need. So the assemblies act somewhat like plug-ins in that they add features to the core app.
Is there anyway in Entity Framework to support this scenario?
Based on this SO question I'm thinking something like this might work:
ctx.MetadataWorkspace.LoadFromAssembly(typeof(PoliceOfficer).Assembly);
But even if that works, it seems as if my EDMX file will need to know about all the projects. I would rather have each project contain the metadata for the classes in that project, but I'm not sure if that is possible.
If this isn't possible with entity framework is there another solution (NHibernate, Active Record, etc.) that would work?
Yes this is possible, using the LoadFromAssembly(..) method you've already found.
... but it will only work if you have a specialized model (i.e. EDMX) for each distinct type of client application.
This is because EF (and most other ORMs) require a class for each entity in the model, so if some clients don't know about some classes, you will need a model without the corresponding entities -- i.e. a customized EDMX for each scenario.
To make it easier to create a new model for each client application, if I were you I'd use Code-Only following the best practices laid out on my blog, to make it easy to grab only the fragments of the model you actually need.
Hope this helps
Alex
Alex is correct (+1), but I'd strongly urge you to reconsider your model. In the real world, a police officer is not a subtype of a person. Rather, it's an attribute of that person's employment. I think programmers frequently tend to over-emphasize inheritance at the expense of composition in object oriented design, but it's especially problematic in O/R mapping. Remember that an object instance can only ever have one type. When that object is stored in the database, the instance can only have that type for as long as it exists, across multiple application sessions. What if a person had two jobs, as a police officer and a teacher? Perhaps that scenario is unlikely, but the general problem is more common than you might expect.
More relevant to your question, I think you can solve your actual problem at hand by making your mapped entity model more generic, and your application-specific data projections on the entities rather than entities themselves. Consider entities like:
public class JobType
{
    public Guid Id { get; set; }
    // ...
}
public class Job
{
    public JobType JobType { get; set; }
    public string EmployeeNumber { get; set; }
}
public class Person
{
    public EntityCollection<Job> Jobs { get; set; }
}
Now your law enforcement app can do:
var po = from p in Context.People
         let poJob = p.Jobs.Where(j => j.JobType.Id == JobType.PoliceOfficerId).FirstOrDefault()
         where poJob != null
         select new PoliceOfficer
         {
             Id = p.Id,
             BadgeNumber = poJob.EmployeeNumber
         };
Where PoliceOfficer is just a POCO, not a mapped entity of any kind.
And with that you've achieved your goal of having a common data model, but having the "job type specific" elements in separate projects.
