I am looking into migrating a large project to Entity Framework 4.0, but I am not sure if it can handle my inheritance scenario.
I have several projects containing classes that inherit from a class in the “main” project. Here is a sample base class:
namespace People
{
public class Person
{
public int age { get; set; }
public String firstName { get; set; }
public String lastName { get; set; }
}
}
and one of the sub-classes:
namespace People.LawEnforcement
{
public class PoliceOfficer : People.Person
{
public string badgeNumber { get; set; }
public string precinct { get; set; }
}
}
And this is what the project layout looks like:
People - People.Education - People.LawEnforcement
Some customers of the application will use classes from People.LawEnforcement, other users will use People.Education, and some will use both. I only ship the assemblies that the users will need, so the assemblies act somewhat like plug-ins in that they add features to the core app.
Is there any way in Entity Framework to support this scenario?
Based on this SO question I think something like this might work:
ctx.MetadataWorkspace.LoadFromAssembly(typeof(PoliceOfficer).Assembly);
But even if that works, it seems as if my EDMX file will need to know about all the projects. I would rather have each project contain the metadata for the classes in that project, but I'm not sure if that is possible.
If this isn't possible with entity framework is there another solution (NHibernate, Active Record, etc.) that would work?
Yes this is possible, using the LoadFromAssembly(..) method you've already found.
... but it will only work if you have a specialized model (i.e. EDMX) for each distinct type of client application.
This is because EF (and most other ORMs) require a class for each entity in the model, so if some clients don't know about some classes, you will need a model without the corresponding entities -- i.e. a customized EDMX for each scenario.
To make it easier to create a new model for each client application, if I were you I'd use Code-Only, following the best practices laid out on my blog, to make it easy to grab only the fragments of the model you actually need.
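If it helps picture the approach, here is a rough sketch of the per-client idea using the later Code First / DbContext API (the EF 4 Code-Only CTP syntax differs, so treat this as illustrative rather than the exact API):

using System.Data.Entity;

// Each client application ships its own context that registers only the
// entities that client actually deploys; the core People model stays shared.
public class LawEnforcementContext : DbContext
{
    public DbSet<Person> People { get; set; }
    public DbSet<PoliceOfficer> PoliceOfficers { get; set; } // law-enforcement build only
}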
Hope this helps
Alex
Alex is correct (+1), but I'd strongly urge you to reconsider your model. In the real world, a police officer is not a subtype of a person. Rather, it's an attribute of that person's employment. I think programmers frequently tend to over-emphasize inheritance at the expense of composition in object oriented design, but it's especially problematic in O/R mapping. Remember that an object instance can only ever have one type. When that object is stored in the database, the instance can only have that type for as long as it exists, across multiple application sessions. What if a person had two jobs, as a police officer and a teacher? Perhaps that scenario is unlikely, but the general problem is more common than you might expect.
More relevant to your question, I think you can solve your actual problem by making your mapped entity model more generic, and making your application-specific types projections over the entities rather than mapped entities themselves. Consider entities like:
public class JobType
{
public Guid Id { get; set; }
// ...
}
public class Job
{
public JobType JobType { get; set; }
public string EmployeeNumber { get; set; }
}
public class Person
{
public EntityCollection<Job> Jobs { get; set; }
}
Now your law enforcement app can do:
var po = from p in Context.People
let poJob = p.Jobs.Where(j => j.JobType.Id == JobType.PoliceOfficerId).FirstOrDefault()
where poJob != null
select new PoliceOfficer
{
Id = p.Id,
BadgeNumber = poJob.EmployeeNumber
};
Where PoliceOfficer is just a POCO, not a mapped entity of any kind.
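For reference, that projection target can be as small as this (a sketch; the Id property and its type are assumed to match the Person entity's key):

// Lives in the law-enforcement project; EF never maps it.
public class PoliceOfficer
{
    public Guid Id { get; set; } // key type assumed
    public string BadgeNumber { get; set; }
}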
And with that you've achieved your goal of having a common data model, but having the "job type specific" elements in separate projects.
Related
I'm in the process of learning C# & .NET and EF (with aspnetboilerplate) and I came up with the idea of creating a dummy project so I can practice. But for the last 4 hours I've been stuck on this error and hope someone here can help me.
What I created (well, at least I think I created it correctly) is 2 classes called "Ingredient" and "Master".
I want to use the "Master" class to categorize Ingredients.
For example, ingredients like
Chicken breast
Chicken drumstick
both belong to Meat (which is a row in the "Master" table), and here is my code:
Ingredient.cs
public class Ingrident : Entity
{
public string Name { get; set; }
public Master Master { get; set; }
public int MasterId { get; set; }
}
Master.cs
public class Master : Entity
{
public string Name { get; set; }
public List<Ingrident> Ingridents { get; set; } = new();
}
IngridientAppService.cs
public List<IngridientDto> GetIngWithParent()
{
var result = _ingRepository.GetAllIncluding(x => x.Master);
//I also tried this, but it doesn't work
// var result = _ingRepository.GetAll().Where(x => x.MasterId == x.Master.Id);
return ObjectMapper.Map<List<IngridientDto>>(result);
}
IngridientDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Ingrident))]
public class IngridientDto : EntityDto
{
public string Name { get; set; }
public List<MasterDto> Master { get; set; }
public int MasterId { get; set; }
}
MasterDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Master))]
public class MasterDto : EntityDto
{
public string Name { get; set; }
}
When I created (for my last practice) an M -> M relationship, this approach with GetAllIncluding worked, but now that I have a One -> Many relationship it won't work.
Hope someone will be able to help me or at least give me a good hint.
Have a nice day!
Straight up, the examples you are probably referring to (regarding the repository etc.) are overcomplicated and, for most cases, not what you'd want to implement.
The first issue I see is that while your entities are set up for a 1-to-many relationship from Master to Ingredients, your DTOs are set up from Ingredient to Masters, which definitely won't map properly.
Start with the simplest thing. Get rid of the Repository and get rid of the DTOs. I'm not sure what the base class "Entity" does, but I'm guessing it exposes a common key property called "Id". For starters I'd probably ditch that as well. When it comes to primary keys there are typically two naming approaches, every table uses a PK called "Id", or each table uses a PK with the TableName suffixed with "Id". I.e. "Id" vs. "IngredientId". Personally I find the second option makes it very clear when pairing FKs and PKs given they'd have the same name.
When it comes to representing relationships through navigation properties one important detail is ensuring navigation properties are linked to their respective FK properties if present, or better, use shadow properties for the FKs.
For example, with your Ingredient table, getting rid of the Entity base class:
[Table("Ingredients")]
public class Ingredient
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int IngredientId { get; set; }
public string Name { get; set; }
public int MasterId { get; set; }
[ForeignKey("MasterId")]
public virtual Master Master { get; set; }
}
This example uses EF attributes to aid in telling EF how to resolve the entity properties to respective tables and columns, as well as the relationship between Ingredient and Master. EF can work much of this out by convention, but it's good to understand and apply it explicitly because eventually you will come across situations where convention doesn't work as you expect.
Identifying the (Primary)Key and indicating it is an Identity column also tells EF to expect that the database will populate the PK automatically. (Highly recommended)
On the Master side we do something similar:
[Table("Masters")]
public class Master
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int MasterId { get; set; }
public string Name { get; set; }
[InverseProperty("Master")]
public virtual ICollection<Ingredient> Ingredients { get; set; } = new List<Ingredient>();
}
Again we denote the Primary Key, and for our Ingredients collection, we tell EF what property on the other side (Ingredient) it should use to associate to this Master's list of Ingredients using the InverseProperty attribute.
Attributes are just one option for setting up the relationships etc. The other options are to use configuration classes that implement IEntityTypeConfiguration<TEntity> (EF Core), or to configure them as part of the OnModelCreating method in the DbContext. That last option I would only recommend for very small projects, as it can quickly become a bit of a God method. You can split it up into calls to various private methods, but you may as well just use IEntityTypeConfiguration classes then (a sketch follows).
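For example, a minimal configuration class for the Ingredient entity above might look like this (a sketch, expressing the same mapping as the attributes):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// Fluent equivalent of the [Table], [Key] and [ForeignKey] attributes above.
public class IngredientConfiguration : IEntityTypeConfiguration<Ingredient>
{
    public void Configure(EntityTypeBuilder<Ingredient> builder)
    {
        builder.ToTable("Ingredients");
        builder.HasKey(i => i.IngredientId);
        builder.Property(i => i.IngredientId).ValueGeneratedOnAdd();
        builder.HasOne(i => i.Master)
            .WithMany(m => m.Ingredients)
            .HasForeignKey(i => i.MasterId);
    }
}

Configurations like this are registered in OnModelCreating via modelBuilder.ApplyConfiguration(new IngredientConfiguration()), or picked up in bulk with ApplyConfigurationsFromAssembly.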
Now when you go to fetch Ingredients with their Master, or a Master with its Ingredients:
using (var context = new AppDbContext())
{
var ingredients = context.Ingredients
.Include(x => x.Master)
.Where(x => x.Master.Name.Contains("chicken"))
.ToList();
// or
var masters = context.Masters
.Include(x => x.Ingredients)
.Where(x => x.Name.Contains("chicken"))
.ToList();
// ...
}
Repository patterns are a more advanced concept that has a few good reasons to implement, but for the most part they are not necessary and an anti-pattern within EF implementations. I consider Generic repositories (i.e. Repository<Ingredient>) to always be an anti-pattern for EF implementations.
The main reason not to use repositories, especially Generic repositories, with EF is that you are automatically increasing the complexity of your implementation and/or crippling the capabilities that EF can bring to your solution. As you've seen from working with your example, simply getting an eager load across to the repository means writing complex Expression<Func<TEntity, object>> parameters, and that only covers eager loading. Supporting projection, pagination, sorting, etc. adds even more boilerplate complexity, or limits your solution and performance by going without these capabilities that EF provides out of the box.
Some good reasons to consider studying up on repository implementations /w EF:
Facilitate unit testing. (Repositories are easier to mock than DbContexts/DbSets)
Centralizing low-level data rules such as tenancy, soft deletes, and authorization.
Some bad (albeit very common) reasons to consider repositories:
Abstracting code from references or knowledge of the dependency on EF.
Abstracting the code so that EF could be substituted out.
Projecting to DTOs or ViewModels is an important aspect of building efficient and secure solutions with EF. It's not clear what "ObjectMapper" is, whether it is an AutoMapper Mapper instance or something else. I would highly recommend starting to grasp projection by using LINQ's Select syntax to fill in a desired DTO from the models.
The first key difference when using projection properly is that when you project an object graph, you do not need to worry about eager loading related entities. Any related entity / property referenced in your projection (Select) will automatically be loaded as necessary.
Later, if you want to leverage a tool like AutoMapper to help remove the clutter of Select statements, you will want to configure your mapping, then use AutoMapper's ProjectTo method rather than Map (see the sketch below). ProjectTo works with EF's IQueryable implementation to resolve your mapping down to SQL just like Select does, whereas Map would need everything eager loaded in order to populate related data. ProjectTo and Select can result in more efficient queries that take better advantage of indexing than eager loading entire object graphs. (Less data over the wire between database and server/app.) Map is still very useful in scenarios where you want to copy values back from a DTO into a loaded entity.
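As a sketch of the ProjectTo approach (assuming an AutoMapper MapperConfiguration, and a flattened IngredientDto with a MasterName property, which is hypothetical here):

using AutoMapper;
using AutoMapper.QueryableExtensions;

var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Ingredient, IngredientDto>()
        .ForMember(d => d.MasterName, opt => opt.MapFrom(s => s.Master.Name)));

// ProjectTo resolves down to SQL: only Name and Master.Name are selected,
// with no Include / eager loading required.
var dtos = context.Ingredients
    .Where(i => i.Master.Name == "Meat")
    .ProjectTo<IngredientDto>(config)
    .ToList();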
Do it like this
public class Ingrident : Entity
{
public string Name { get; set; }
[ForeignKey(nameof(MasterId))]
public Master Master { get; set; }
public int MasterId { get; set; }
}
I have some kind of misunderstanding of working with entities in the ASP.NET MVC concept.
I am pretty new to ASP.NET MVC, and when I was studying I was told that whenever I work with databases I have to create a Model which will be a copy of the one generated by EF, and that sending to views and all the calculations should be done with that model.
For example, if I have an entity Person with something like:
public int EmployeeId { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public Nullable<System.DateTime> DateOfBirth { get; set; }
I have to generate a class (EmployeeViewModel) in my Models folder:
public class EmployeeViewModel
{
public int EmployeeId { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public Nullable<System.DateTime> DateOfBirth { get; set; }
}
and in my controller I usually do something like this to get data about one Employee from the database into my model (something similar for a list of employees):
var employee = db.Employees.Where(item => item.EmployeeId == someId).Select(item => new EmployeeViewModel
{
    EmployeeId = item.EmployeeId,
    FirstName = item.FirstName,
    LastName = item.LastName,
    DateOfBirth = item.DateOfBirth
}).FirstOrDefault();
This code works, but the concept of working with this custom model, which is just a copy of an entity, seems strange. I understand that creating custom models is useful when the custom model differs from the entity.
So do I have to do it anyway, or can I work directly with entities in some cases? I would be glad if you could recommend some articles or something else to get the idea of how I should work with databases in my MVC projects, so that it is correct in the sense of the original concept.
You don't have to create copies of all of your models. In fact, if they are nothing more than carbon copies which don't add any value of any kind, then you really shouldn't be making them.
Just about any C# class can be the model that is bound to the view. If your Entity Framework model fits the business need of the view, then there's little-to-no value in adding a translation layer between the two.
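For example, a controller action can hand the EF entity straight to the view (a minimal sketch, assuming a typical MVC controller with a db context field):

// No intermediate view model; the EF entity is the view's model.
public ActionResult Details(int id)
{
    var employee = db.Employees.Find(id);
    if (employee == null)
        return HttpNotFound();
    return View(employee); // the view declares @model Employee
}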
There is often a personal preference at play, where the developer wants to keep the database models and the application models separate. The objective need for this separation depends on the architecture of the system being built (no details of which are described in this question).
Conversely, there's equally an argument to be made for a single set of dependency-free business models which are used throughout every layer of the application. This side is often proposed by asking the question: if your use case (what the user does in the view) doesn't line up with your system architecture, shouldn't the latter be changed to better facilitate the former?
In short, you certainly can create a hard line of separation between different classifications of models for different layers of the application. But whether or not you should is a larger matter. If in your case doing so creates additional work and additional complexity without any additional value, then that seems to indicate that it's not necessary.
I'm using AutoMapper to map a lot of Entity models to View Models that I use in my controllers and views (.NET MVC).
There are a lot of relations in the DB, so our VMs have a lot of children (which have children, and so on).
public class InvoiceVMFull : VMBase
{
public int Id { get; set; }
public InvoiceType InvoiceType { get; set; }
public string Reference { get; set; }
//.... shortened code for readability
// list all entity fields
public List<string> InvoiceMainAddress { get; set; }
public List<string> InvoiceDlvAddress { get; set; }
}
It works just fine, but it is very slow and always loads all relations from the DB, whereas I usually only need a small part of that data...
So I created a light VM that I want to use for the majority of our pages.
public class InvoiceVMLite : VMBase
{
public int Id { get; set; }
public string Reference { get; set; }
//.... shortened code for readability
// list only some of the entity fields (most used)
public StoredFileVM InvoiceFile { get; set; }
}
The problem is I can't find how:
to map one Entity object to the two VMs, and how to choose the right one (to load from the DB) depending on the context (the page or event called)
to map two VMs to one entity and save (to the DB) only the fields that are present in the VM used, without erasing the absent ones
I tried to create the mappings for both VMs:
Mapper.CreateMap<Invoice, InvoiceVMLite>();
Mapper.CreateMap<Invoice, InvoiceVMFull>();
But when I try to call the mapping for Lite, it doesn't exist (it has been overridden by Full):
Mapper.Map(invoice, InvoiceEntity, InvoiceVMLite)
Correct Use of Map function
It looks like you are calling Map incorrectly. Try these instead:
var vmLite = Mapper.Map<Invoice, InvoiceVMLite>(invoice);
var vmFull = Mapper.Map<Invoice, InvoiceVMFull>(invoice);
var vmLite = Mapper.Map(invoice); // would work if it were not ambiguous what the destination was based on the input.
Entity to two view models
You would usually create two mappings, one for each view model, from the one entity. I'd suggest the cleanest approach is to have two separate views (separate actions in a controller), one for each view model. This may involve a quick redirect after you've decided, based on context, which one to use.
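For instance (a sketch using the same static Mapper API as above; the action names and db context are assumed):

// Each page/action picks the view model that fits it.
public ActionResult Summary(int id)
{
    var invoice = db.Invoices.Find(id);
    return View(Mapper.Map<Invoice, InvoiceVMLite>(invoice));
}

public ActionResult Details(int id)
{
    var invoice = db.Invoices.Find(id);
    return View(Mapper.Map<Invoice, InvoiceVMFull>(invoice));
}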
View models to entity
AutoMapper is not meant for mapping from view models to entities, for many reasons, including the challenge you'd face here. Instead you would pass specific parameters, as sketched below. The author of AutoMapper, Jimmy Bogard, wrote a good article on why this is the case.
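In practice that usually means loading the entity and copying over only the fields the view model actually carries, for example (a sketch, names assumed):

public ActionResult Save(InvoiceVMLite vm)
{
    var invoice = db.Invoices.Find(vm.Id);
    // Copy only what the Lite VM knows about; untouched columns keep their values.
    invoice.Reference = vm.Reference;
    db.SaveChanges();
    return RedirectToAction("Details", new { id = vm.Id });
}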
I couldn't manage to do that with AutoMapper, so I created my own convert methods (Entity <=> VM) with a lot of reflection, and with specific cases handled in each of the VM classes.
Now I can easily get a full or lite VM from an Entity, and also specify the depth of relations I want to traverse. So it's A LOT faster and more adaptable than AutoMapper.
And I can save a VM to an entity (only saving modified fields if I want) that I created or that I got from the database. So it's A LOT faster and more adaptable than AutoMapper.
In conclusion: don't use AutoMapper. It seems easy, but it creates so many performance issues that it isn't worth it.
We are building a web app using AngularJS, C#, ASP.NET Web API and Fluent NHibernate.
We have decided to use DTOs to transfer data to the presentation layer (Angular views).
I had a few doubts regarding the general structuring and naming of DTOs.
Here's an example to illustrate my scenario.
Let's say I have a domain entity called Customer which looks like:
public class Customer
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
public virtual Address Address { get; set; }
public virtual ICollection<Account> Accounts { get; set; }
}
Now, in my views/presentation layer, I need to retrieve different flavors of Customer, like:
1) Just Id and Name
2) Id , Name and Address
3) Id , Name , Address and Accounts
I have created a set of DTOs to accomplish this:
public class CustomerEntry
{
public int Id { get; set; }
public string Name { get; set; }
}
public class CustomerWithAddress : CustomerEntry
{
public AddressDetails Address { get; set; }
}
public class CustomerWithAddressAndAccounts : CustomerWithAddress
{
public ICollection<AccountDetails> Accounts { get; set; }
}
AddressDetails and AccountDetails are DTOs which have all the properties of their corresponding Domain entities.
This works fine for querying and data retrieval; the question is what to use for inserts and updates. During creation of a new customer record, name and address are mandatory and accounts are optional... so in other words I need an object with all the customer properties. Hence the confusion:
1) What do I use for inserts and updates?
The CustomerWithAddressAndAccounts DTO has everything in it, but its name seems a bit awkward to be used for inserts/updates.
2) Do I create another DTO? If I do, wouldn't that be duplication, as the new DTO will be exactly like CustomerWithAddressAndAccounts?
3) Last but not least, does the DTO inheritance structure described above seem like a good fit for the requirement? Are there any other ways to model this?
I have gone through other posts on this topic but couldn't make much headway.
One thing that I did pick up was to avoid using the suffix "DTO" in the class names.
I think it feels a bit superfluous.
Would love to hear your thoughts
Thanks
The recommendation is that you should just have one DTO class for each entity, suffixed with DTO, e.g. CustomerEntryDTO for the Customer entity (but you can certainly use inheritance hierarchies as per your choice and requirements).
Moreover, add an abstract DTOBase kind of base class or an interface, and do not use such deep inheritance hierarchies for each Address, Account and other properties to be included in child DTOs. Rather, include these properties in the same CustomerEntryDTO class (if possible), as below:
[Serializable]
public class CustomerEntryDTO : DTOBase, IAddressDetails, IAccountDetails
{
public int Id { get; set; }
public string Name { get; set; }
public AddressDetails Address { get; set; } //Can remain null for some Customers
public ICollection<AccountDetails> Accounts { get; set; } //Can remain null for some Customers
}
Moreover, your DTOs should be serializable to be passed across process boundaries.
For more on the DTO pattern, refer below articles:
Data Transfer Object
MSDN
Edit:
In case you don't want to send certain properties over the wire (I know you would need to do that conditionally, so this needs more exploration), you can exclude them from the serialization mechanism by using attributes such as NonSerialized (but it works only on fields and not properties; see this workaround article for using it with properties: NonSerialized on property).
You can also create your own custom attribute, such as ExcludeFromSerializationAttribute, and apply it to properties you don't want to send over the wire every time, based on certain rules/conditions, as sketched below. Also see: Conditional xml serialization
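A sketch of such a marker attribute and a reflection-based filter (all names here are hypothetical):

using System;
using System.Linq;

// Hypothetical marker attribute; a custom serializer must check for it explicitly.
[AttributeUsage(AttributeTargets.Property)]
public sealed class ExcludeFromSerializationAttribute : Attribute { }

// Inside a custom serialization step: keep only the unmarked properties.
var props = typeof(CustomerEntryDTO)
    .GetProperties()
    .Where(p => !Attribute.IsDefined(p, typeof(ExcludeFromSerializationAttribute)));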
Edit 2:
Use interfaces for separating the different properties in the one CustomerEntryDTO class. See the Interface Segregation Principle on Google or MSDN. I will try to put up a sample explanation later.
What do I use for inserts and updates?
Service operations are usually defined in very close relation to business operations. Business language doesn't speak in terms of "inserts" and "updates", neither do services.
A customer management service is likely to have some Register operation that takes the customer name and maybe some other optional parameters.
Do I create another DTO?
Yes, you should create another DTO.
Sometimes the service operation contract may be enough and there is no need to define a separate DTO for a particular operation:
Response Register(string userName, Maybe<string> address);
But most of the time it is better to define a separate DTO class, even for only a single service operation:
// Maybe<T> stands for an optional value, as in the original pseudocode.
class RegisterCommand
{
    public string UserName { get; set; }
    public Maybe<string> Address { get; set; }
}
Response Register(RegisterCommand command);
The RegisterCommand DTO may look very similar to the CustomerWithAddress DTO because it has the same fields, but in fact these 2 DTOs have very different meanings and do not substitute for each other.
For example, CustomerWithAddress contains AddressDetails, while a simple String address representation may be enough to register a customer.
Using a separate DTO for each service operation takes more time to write, but is easier to maintain.
As for your item 1, for inserts and updates it's better to use the Command pattern. According to CQRS, you don't need DTOs. Consider this schema:
(CQRS schema diagram, via blogs.msdn.com)
Assume the following simple POCOs, Country and State:
public partial class Country
{
public Country()
{
States = new List<State>();
}
public virtual int CountryId { get; set; }
public virtual string Name { get; set; }
public virtual string CountryCode { get; set; }
public virtual ICollection<State> States { get; set; }
}
public partial class State
{
public virtual int StateId { get; set; }
public virtual int CountryId { get; set; }
public virtual Country Country { get; set; }
public virtual string Name { get; set; }
public virtual string Abbreviation { get; set; }
}
Now assume I have a simple repository that looks something like this:
public partial class CountryRepository : IDisposable
{
protected internal IDatabase _db;
public CountryRepository()
{
_db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]);
}
public IEnumerable<Country> GetAll()
{
return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null);
}
public Country Get(object id)
{
return _db.SingleById<Country>(id);
}
public void Add(Country c)
{
_db.Insert(c);
}
/* ...And So On... */
}
Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this:
public partial class CountryListVM
{
[Key]
public int CountryId { get; set; }
public string Name { get; set; }
public string CountryCode { get; set; }
public int StateCount { get; set; }
}
When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc) directly in my UI layer, I can easily do something like this:
IList<CountryListVM> list = db.Countries
.OrderBy(c => c.Name)
.Select(c => new CountryListVM() {
CountryId = c.CountryId,
Name = c.Name,
CountryCode = c.CountryCode,
StateCount = c.States.Count
})
.ToList();
But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to:
Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed.
-or-
Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain pocos. The downside to this approach is that I'm leaking UI specific stuff (MVC data validation annotations) into my previously clean POCOs.
-or-
Are there other options?
How are you handling these types of things?
It really depends on the project's architecture for what we do. Usually, though, we have services above the repositories that handle this logic for you. The service decides which repositories to use to load what data. The flow is UI -> Controller -> Service -> Repositories -> DB. The UI and/or Controllers have no knowledge of the repositories or their implementation.
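In that layout the service owns the projection, for example (a sketch reusing the repository and view model from the question):

using System.Collections.Generic;
using System.Linq;

public class CountryService
{
    private readonly CountryRepository _countries;

    public CountryService(CountryRepository countries)
    {
        _countries = countries;
    }

    // Controllers call this and never see the repository or the entities.
    public IList<CountryListVM> GetCountryList()
    {
        return _countries.GetAll()
            .Select(c => new CountryListVM
            {
                CountryId = c.CountryId,
                Name = c.Name,
                CountryCode = c.CountryCode,
                StateCount = c.States.Count
            })
            .ToList();
    }
}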
Also, StateCount = c.States.Count would no doubt populate the States list anyway... wouldn't it? I'm pretty sure it will in NHibernate (with lazy loading causing an extra select to be sent to the DB).
One option is to separate your queries from your existing infrastructure entirely. This would be an implementation of a CQRS design. In this case, you can issue a query directly to the database using a "thin read layer", bypassing your domain objects. Your existing objects and ORM are actually getting in your way, and CQRS allows you to have a "command side" that is separate, and possibly a totally different set of tech, from your "query side", where each is designed to do its own job without being compromised by the requirements of the other.
Yes, I'm quite literally suggesting leaving your existing architecture alone, and perhaps using something like Dapper to do this (beware of untested code sample) directly from your MVC controllers, for example:
int count = connection.Query<int>(
    "select count(*) from state where countryid = @countryid",
    new { countryid = 123 }).Single();
Honestly, your question has given me food for thought for a couple of days. More and more I tend to think that denormalization is the correct solution.
Look, the main point of domain-driven design is to let the problem domain drive your modeling decisions. Consider the country entity in the real world. A country has a list of states. However, when you want to know how many states a certain country has, you don't go over the list of states in an encyclopedia and count them. You are more likely to look at the country's statistics and check the number of states there.
IMHO, the same behavior should be reflected in your domain model. You can keep this information in a property of the country, or introduce a kind of CountryStatistics object. Whichever approach you choose, it must be a part of the country aggregate. Being inside the consistency boundary of the aggregate will ensure that the count stays consistent when a state is added or removed.
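A minimal sketch of that idea, keeping the count consistent inside the aggregate (the method names are assumed):

// All changes to States go through the aggregate root, so the
// denormalized count can never drift from the collection.
public partial class Country
{
    public virtual int StateCount { get; protected set; }

    public virtual void AddState(State state)
    {
        States.Add(state);
        StateCount++;
    }

    public virtual void RemoveState(State state)
    {
        if (States.Remove(state))
            StateCount--;
    }
}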
Some other approaches:
If the states collection is not expected to change a lot, you can allow a bit of denormalization - add a "NumberOfStates" property to the Country object. It will optimise the query, but you'll have to make sure the extra field holds the correct information.
If you are using NHibernate, you can use ExtraLazyLoading - it will issue another select, but won't populate the whole collection when Count is called. More info here: nHibernate Collection Count