NHibernate: Multiple class maps for one entity class - C#

So, I'm writing a reusable library that uses NHibernate mapping-by-code for its ORM operations. Multiple services will make use of this library, so I want the library to behave as dynamically as possible.
For every service there are service-specific tables in the database, prefixed with the service name. Unity injects this prefix, and that all works nice and dandy when using only one service.
But now I'm at the point where I have to write a service that reads from and combines several services. So this LibDummy entity will have to be mapped multiple times with different table prefixes.
public class LibDummy
{
    public virtual int Id { get; set; }
    public virtual string Guid { get; set; }
}

public class LibDummyMapping : ClassMapping<LibDummy>
{
    public LibDummyMapping(ServiceName service)
    {
        Table($"{service.Name}_LibDummy");
        Id(o => o.Id, m => m.Column("Id"));
        Property(o => o.Guid, m => m.Column("Guid"));
    }
}
I tried doing it like this:
public class FirstLibDummyMapping : LibDummyMapping
{
    public FirstLibDummyMapping(ServiceName service) : base(service)
    {
    }
}

public class SecondLibDummyMapping : LibDummyMapping
{
    public SecondLibDummyMapping(ServiceName service) : base(service)
    {
    }
}
But this will throw a "Duplicate class/entity mapping" error.
And with two different class maps for the same entity it will throw a "Collection already mapped" error.
Ideally I would have one dynamic class map that could just be used natively in the library, but I guess that's not an option?
Any ideas, or is this something that is just not going to work?

Any NHibernate gurus with a definitive answer?
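One direction that may work, since a single NHibernate Configuration refuses to map the same entity class twice: build one Configuration (and session factory) per service prefix. A minimal sketch, assuming the ServiceName type from the question and standard mapping-by-code calls; this sidesteps the duplicate-mapping error rather than achieving a single dynamic class map:
public static ISessionFactory BuildSessionFactoryFor(ServiceName service)
{
    // Each factory gets its own mapping set, so LibDummy appears only once per factory.
    var mapper = new ModelMapper();
    mapper.AddMapping(new LibDummyMapping(service)); // prefix baked in via the constructor

    var cfg = new Configuration();
    cfg.Configure(); // hibernate.cfg.xml; swap in code-based configuration if preferred
    cfg.AddDeserializedMapping(mapper.CompileMappingForAllExplicitlyAddedEntities(), null);
    return cfg.BuildSessionFactory();
}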

Related

User details within the DbContext model builder

I'm new to Razor Pages / EF Core / ASP.NET Identity and have been trying to figure this out, but it's beating me.
Basically, I use ASP.NET Identity for user authentication & authorisation. I've extended AspNetUsers with an additional OrganisationId, which is an FK to the Organisation entity, and added the ID as a claim in the identity claim store. This works fine.
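For context, the AspNetUsers extension described here would look roughly like this; the exact class shape isn't shown in the question, so treat this as an assumed sketch:
public class ApplicationUser : IdentityUser
{
    // FK added to the identity user; its value is also copied into a claim.
    public int OrganisationId { get; set; }
    public Organisation Organisation { get; set; }
}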
Now I need to set an EF Core global query filter based on the authenticated user's OrganisationId, so they can only view data assigned to their organisation.
However, I can't access the authenticated user's details within the ModelBuilder.
public class SDMOxContext : IdentityDbContext<
    ApplicationUser, ApplicationRole, string,
    ApplicationUserClaim, ApplicationUserRole, ApplicationUserLogin,
    ApplicationRoleClaim, ApplicationUserToken>
{
    public SDMOxContext(DbContextOptions<SDMOxContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        // Set global filter so users can only see projects within their organisation.
        builder.Entity<Project>().HasQueryFilter(project => project.OrganisationId == 1);
    }
}
Instead of 1 in the global filter, I need to use the user's OrganisationId, which is stored as a user claim. Usually I get it with this:
User.FindFirstValue("OrganisationId")
However, User doesn't exist in the current context.
So I would need to apply the query filter at a later stage, i.e. after user authentication? Any pointers on where to start with a mid-tier/logic-tier approach?
Granted this is an opinion on architecture, but I break it down like this:
Data-Tier - This tier's responsibility is to access resources (normally) outside the executing application. This includes databases, file IO, web APIs, etc.
Business/Logic-Tier - This tier's responsibility (which could be broken down further) is to authenticate, authorize, validate, and build objects that represent the business's needs. To build these objects, it may consume one or more data access objects (for example, it may use an IO DA to retrieve an image from the local file system or Azure storage and a database DA to retrieve metadata about that image).
Presentation/Exposure-Tier - This tier's responsibility is to wrap and transform the object into what the consumer needs (WinForms, WPF, HTML, JSON, XML, binary serialization, etc.).
By leaving logic out of the data tier (even in multi-tenant systems) you gain the ability to access data across all systems (and trust me, there is a lot of money to be made here).
This is probably way more than I can explain in such a short space, and it is very much my opinion. I'm going to be leaving out quite a bit, but here goes.
Data-Tier
namespace ProjectsData
{
    public interface IProjectDA
    {
        IProjectDO GetProject(Guid projectId, Guid organizationId);
    }

    // internal rather than private: private isn't valid at namespace scope.
    internal class ProjectDA : DbContext, IProjectDA
    {
        public ProjectDA(...)

        public DbSet<ProjectDO> Projects { get; set; }

        protected override void OnModelCreating(ModelBuilder builder) { ... }

        public IProjectDO GetProject(Guid projectId, Guid organizationId)
        {
            // The tenant filter is part of the query, not baked into the context.
            var result = Projects
                .FirstOrDefault(p => p.Id == projectId && p.OrganizationId == organizationId);
            return result;
        }
    }

    public interface IProjectDO { ... }

    internal class ProjectDO : IProjectDO
    {
        public Guid Id { get; set; }
        public Guid OrganizationId { get; set; }
        public Guid CategoryId { get; set; }
    }
}
Logic
namespace ProjectBusiness
{
    public interface IProjectBO { .. }

    public interface IOrganization
    {
        Guid OrganizationId { get; }
    }

    internal class ProjectBA : IProjectBO
    {
        private readonly IProjectDA _projectDA;
        private readonly IIdentity _identity;
        private readonly IOrganization _organization;

        public ProjectBA(IProjectDA projectDA,
            IIdentity identity,
            IOrganization organization)
        {
            _projectDA = projectDA;
            _identity = identity;
            _organization = organization;
        }

        public IProjectBO GetProject(Guid id)
        {
            // The business layer supplies the tenant scope on every call.
            var projectDO = _projectDA
                .GetProject(id, _organization.OrganizationId);
            var result = map.To<ProjectBO>(projectDO); // 'map' stands in for your mapping tool
            return result;
        }
    }

    internal class ProjectBO : IProjectBO
    {
        public Guid Id { get; set; }
        public Guid OrganizationId { get; set; }
        public Guid CategoryId { get; set; }
    }
}
So under these circumstances the data layer is aware of the type of request, but isn't multi-tenant aware: it isn't limiting all requests based on anything. This architecture is advantageous in a number of ways.
First, in the above example: your product takes off and your supervisor wants to know which categories are the most popular.
namespace StatisticsBusiness
{
    public interface IStatisticsBO
    {
        IEnumerable<ICategoryStatisticBO> CategoryStatistics { get; }
    }

    public interface ICategoryStatisticBO
    {
        Guid CategoryId { get; }
        int ProjectCount { get; }
    }

    internal class StatisticsBA : IStatisticsBO
    {
        private readonly IProjectDA _projectDA;
        private readonly IIdentity _identity;

        public StatisticsBA(IProjectDA projectDA,
            IIdentity identity)
        {
            _projectDA = projectDA;
            _identity = identity;
        }

        public IEnumerable<ICategoryStatisticBO> GetOrderedCategoryPopularity()
        {
            // No tenant filter here: the statistics deliberately span all organizations.
            var dos = _projectDA
                .GetProjectCategoryCounts();
            var result = map.To<IEnumerable<ICategoryStatisticBO>>(dos);
            return result;
        }
    }

    internal class CategoryStatisticBO : ICategoryStatisticBO
    {
        public Guid CategoryId { get; }
        public int ProjectCount { get; }
    }
}
Note: Some people prefer to pass an expression as a predicate. Both approaches have their advantages and disadvantages. If you decide to go the predicate route, you'll have to decide whether all your data access types use predicates or not. Just realize that using predicates against IO or a web API might be more effort than it's worth; a sketch of the idea follows.
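For illustration, a hypothetical predicate-style variant of the data access interface above; this is a sketch of the trade-off being described, not part of the original design:
// Easy to satisfy with EF/LINQ, awkward to implement against file IO or a remote web API.
public interface IProjectDA
{
    IEnumerable<IProjectDO> GetProjects(Expression<Func<IProjectDO, bool>> predicate);
}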
Secondly, suppose some requirement means you can no longer use Entity Framework, and you replace it with Dapper or some other new, better technology/framework. All you have to do is create new I<whatever>DA implementations, because the consuming logic is unaware of anything other than those interfaces (programming against an interface: the I and D in the SOLID programming principles).
I don't use this pattern all the time, because for some smaller websites it's too much work for the payoff.
I suggest decomposing the solution into two parts:
Add an organisation id to your DbContext, much like a tenant id in a multi-tenant environment. See this link for an example.
The next challenge is to pass the organisation id as a parameter to the DbContext constructor. For this you can create a factory for the DbContext. Since you store the OrganisationId in claims, the factory can read that same claim from HttpContext and pass the organisation id as a parameter while instantiating the DbContext.
It's not perfect, but it can give you a starting point; a rough sketch follows.
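Building on that idea, here is a minimal sketch under EF Core conventions. ITenantProvider, ClaimsTenantProvider, TenantScopedContext and the Project shape are illustrative names I've assumed, not code from the question:
using System.Security.Claims;
using Microsoft.AspNetCore.Http;
using Microsoft.EntityFrameworkCore;

public class Project
{
    public int Id { get; set; }
    public int OrganisationId { get; set; }
}

public interface ITenantProvider
{
    int OrganisationId { get; }
}

// Reads the OrganisationId claim of the current request's user.
public class ClaimsTenantProvider : ITenantProvider
{
    private readonly IHttpContextAccessor _accessor;

    public ClaimsTenantProvider(IHttpContextAccessor accessor) => _accessor = accessor;

    public int OrganisationId =>
        int.TryParse(_accessor.HttpContext?.User.FindFirstValue("OrganisationId"), out var id) ? id : 0;
}

public class TenantScopedContext : DbContext
{
    private readonly int _organisationId;

    public TenantScopedContext(DbContextOptions<TenantScopedContext> options, ITenantProvider tenant)
        : base(options)
    {
        _organisationId = tenant.OrganisationId; // captured once per scoped context
    }

    public DbSet<Project> Projects => Set<Project>();

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);
        // EF Core query filters may reference fields of the context instance,
        // so each request's context filters by that request's organisation.
        builder.Entity<Project>().HasQueryFilter(p => p.OrganisationId == _organisationId);
    }
}

// Registration: services.AddHttpContextAccessor();
//               services.AddScoped<ITenantProvider, ClaimsTenantProvider>();
//               services.AddDbContext<TenantScopedContext>(o => /* provider config */);
// EF Core resolves the extra constructor parameter from the DI container.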

Polymorphic Mapping of Collections with AutoMapper

TL;DR: I'm having trouble with Polymorphic mapping. I've made a github repo with a test suite that illustrates my issue. Please find it here: LINK TO REPO
I'm working on implementing a save/load feature. To accomplish this, I need to make sure the domain model I'm serializing is represented in a serialization-friendly way, so I've created a set of DTOs containing the bare minimum information required to do a meaningful save or load.
Something like this for the domain:
public interface IDomainType
{
    int Prop0 { get; set; }
}

public class DomainType1 : IDomainType
{
    public int Prop1 { get; set; }
    public int Prop0 { get; set; }
}

public class DomainType2 : IDomainType
{
    public int Prop2 { get; set; }
    public int Prop0 { get; set; }
}

public class DomainCollection
{
    public IEnumerable<IDomainType> Entries { get; set; }
}
...and for the DTOs
public interface IDto
{
    int P0 { get; set; }
}

public class Dto1 : IDto
{
    public int P1 { get; set; }
    public int P0 { get; set; }
}

public class Dto2 : IDto
{
    public int P2 { get; set; }
    public int P0 { get; set; }
}

public class DtoCollection
{
    private readonly IList<IDto> entries = new List<IDto>();

    public IEnumerable<IDto> Entries => this.entries;

    public void Add(IDto entry) { this.entries.Add(entry); }
}
The idea is that DomainCollection represents the current state of the application. The goal is that mapping DomainCollection to DtoCollection results in an instance of DtoCollection that contains the appropriate implementations of IDto as they map to the domain, and vice versa.
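For concreteness, the intended round trip looks something like this (assuming a configured IMapper instance named mapper):
// Save path: domain -> DTOs; load path: DTOs -> domain.
DtoCollection dtos = mapper.Map<DtoCollection>(domainCollection);
DomainCollection restored = mapper.Map<DomainCollection>(dtos);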
A little extra trick here is that the different concrete domain types come from different plugin assemblies, so I need an elegant way to have AutoMapper (or a similar mapping framework, if you know of a better one) do the heavy lifting for me.
Using StructureMap, I'm already able to locate and load all the profiles from the plugins and configure the application's IMapper with them.
I've tried to create the profiles like this...
public class CollectionMappingProfile : Profile
{
    public CollectionMappingProfile()
    {
        this.CreateMap<IDomainType, IDto>().ForMember(m => m.P0, a => a.MapFrom(x => x.Prop0)).ReverseMap();

        this.CreateMap<DtoCollection, DomainCollection>().
            ForMember(fc => fc.Entries, opt => opt.Ignore()).
            AfterMap((tc, fc, ctx) => fc.Entries = tc.Entries.Select(e => ctx.Mapper.Map<IDomainType>(e)).ToArray());

        this.CreateMap<DomainCollection, DtoCollection>().
            AfterMap((fc, tc, ctx) =>
            {
                foreach (var t in fc.Entries.Select(e => ctx.Mapper.Map<IDto>(e))) tc.Add(t);
            });
    }
}

public class DomainProfile1 : Profile
{
    public DomainProfile1()
    {
        this.CreateMap<DomainType1, Dto1>().ForMember(m => m.P1, a => a.MapFrom(x => x.Prop1))
            .IncludeBase<IDomainType, IDto>().ReverseMap();
    }
}

public class DomainProfile2 : Profile
{
    public DomainProfile2()
    {
        this.CreateMap<DomainType2, IDto>().ConstructUsing(f => new Dto2()).As<Dto2>();
        this.CreateMap<DomainType2, Dto2>().ForMember(m => m.P2, a => a.MapFrom(x => x.Prop2))
            .IncludeBase<IDomainType, IDto>().ReverseMap();
    }
}
I then wrote a test suite to make sure the mapping will behave as expected when it's time to integrate this feature with the application. I found that whenever DTOs were mapped to the domain (think Load), AutoMapper would create proxies of IDomainType instead of resolving them to the concrete domain types.
I suspect the problem is with my mapping profiles, but I've run out of talent. Thanks in advance for your input.
Here's another link to the GitHub repo.
I stumbled across this question while looking into a polymorphic mapping issue myself. The answer is good, but as another option, if you'd like to approach it from the base-mapping perspective and have many derived classes, you can try the following:
CreateMap<VehicleEntity, VehicleDto>()
    .IncludeAllDerived();

CreateMap<CarEntity, CarDto>();
CreateMap<TrainEntity, TrainDto>();
CreateMap<BusEntity, BusDto>();
See the AutoMapper docs for more info.
I spent a little time reorganizing the repo, going as far as to mimic a core project and two plugins. This made sure I wouldn't end up with a false-positive result when the tests finally started passing.
What I found was that the solution had two(ish) parts to it.
1) I was abusing AutoMapper's .ReverseMap() configuration method. I assumed it would perform the reciprocal of whatever custom mapping I was doing. Not so! It only does simple reversals. Fair enough. Some SO questions/answers about it:
1, 2
2) I wasn't fully defining the mapping inheritance properly. I'll break it down.
2.1) My DomainProfiles followed this pattern:
public class DomainProfile1 : Profile
{
    public DomainProfile1()
    {
        this.CreateMap<DomainType1, IDto>().ConstructUsing(f => new Dto1()).As<Dto1>();
        this.CreateMap<DomainType1, Dto1>().ForMember(m => m.P1, a => a.MapFrom(x => x.Prop1))
            .IncludeBase<IDomainType, IDto>().ReverseMap();
        this.CreateMap<Dto1, IDomainType>().ConstructUsing(dto => new DomainType1()).As<DomainType1>();
    }
}
So now, knowing that .ReverseMap() is not the thing to use here, it becomes obvious that the map between Dto1 and DomainType1 was poorly defined. Also, the mapping between DomainType1 and IDto didn't link back to the base IDomainType-to-IDto mapping. Also an issue. The final result:
public class DomainProfile1 : Profile
{
    public DomainProfile1()
    {
        this.CreateMap<DomainType1, IDto>().IncludeBase<IDomainType, IDto>().ConstructUsing(f => new Dto1()).As<Dto1>();
        this.CreateMap<DomainType1, Dto1>().IncludeBase<DomainType1, IDto>().ForMember(m => m.P1, a => a.MapFrom(x => x.Prop1));
        this.CreateMap<Dto1, IDomainType>().IncludeBase<IDto, IDomainType>().ConstructUsing(dto => new DomainType1()).As<DomainType1>();
        this.CreateMap<Dto1, DomainType1>().IncludeBase<Dto1, IDomainType>().ForMember(m => m.Prop1, a => a.MapFrom(x => x.P1));
    }
}
Now each direction of the mapping is explicitly defined, and the inheritance is respected.
2.2) The base mappings for IDomainType and IDto lived inside the profile that also defined the mappings for the "collection" types. This meant that once I had split up the project to mimic a plugin architecture, the tests that only exercised the simplest inheritances failed in new ways: the base mapping couldn't be found. All I had to do was move these mappings into their own profile (sketched below) and use that profile in the tests as well. That's just good SRP.
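A minimal sketch of that standalone base profile, reusing the member names from the example types (the profile name is mine):
// Hypothetical standalone profile owning only the base interface mappings;
// plugin profiles then reference these via IncludeBase.
public class BaseTypesProfile : Profile
{
    public BaseTypesProfile()
    {
        this.CreateMap<IDomainType, IDto>()
            .ForMember(m => m.P0, a => a.MapFrom(x => x.Prop0));
        this.CreateMap<IDto, IDomainType>()
            .ForMember(m => m.Prop0, a => a.MapFrom(x => x.P0));
    }
}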
I'll apply what I've learned to my actual project before I mark my own answer as the accepted answer. Hopefully I've got it and hopefully this will be helpful to others.
Useful links:
this
this one was a good refactoring exercise. I admittedly used it as a starting place to build up my example. So, thanks @Olivier.

Persistence-Ignorant Domain with Entity Framework and Spatial Data

I'm developing an application that implements DDD and the Repository pattern.
I want to keep my Domain Layer persistence-ignorant, so I don't want to install Entity Framework libraries there. The only problem I'm facing is that my application uses spatial data, but I'm not supposed to use DbGeography as a property type on my entities, since it belongs to the System.Data.Entity.Spatial namespace in the EntityFramework assembly.
Is there a way to create a class in the Domain Layer to hold latitude, longitude and elevation values, like this:
public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public double Elevation { get; set; }
}
and then convert that class to DbGeography in my Repository Layer?
In other words, the domain entities would have only the Location class as a property:
public class Place : IEntityBase, ILocalizable
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Location Location { get; set; }
    public User Owner { get; set; }
}
and I'd convert it to DbGeography to persist spatial data and do some calculations, only in the repository layer. My plan was to try something like this for the conversion:
public class LocationMap : ComplexTypeConfiguration<Location>
{
    public LocationMap()
    {
        Property(l => DbGeography.FromText(string.Format("POINT({0} {1})", l.Longitude, l.Latitude))).HasColumnName("Location");
        Ignore(l => l.Elevation);
        Ignore(l => l.Latitude);
        Ignore(l => l.Longitude);
    }
}
But it doesn't work and never will. How I can solve this problem? What are the best practices in this situation?
Thank you
Well, I don't know the "right" way, but I have a tricky idea. I hope it helps you or suggests some more variants:
You have the domain entity Place; it's fully persistence-ignorant and lives in the Domain assembly. Good.
Let's create one more Place class in the Repository assembly:
internal sealed class EFPlace : Place
{
    public DbGeography EFLocation
    {
        get
        {
            return DbGeography.FromText(string.Format("POINT({0} {1})", Location.Longitude, Location.Latitude));
        }
        set
        {
            // vice versa conversion, I don't know how to do it :)
        }
    }
}
We created a special class for Entity Framework, and map it:
public class PlaceMap : ComplexTypeConfiguration<EFPlace>
{
    public PlaceMap()
    {
        Property(p => p.EFLocation).HasColumnName("Location");
        Ignore(p => p.Location);
    }
}
But we have to convert from Place to EFPlace on save in the repository. You can create a special constructor or a conversion method.
Another variant: declare Place as a partial class and add the needed property in another file (note that both halves of a partial class must compile into the same assembly, so this only works if the Domain and Repository code end up in one assembly).
Well, it looks ugly :( but I don't know of "pure", real-life examples of a persistence-ignorant domain. We always run into limitations of Entity Framework. NHibernate has a few more features.
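If the inheritance trick feels too ugly, an alternative is to keep Place untouched and convert explicitly at the repository boundary. LocationConverter below is a hypothetical helper, not part of the answer above:
using System.Data.Entity.Spatial;
using System.Globalization;

// Repository-layer helper: converts between the persistence-ignorant Location
// value object and EF's DbGeography at save/load time.
internal static class LocationConverter
{
    public static DbGeography ToDbGeography(Location location)
    {
        // Well-known text uses "POINT(longitude latitude)"; SRID 4326 is WGS84.
        var wkt = string.Format(CultureInfo.InvariantCulture,
            "POINT({0} {1})", location.Longitude, location.Latitude);
        return DbGeography.PointFromText(wkt, 4326);
    }

    public static Location ToLocation(DbGeography geography)
    {
        return new Location
        {
            Longitude = geography.Longitude ?? 0,
            Latitude = geography.Latitude ?? 0
            // Elevation can't round-trip through a 2D POINT; it needs a Z coordinate.
        };
    }
}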

I need to bind two databases with Ninject and inject them into the same repository at the same time

Here are some parts of my code.
The NinjectWebCommon class is where I need to bind the data contexts.
These are just pieces of the code, not complete classes.
private static void RegisterServices(IKernel kernel)
{
    kernel.BindSharpRepository();
    RepositoryDependencyResolver.SetDependencyResolver(new NinjectDependencyResolver(kernel));
    kernel.Bind<DbContext>().To<EntitiesDbOne>().InRequestScope();
    //kernel.Bind<DbContext>().To<EntitiesDbTwo>().InRequestScope();
}
The CategoryRepository class, where I need both databases at the same time:
public class CategoryRepository : ConfigurationBasedRepository<CategoryData, int>, ICategoryRepository
{
    private readonly EntitiesDbOne _ctxOne;
    private readonly EntitiesDbTwo _ctxTwo;

    public CategoryRepository(EntitiesDbOne ctxOne, EntitiesDbTwo ctxTwo)
    {
        _ctxOne = ctxOne;
        _ctxTwo = ctxTwo;
    }

    public CategoryData GetById(int id)
    {
        // Dummy data, just to show the two different DbContexts in use.
        // The lambda form of Include needs 'using System.Data.Entity;'.
        var category = _ctxOne.Categories.Include(x => x.MetaTags).FirstOrDefault(x => x.Id == id);
        var categoryName = _ctxTwo.category.FirstOrDefault(x => x.Id == category.Id);
        return category;
    }
}
In my project I use SharpRepository (ConfigurationBasedRepository) and UnitOfWork. I think I can skip UnitOfWork, because Entity Framework already does the unit-of-work job itself.
Both databases use Entity Framework; one is Code First, the other (DbTwo) is Model First.
public class EntitiesDbOne : DbContext
{
    public DbSet<CategoryData> Categories { get; set; }
}

public partial class EntitiesDbTwo : DbContext
{
    public EntitiesDbTwo()
        : base("name=EntitiesDbTwo")
    {
    }

    public DbSet<attributenames> attributenames { get; set; }
    public DbSet<category> category { get; set; }
}
Please give me some links to examples I can use; I do not think I can manage this with a simple explanation. Before I wrote this question I searched for an answer here on this site and elsewhere, and read about 20 suggestions; two of them come close to my situation, but probably not close enough:
Multiple DbContexts in N-Tier Application
EF and repository pattern - ending up with multiple DbContexts in one controller - any issues (performance, data integrity)?
That question's title matches my situation exactly, but its code is far different from mine.
Multiple dbcontexts with Repository, UnitOfWork and Ninject
That one covers multiple databases, but uses one per request, and I need two databases in the same request.
http://blog.staticvoid.co.nz/2012/1/9/multiple_repository_data_contexts_with_my_repository_pattern
Just as a basis for discussion: what if you use:
kernel.Bind<EntitiesDbOne>().ToSelf().InRequestScope();
kernel.Bind<EntitiesDbTwo>().ToSelf().InRequestScope();
and use it like this:
public class CategoryRepository : ConfigurationBasedRepository<CategoryData, int>, ICategoryRepository
{
    public CategoryRepository(EntitiesDbOne ctxOne, EntitiesDbTwo ctxTwo)
    {
    }
}
What's the issue then?

Which design is better for a client/server project with lots of data sharing? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Let's say we have a project that will handle lots of data (employees, schedules, calendars... and lots more). The client is a Windows app; the server side is WCF; the database is MS SQL Server. I am confused about which approach to use: I've read a few articles and blogs, and they all seem nice, but I don't want to start with one approach and then regret not choosing the other. The project will have around 30-35 different object types and a lot of data retrieval to populate different reports, etc.
Approach 1:
// classes that hold data
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....
}

public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....
}
.....
Then Helper classes to deal with data saving and retrieving:
public static class Employees
{
    public static int Save(Employee emp)
    {
        // save the employee
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}

public static class Assignments
{
    public static int Save(Assignment ass)
    {
        // save the Assignment
    }
    .....
}
FYI, the object classes like Employee and Assignment will live in a separate assembly shared between server and client.
Anyway, with this approach I will have cleaner objects; the helper classes do most of the work.
Approach 2:
// classes that hold data and methods for saving and retrieving
public class Employee
{
    // constructors
    public Employee()
    {
        // Construct a new Employee
    }

    public Employee(int Id)
    {
        // Construct a new Employee and fill the data from the db
    }

    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....

    public int Save()
    {
        // save the Employee
    }
    .....
}

public class Assignment
{
    // constructors
    public Assignment()
    {
        // Construct a new assignment
    }

    public Assignment(int Id)
    {
        // Construct a new assignment and fill the data from the db
    }

    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....

    public int Save()
    {
        // save the Assignment
    }
    .....
}
.....
With this approach, each object does its own job. Data can still be transferred from WCF to the client easily, since WCF only shares the properties.
Approach 3:
Using Entity Framework. Besides the fact that I have never worked with it (which is nice, since I'd have to learn something new), I will need to create POCOs to transfer data between the client and WCF.
Now, which is better? More options?
Having persistence logic in the object itself is always a bad idea.
I would use the first approach. It looks like the Repository pattern. This way you can easily debug the persisting of data, because it is clearly separated from the rest of the object's logic.
I would suggest using Entity Framework + the Repository pattern. This way your entities are simple objects without any logic in them; all retrieve-save logic stays in the repository. I have had success using a generic repository typed by entity, similar to what is described here (the generic repository part of the article). This way you write the repository code only once and can reuse it for all your entities. E.g.:
interface IRepository<T>
{
    T GetById(long id);
    bool Save(T entity);
}

public class Repository<T> : IRepository<T> { ... }

var repository = new Repository<MyEntity>();
var myEntity = repository.GetById(1);

var repository2 = new Repository<MySecondEntity>();
var mySecondEntity = repository2.GetById(1);
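The implementation is elided above; a minimal EF-backed sketch of it, assuming EF 6, an injected DbContext, and long primary keys:
// Sketch only: an EF-backed generic repository matching the interface above.
public class Repository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public Repository(DbContext context)
    {
        _context = context;
    }

    public T GetById(long id)
    {
        // Find looks the entity up by its primary key.
        return _context.Set<T>().Find(id);
    }

    public bool Save(T entity)
    {
        _context.Set<T>().Add(entity);
        return _context.SaveChanges() > 0;
    }
}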
Whenever an entity needs some very specific operation, you can add that operation to a concrete, typed implementation of IRepository:
interface IMySuperRepository : IRepository<MySuperEntity>
{
    MySuperEntity GetBySuperProperty(SuperProperty superProperty);
}

public class MySuperEntityRepository : Repository<MySuperEntity>, IMySuperRepository { ... }
To create repositories, it is nice to use a factory based, for example, on a configuration file. This way you can switch the implementation of the repositories, e.g. for unit testing, when you do not want to use a repository that really accesses the DB:
public class RepositoryFactory
{
    public IRepository<T> GetRepository<T>()
    {
        if (config == production)
            return new Repository<T>(); // implemented with DB access through EF
        if (config == test)
            return new TestRepository<T>(); // implemented with test values, without DB access
    }
}
You can add validation rules for saving and further elaborate on this. EF also lets you add simple methods or properties to the generated entities, because all of them are partial classes (see the sketch below).
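A quick illustration of that partial-class extension point; the FullName member is hypothetical:
// The designer generates one partial half of Employee; this half is yours
// and survives regeneration of the EDMX code.
public partial class Employee
{
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}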
Furthermore, using POCOs or STEs (see below) it is possible to have the EDMX DB model in one project and all your entities in another project, and thus distribute that DLL to the client (which will contain ONLY your entities). As I understood it, that's what you also want to achieve.
Also seriously consider using self-tracking entities (and not just POCOs). In my opinion they are great for use with WCF: when you get an entity from the DB, pass it to the client, and the client changes it and gives it back, you need to know whether the entity was changed and what was changed. STEs handle all this work for you and are designed specifically for WCF. You get the entity from the client, call ApplyChanges and Save, and that's it.
What about implementing Save as an extension method? That way your classes stay as clean as in the first option, but the method can still be called on the object as in the second option.
public static class EmployeeExtensions // a static class can't share the entity's name
{
    public static int Save(this Employee emp)
    {
        // save the employee
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
}
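With that in place, calling code reads naturally (assuming the Employee class from Approach 1):
var emp = new Employee { FirstName = "Jane", LastName = "Doe" };
var id = emp.Save(); // resolves to EmployeeExtensions.Save(emp)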
You're overthinking this. Trying to apply technologies and patterns "just because" or because "that's what they say" only makes the solution complicated. The key is designing the application so that it can easily adapt to change. That's probably an ambiguous answer, but it's what it all comes down to: how much effort is required to maintain and/or modify the code base.
Currently it sounds like the patterns and practices are the end result, instead of a means to an end.
Entity Framework is a great tool, but it is not necessarily the best choice in all cases. It will depend on how much you expect to read/write from the database versus how much you expect to read/write to your WCF services. Perhaps someone better versed in the wonderful world of EF will be able to help you. To speak from experience, I have used LINQ to SQL in an application that features WCF service endpoints and had no issues (and in fact came to LOVE LINQ to SQL as an ORM).
Having said that, if you decide that EF is not the right choice for you, it looks like you're on the right track with Approach 1. However, I would recommend implementing a data access layer: that is, implement a Persist method in your business classes that calls methods in a separate DAO (Data Access Object, a class used to persist data from a business object) to actually save it to your database.
A sample implementation might look like this:
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public void Persist()
    {
        EmployeeDAO.Persist(this);
    }
}

public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }

    public void Persist()
    {
        AssignmentDAO.Persist(this);
    }
}

public static class EmployeeDAO
{
    public static int Persist(Employee emp)
    {
        // insert if new, else update
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}

public static class AssignmentDAO
{
    public static int Persist(Assignment ass)
    {
        // insert if new, else update
    }
    .....
}
The benefit of a pattern like this is that you keep your business classes clean and your data-access logic separate, while still giving the objects the easy syntax of new Employee(...).Persist(); in your code.
If you really want to go nuts, you could even consider implementing interfaces on your persistable classes and having your DAO(s) accept those IPersistable instances as arguments; a sketch follows.
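A rough sketch of that interface idea; IPersistable and GenericDAO are illustrative names, not prescribed:
// Hypothetical contract: anything persistable exposes enough for a shared DAO.
public interface IPersistable
{
    int Id { get; set; }
}

public static class GenericDAO
{
    public static int Persist(IPersistable entity)
    {
        // insert if Id is default, else update
    }
}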
