I started using AutoFixture for building my test suite, and I'm fairly convinced it's what I should be using to keep my tests clear. However, there are a couple of things which I simply don't know how to implement.
First, let me try to explain the concept.
I have a class which represents a "Company" entity.
public sealed class Company
{
public string Name { get; set; }
public DateTime FoundingDate { get; set; }
public List<Person> Persons { get; set; }
}
And I have a "Person" entity, which represents a person working at a specific company.
public sealed class Person
{
public DateTime DateOfBirth { get; set; }
public DateTime DateOfMarriage { get; set; }
}
Now, I have an interface that abstracts away the current date/time.
public interface IDateTimeProvider
{
DateTime Now { get; }
}
And I have a function that queries the companies employing persons born in the current year.
IEnumerable<Company> Get()
{
return this.DB.Companies.Include(x => x.Persons)
.Where(x => x.Persons.Any(p => p.DateOfBirth.Year == this.dateTimeProvider.Now.Year))
.Select(x => new {
// ... Implementation ...
});
}
Now, in my unit test, I would like to verify that the entities which are returned are correct.
So I need AutoFixture to generate a random date (I need random dates to ensure that my code works with different date/times).
But the problem is that the rest of my test needs access to this date because, in order to build my assertion, I need to calculate which persons are going to be returned, which depends on the current date/time.
One option would be to freeze the date/times created by AutoFixture, but then suddenly all date/times, even the founding date of a company, would be this date, which is something I don't want, since my query might depend on that as well.
How can I tackle this problem?
Might be important to know that I'm using the "AutoData" attribute to avoid having Fixture configuration inside my tests.
I think that instead of injecting the IDateTimeProvider into your repository, you should pass the filtering arguments as parameters to your Get() method. This way you can just pick a date out of the entities generated by AutoFixture, and you don't need to freeze values at all.
/* repository */
public IEnumerable<Company> Get(DateTime foundingDate)
{
return this.context.Companies.Include(x => x.Persons)
.Where(x => x.FoundingDate == foundingDate)
.Select(x => x);
}
/* test method */
[Theory, PersistenceData]
public void Foo(
List<Company> companies, [Frozen]MyContext context,
CompaniesRepository repository)
{
context.Companies.AddRange(companies);
context.SaveChanges();
var actual = repository.Get(companies[2].FoundingDate);
Assert.Equal(new[] { companies[2] }, actual);
}
In this example I am using PersistenceData, an auto data attribute created with EntityFrameworkCore.AutoFixture.
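If you would rather keep the IDateTimeProvider injection, another option is to freeze only the provider itself, not every DateTime AutoFixture creates. A hedged sketch, assuming Moq for the mock and a hypothetical FrozenClockCustomization wired into your AutoData attribute:

```csharp
// Illustrative customization: only the provider's Now is pinned; other
// DateTime properties (e.g. FoundingDate) remain randomly generated.
public class FrozenClockCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var now = fixture.Create<DateTime>();        // one random "current" date
        var provider = new Mock<IDateTimeProvider>();
        provider.Setup(p => p.Now).Returns(now);
        fixture.Inject(provider.Object);             // the same instance is reused everywhere
    }
}
```

The test method can then take a [Frozen] IDateTimeProvider parameter and read Now from it to compute the expected persons for the assertion.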
I have my Unit of Measure entity, which users fill in and save; they can then save a list of Unit Sizes, which has its own table with a foreign key to the Unit of Measure. When I fetch all the data back, the Unit Size value comes back blank.
I have read half a dozen ways to do this and I am not comprehending them. The one that makes the most sense to me is using a Queryable extension, so I am trying to go that route, but my code still hasn't quite gotten there.
Here is where I am at - these are my entities:
namespace Mudman.Data.Entities
{
[Table("UnitOfMeasure")]
public class UnitOfMeasure : IEntityBase, IAuditBase
{
[Key]
[Column("UnitOfMeasureId")]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public string Id { get; set; }
[Required]
[ForeignKey("TenantId")]
public string TenantId { get; set; }
[JsonIgnore]
public virtual Tenant Tenant { get; set; }
public string Name { get; set; }
public virtual IEnumerable<UnitOfMeasureSize> UnitSize { get; set; }
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public DateTime CreateDate { get; set; } = DateTime.UtcNow;
[StringLength(255)]
public string CreateUserId { get; set; }
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public DateTime UpdateDate { get; set; }
[StringLength(255)]
public string UpdateUserId { get; set; }
}
}
Unit Of Measure size entity:
namespace Mudman.Data.Entities
{
[Table("UnitOfMeasureSize")]
public class UnitOfMeasureSize : IEntityBase, IAuditBase
{
[Key]
[Column("UnitOfMeasureSize")]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public string Id { get; set; }
[Required]
[ForeignKey("TenantId")]
public string TenantId { get; set; }
[JsonIgnore]
public virtual Tenant Tenant { get; set; }
[Required]
[ForeignKey("UnitOfMeasureId")]
public string UnitOfMeasureId { get; set; }
public virtual UnitOfMeasure UnitOfMeasure { get; set; }
[Required]
public int UnitSize { get; set; }
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public DateTime CreateDate { get; set; } = DateTime.UtcNow;
[StringLength(255)]
public string CreateUserId { get; set; }
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public DateTime UpdateDate { get; set; }
[StringLength(255)]
public string UpdateUserId { get; set; }
}
}
Unit Of Measure Repository including Unit Size:
namespace Mudman.Repository
{
public class UnitOfMeasureRepository : EntityBaseRepository<UnitOfMeasure>,
IUnitOfMeasureRepository
{
MudmanDbContext context;
public UnitOfMeasureRepository(MudmanDbContext context) : base(context)
{
this.context = context;
}
public IEnumerable<UnitOfMeasure> GetAllUnitsOfMeasure(string TenantId)
{
var result = context.UnitOfMeasure
.Where( uom => uom.TenantId == TenantId)
.Include(uom => uom.UnitSize);
return result;
}
}
}
My GetAllAsync method in my service:
public Task<IEnumerable<UnitOfMeasureViewModel>> GetAllAsync()
{
var result = _unitOfMeasureRepo.GetAllUnitsOfMeasure(TenantId);
result.OrderBy(r => r.Name);
return _mapper.Map<List<UnitOfMeasure>, List<UnitOfMeasureViewModel>>(result.ToList());
}
AutoMapper Code:
CreateMap<UnitOfMeasure, UnitOfMeasureViewModel>().ReverseMap()
.ForMember(dest => dest.UnitSize, uos => uos.Ignore())
.ForMember(uom => uom.UnitSize, src => src.MapFrom(uom => uom.UnitSize));
There are a few issues with your attempts so far.
Firstly, your GetAllAsync looks like it wants to be an async method, but it makes entirely synchronous calls and hasn't been marked as async. I would avoid diving into asynchronous methods until you have the fundamentals of retrieving your data down.
What we cannot see from your example is the mapping between your unit of measure entity and the view model. The entity has a one-to-many relationship between unit of measure and unit sizes, so what gets updated depends on how the view model is laid out and configured for mapping. This is most likely the root of your problem: your view model mapping from the entity is probably relying on a convention that isn't pairing up with the data you expect.
Performance-wise, this approach will run into problems as your data model grows in terms of entities and rows. The fundamental problem with using a repository like this is that a method like:
IEnumerable<UnitOfMeasure> GetAllUnitsOfMeasure(string TenantId)
will load all data into memory, and you explicitly need to include related entities whether the consumer wants them or not, which adds to the work the queries do and the memory required. If TenantId is for a multi-tenant database, such as a SaaS application with multiple tenants sharing a single data source, that is a good reason to adopt a Repository pattern, but I would not pass tenant IDs around as parameters. Instead, have the repository accept a dependency that can validate and resolve the current TenantId from the session. This way the repository can always ensure the current tenant rules are validated and applied for every query, without worrying about where the caller got a TenantId from. (Accepting a TenantId from a POST request would be bad, as that value could easily be tampered with.)
To address performance, and to touch on what you have read about IQueryable extensions: rather than returning IEnumerable<TEntity> from a repository, you can return IQueryable<TEntity>. The advantage is that the repository can still apply base filtering rules like the tenant ID, while allowing the consumer to handle things like sorting and projection.
For example, the repository looks more like:
public class UnitOfMeasureRepository : IUnitOfMeasureRepository
{
private readonly MudmanDbContext _context;
private readonly ICurrentUserLocator _currentUserLocator;
public UnitOfMeasureRepository(MudmanDbContext context, ICurrentUserLocator currentUserLocator )
{
_context = context ?? throw new ArgumentNullException("context");
_currentUserLocator = currentUserLocator ?? throw new ArgumentNullException("currentUserLocator");
}
public IQueryable<UnitOfMeasure> GetUnitsOfMeasure()
{
var tenantId = _currentUserLocator.CurrentUserTenantId; // Checks session for current user and retrieves a tenant ID or throws an exception. (no session, etc.)
var query = _context.UnitOfMeasure
.Where(uom => uom.TenantId == tenantId);
return query;
}
}
The changes to note here: we do away with the base generic repository class. It was confusing, as you were passing the context to a base class and then setting a local context instance as well. Generic repositories with EF are a bad code smell, as they lead to very complex code, very poorly performing code, or both. There is a CurrentUserLocator which the container can inject: a simple class that verifies a user is currently authenticated and returns their tenant ID. From there we return an IQueryable<UnitOfMeasure> with a base filter for the tenant ID, which lets consumers make up their own minds about how they want to consume it. Note that we do not need to use Include for related entities; again, the consumers can decide what they need.
Calling the new repository method and projecting your view models looks fairly similar to what you had. Since you are using AutoMapper, rather than .Map() we can use .ProjectTo() on the IQueryable, and AutoMapper will essentially build a Select() expression that pulls back only the data the view model needs. To use the ProjectTo extension method we need to give it the MapperConfiguration that was used to create your mapper, which tells it how to build the view model. (So rather than a dependency of type Mapper, you will need one for the MapperConfiguration you set up for that mapper.)
public IEnumerable<UnitOfMeasureViewModel> GetAll()
{
var models = _unitOfMeasureRepo.GetUnitsOfMeasure()
.OrderBy(r => r.Name)
.ProjectTo<UnitOfMeasureViewModel>(_mapperConfiguration)
.ToList();
return models;
}
What this does is call our repository method to get the IQueryable, append the ordering we desire, and call ProjectTo to let AutoMapper populate the view models before executing the query with ToList(). When using Select or ProjectTo we don't need to worry about using Include to eager-load related data that might be mapped; these methods take care of loading related entities if and when needed.
Even in cases where we want to use a method like this to update entities with related entities, using IQueryable works there too:
public void IncrementUnitSize(string unitOfMeasureId)
{
var unitOfMeasure = _unitOfMeasureRepo.GetUnitsOfMeasure()
.Include(r => r.UnitSize)
.Where(r => r.Id == unitOfMeasureId)
.Single();
foreach (var unitSize in unitOfMeasure.UnitSize)
unitSize.UnitSize += 1;
_context.SaveChanges();
}
This is just an example of fetching related entities as needed, versus having a method that returns IEnumerable and needs to eager-load everything just in case some caller might need it.
These methods can very easily be translated into asynchronous methods without touching the repository:
public async Task<IEnumerable<UnitOfMeasureViewModel>> GetAllAsync()
{
var models = await _unitOfMeasureRepo.GetUnitsOfMeasure()
.OrderBy(r => r.Name)
.ProjectTo<UnitOfMeasureViewModel>(_mapperConfiguration)
.ToListAsync();
return models;
}
... and that is all! Just remember that async doesn't make the call faster; if anything, it makes it a touch slower. What it does is make the server more responsive, by allowing it to move the request handling to a background thread and free the request thread to pick up a new server request. That is great for methods that will take a bit of time, or that get called very frequently, to avoid tying down all of the server request threads and causing timeouts for users waiting for a response. For methods that are very fast and aren't expected to get hammered by a lot of users, async doesn't add much value, and you need to ensure every async call is awaited or you can end up with wacky behaviour and exceptions.
I need some advice on a question I have been battling with on DDD.
I have a domain model which is an aggregate root:
public class Objective {
public int ObjectiveId { get; private set; }
public string ObjectiveDescription { get; private set; }
public DateTime StartDate { get; private set; }
public DateTime TargetDate { get; private set; }
public DateTime? CompletedDate { get; private set; }
public int EmploymentId { get; private set; }
public List<Skill> RelatedSkills { get; private set; }
public List<Goal> RelatedGoals { get; private set; }
// few more properties.
}
I have 2 views: one is a List View and the other is a Details View.
The List View has an IEnumerable which has just 3 fields:
class ObjectiveListVM{
public int ObjectiveId { get; private set; }
public string ObjectiveDescription { get; private set; }
public DateTime StartDate { get; private set; }
}
The Details View has an ObjectiveDetailViewModel which has 90 percent of the fields from the Objective domain model, plus a few more.
I have a repository which gets me either a list or one objective:
interface IObjectiveRepo
{
Objective GetById(int id);
IEnumerable<Objective> GetList();
}
This is how I have implemented DDD and the Repository pattern.
My question is this: my GetList query is really expensive. It only needs data from 3 columns, but since my repositories should always return domain objects, I end up returning a list of the entire Objective domain object, which has child lists and lots of fields.
The solution I thought of is to have another ObjectiveSummary domain model which just has a few fields and is returned by the GetList repo method. But that breaks some other principles of DDD, mainly that ObjectiveSummary is an anemic domain model. In fact it's not really a model; it's more of a DTO in my head.
This is such a common scenario that I feel I am missing something very basic in my implementation or interpretation of DDD / repository patterns.
Can some of the experts point out the mistake I have made in the implementation, or highlight a way to address this problem without expensive queries?
Note: I can think of a few ways of getting around this problem, but I am more interested in finding the correct way, one which does not break the principles of the architecture/patterns that I am using.
You should not query your domain model. An aggregate is always loaded in its entirety so it does not lend itself well to querying.
As soon as you think about lazy-loading you are probably not using an optimal approach. Lazy-loading is evil. Don't do it :)
What you are after is a query layer of sorts; this is directly related to CQRS. The query side only returns data: it has no behaviour, and you return the most basic structure that you can. In the C# world that I am also in, I use a DataRow or IEnumerable<DataRow>. If it is really complex I may opt for a DTO:
public interface IObjectiveQuery
{
DataRow ForList(int id);
bool Contains(string someUniqueKey);
IEnumerable<DataRow> Search(string descriptionLike, DateTime startDate);
string Description(int id);
}
Give it a go. I think you'll find it simplifies things immensely. Your domain should only be concerned about the command/write side of things.
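Purely as an illustration, a query-side implementation of that interface might use plain ADO.NET and raw SQL; the table and column names (Objective, ObjectiveId, and so on) are assumptions, not something from the original post:

```csharp
// Hedged sketch of one IObjectiveQuery method; the remaining members
// (ForList, Contains, Description) would follow the same raw-SQL pattern.
public class ObjectiveQuery : IObjectiveQuery
{
    private readonly string _connectionString;

    public ObjectiveQuery(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<DataRow> Search(string descriptionLike, DateTime startDate)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT ObjectiveId, ObjectiveDescription, StartDate " +
            "FROM Objective " +
            "WHERE ObjectiveDescription LIKE @description AND StartDate >= @startDate",
            connection))
        {
            adapter.SelectCommand.Parameters.AddWithValue("@description", descriptionLike);
            adapter.SelectCommand.Parameters.AddWithValue("@startDate", startDate);

            var table = new DataTable();
            adapter.Fill(table);           // runs only this narrow SELECT, no aggregate loading
            return table.Rows.Cast<DataRow>();
        }
    }
}
```

The point of the sketch is that the read side bypasses the aggregate entirely: it selects exactly the three columns the list view needs and nothing more.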
One way to deal with it is to return IQueryable<Objective> from your repository instead of IEnumerable<Objective>.
public interface IObjectiveRepository
{
Objective GetById(int id);
IQueryable<Objective> GetList();
}
It will allow you to keep the repository simple and add more logic to the queries in the application/domain layer without losing performance. The following query will be executed on the DB server, including the projection to ObjectiveListVM:
public IReadOnlyList<ObjectiveListVM> GetSummaryList()
{
return _repository
.GetList()
.Select(o => new ObjectiveListVM
{
ObjectiveId = o.ObjectiveId,
ObjectiveDescription = o.ObjectiveDescription,
StartDate = o.StartDate
})
.ToList();
}
You can use Automapper's Queryable extensions to make the projection to VMs easier.
return _repository
.GetList()
.ProjectTo<ObjectiveListVM>()
.ToList();
I am new to WebApi, so please excuse me if the question is amateurish: I use AngularJS's "$resource" to communicate with the WebApi controller "BondController". This works great.
My problem: The entity "Bond" has a reference to a list of entity "Price":
public class Bond
{
public int ID { get; set; }
...
public virtual List<Price> Prices { get; set; }
}
What I am looking for is a way to exclude the nested list "Prices" such as
[JsonIgnore]
BUT, in some other situations, I still need a way to retrieve Bonds including this nested list, e.g. via a second controller "Bond2".
What can I do?
Will I need some ViewModel on top of the entity Bond?
Can I somehow exclude the List of Prices in the controller itself:
public IQueryable<Bond> GetBonds()
{
return db.Bonds [ + *some Linq-Magic that excludes the list of Prices*]
}
Background: the list of Prices might become rather long and the Get-Requests would easily become > 1MB. In most cases, the prices don't even need to be displayed to the user, so I'd like to exclude them from the response. But in one case, they do... Thank you for your input!
EDIT:
I see that, for some sort of Linq Magic, I would need a new type "PricelessBond"
EDIT2
Found a nice example of using DTO here and will use that.
The solution is to create a non-persistent BondDTO class that acts as a "shell" and has only the properties you want visible in a given use case. Then, in the BondDTOController, transform the selection of Bond => BondDTO via a LINQ lambda Select expression.
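For reference, the shape of that DTO approach might be sketched as follows; BondDTO and the controller body here are illustrative, not the exact code from the linked example:

```csharp
// Non-persistent "shell" exposing only the fields this use case needs.
public class BondDTO
{
    public int ID { get; set; }
    // ...other scalar Bond properties, but deliberately no Prices list
}

public class BondDTOController : ApiController
{
    // Because the projection happens inside the query, the Price rows
    // are never loaded from the database at all.
    public IQueryable<BondDTO> GetBonds()
    {
        return db.Bonds.Select(b => new BondDTO { ID = b.ID });
    }
}
```

The original BondController can keep returning the full Bond (with Prices) for the one case that needs it.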
I am no expert in WebApi, but it seems that you have more than one problem.
Why don't you create a class hierarchy?
public class PricelessBond // :)
{
public int ID {get; set;}
}
public class Bond : PricelessBond
{
public List<Price> Prices {get; set;}
}
Then you can expose data via two different methods:
public class BondsController : ApiController
{
[Route("api/bonds/get-bond-without-price/{id}")]
public PricelessBond GetBondWithoutPrice(int id)
{
return DataAccess.GetBondWithoutPrice(id);
}
[Route("api/bonds/get-bond/{id}")]
public Bond GetBond(int id)
{
return DataAccess.GetBond(id);
}
}
And in your DataAccess class:
public class DataAccess
{
public PricelessBond GetBondWithoutPrice(int id)
{
return db.Bonds
.Select(b => new PricelessBond
{
ID = b.ID
})
.Single(b => b.ID == id);
}
public Bond GetBond(int id)
{
return db.Bonds
.Select(b => new Bond
{
ID = b.ID,
Prices = b.Prices.Select(p => new Price { /* copy price fields */ }).ToList()
})
.Single(b => b.ID == id);
}
}
Of course, having two data access methods implies some code overhead, but since you say the response could grow beyond 1MB, it also means you should spare your database server and not fetch data that you don't need.
So, in your data access layer load only required data for each operation.
I have tested this in a scratch project and it worked.
Can anyone provide an easier, more automatic way of doing this?
I have the following save method for a FilterComboTemplate model. The data has been converted from JSON to a C# model entity by the WebApi.
So that I don't create duplicate entries in the DeviceFilterProperty table, I have to go through each filter in turn, retrieve the assigned DeviceFilterProperty from the context, and override the object in the filter. See the code below.
I have all the object IDs if they already exist, so it seems like this should be handled automatically, but perhaps that's just wishful thinking.
public void Save(FilterComboTemplate comboTemplate)
{
// Set the Device Properties so we don't create dupes
foreach (var filter in comboTemplate.Filters)
{
filter.DeviceFilterProperty = context.DeviceFilterProperties.Find(filter.DeviceFilterProperty.DeviceFilterPropertyId);
}
context.FilterComboTemplates.Add(comboTemplate);
context.SaveChanges();
}
From here I'm going to have to check whether any of the filters exist too, and then manually update them if they differ from what's in the database, so as not to keep creating a whole new set after an edit of a FilterComboTemplate.
I'm finding myself writing a lot of this type of code. I've included the other model classes below for a bit of context.
public class FilterComboTemplate
{
public FilterComboTemplate()
{
Filters = new Collection<Filter>();
}
[Key]
public int FilterComboTemplateId { get; set; }
[Required]
public string Name { get; set; }
[Required]
public ICollection<Filter> Filters { get; set; }
}
public class Filter
{
[Key]
public int FilterId { get; set; }
[Required]
public DeviceFilterProperty DeviceFilterProperty { get; set; }
[Required]
public bool Exclude { get; set; }
[Required]
public string Data1 { get; set; }
}
public class DeviceFilterProperty
{
[Key]
public int DeviceFilterPropertyId { get; set; }
[Required]
public string Name { get; set; }
}
Judging from some similar questions on SO, this does not seem to be something EF does automatically...
It's probably not a massive cut in code, but you could do something like this: an extension method on DbContext (or on your particular data context):
public static bool Exists<TEntity>(this MyDataContext context, int id)
{
// your code here, something similar to
return context.Set<TEntity>().Any(x => x.Id == id);
// or with reflection (note: this cannot be translated to SQL, so it runs in memory):
return context.Set<TEntity>().AsEnumerable().Any(x =>
{
var props = typeof(TEntity).GetProperties();
var keyProp = props.First(p => p.GetCustomAttributes(typeof(KeyAttribute), true).Length > 0);
var objectId = keyProp.GetValue(x, null);
return (int)objectId == id;
});
}
This will check whether an object with that key exists in the DbContext. Naturally, a similar method can be created to actually return the entity as well.
There are two "returns" in the code; just use the one you prefer. The former will force you to have all entities inherit from an "Entity" object with an Id property (which is not necessarily a bad thing, but I can see the pain in this... you will also need to constrain the TEntity parameter: where TEntity : Entity or similar).
Take the "reflection" solution with a pinch of salt: first of all, the performance may be a problem; second, I don't have VS running right now, so I don't even know if it compiles, let alone works!
Let me know if that works :)
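The constrained variant mentioned above might be sketched like this; the Entity base class and ContextExtensions names are assumptions for illustration, not part of EF:

```csharp
// Hypothetical shared base class giving every entity an Id property.
public abstract class Entity
{
    public int Id { get; set; }
}

public static class ContextExtensions
{
    // With the constraint in place, x.Id compiles and the expression
    // translates to a simple EXISTS query on the key column.
    public static bool Exists<TEntity>(this MyDataContext context, int id)
        where TEntity : Entity
    {
        return context.Set<TEntity>().Any(x => x.Id == id);
    }
}
```

In the Save method from the question, this could guard the lookup, e.g. checking context.Exists&lt;DeviceFilterProperty&gt;(...) before deciding whether to attach or insert.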
It seems that you have some common operations for parameters after they are bound from the request.
You may consider writing custom parameter bindings to reuse the code. HongMei's blog is a good starting point: http://blogs.msdn.com/b/hongmeig1/archive/2012/09/28/how-to-customize-parameter-binding.aspx
You may use the code in Scenario 2 to get the formatter binding to deserialize the model from the body, and perform the operations you want after that.
See the final step in the blog to specify the parameter type you want to customize.
Let's say we have a project that will handle lots of data (employees, schedules, calendars... and lots more). The client is a Windows app, the server side is WCF, and the database is MS SQL Server. I am confused about which approach to use. I have read a few articles and blogs; they all seem nice, but I am still confused. I don't want to start with one approach and then regret not choosing the other. The project will have around 30-35 different object types and a lot of data retrieval to populate different reports, etc.
Approach 1:
// classes that hold data
public class Employee
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
.....
}
public class Assignment
{
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
.....
}
.....
Then Helper classes to deal with data saving and retrieving:
public static class Employees
{
public static int Save(Employee emp)
{
// save the employee
}
public static Employee Get(int empId)
{
// return the ugly employee
}
.....
}
public static class Assignments
{
public static int Save(Assignment ass)
{
// save the Assignment
}
.....
}
FYI, the object classes like Employee and Assignment will be in a separate assembly, to be shared between server and client.
Anyway, with this approach I will have cleaner objects; the helper classes will do most of the work.
Approach 2:
// classes that hold data and methods for saving and retrieving
public class Employee
{
// constructors
public Employee()
{
// Construct a new Employee
}
public Employee(int Id)
{
// Construct a new Employee and fills the data from db
}
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
.....
public int Save()
{
// save the Employee
}
.....
}
public class Assignment
{
// constructors
public Assignment()
{
// Construct a new assignment
}
public Assignment(int Id)
{
// Construct a new assignment and fills the data from db
}
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
.....
public int Save()
{
// save the Assignment
}
.....
}
.....
With this approach, each object will do its own job. Data can still be transferred from WCF to the client easily, since WCF will only share properties.
Approach 3:
Using Entity Framework. Besides the fact that I have never worked with it (which is nice, since I get to learn something new), I will need to create POCOs to transfer data between the client and WCF.
Now, which is better? Are there more options?
Having persistence logic in the object itself is always a bad idea.
I would use the first approach. It looks like the Repository pattern. This way, you can easily debug the persisting of data, because it will be clearly separated from the rest of the object's logic.
I would suggest using Entity Framework + the Repository pattern. This way your entities are simple objects without any logic in them; all the retrieve/save logic stays in the repository. I have had some success using a generic repository, typed by entity; something similar is described here (the generic repository part of the article). This way you write the repository code only once and can reuse it for all the entities you have. E.g.:
interface IRepository<T>
{
T GetById(long id);
bool Save(T entity);
}
public class Repository<T> : IRepository<T> {...}
var repository = new Repository<MyEntity>();
var myEntity = repository.GetById(1);
var repository2 = new Repository<MySecondEntity>();
var mySecondEntity = repository2.GetById(1);
Whenever an entity needs some very specific operation, you can add this operation to a concrete typed implementation of IRepository:
interface IMySuperRepository : IRepository<MySuperEntity>
{
MySuperEntity GetBySuperProperty(SuperProperty superProperty);
}
public class MySuperEntityRepository : Repository<MySuperEntity>, IMySuperRepository
{...}
To create repositories it is nice to use a factory, which is based for example on configuration file. This way you can switch implementation of repositories, e.g. for unit testing, when you do not want to use repository that really accesses DB:
public class RepositoryFactory
{
public IRepository<T> GetRepository<T>()
{
if (config == production)
return new Repository<T>(); // implemented with DB access through EF
if (config == test)
return new TestRepository<T>(); // implemented with test values, no DB access
throw new InvalidOperationException("Unknown configuration");
}
}
You can add validation rules for saving and further elaborate on this. EF also lets you add some simple methods or properties to generated entities, because all of them are partial classes.
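Because the generated entities are partial classes, such additions live in a separate file alongside the generated code. As a hedged sketch (the IsValidForSave helper is purely illustrative), using the Employee entity from the question:

```csharp
// In your own file, same namespace as the generated Employee class.
// The compiler merges this with the generated partial class.
public partial class Employee
{
    // Simple pre-save validation rule added without touching generated code.
    public bool IsValidForSave()
    {
        return !string.IsNullOrEmpty(FirstName) && !string.IsNullOrEmpty(LastName);
    }
}
```

Regenerating the model from the EDMX does not overwrite this file, which is the whole point of the partial-class split.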
Furthermore, using POCOs or STEs (see below), it is possible to have the EDMX DB model in one project and all your entities in another project, and thus distribute that DLL to the client (which will contain ONLY your entities). As I understand it, that is what you also want to achieve.
Also seriously consider using self-tracking entities (not just POCOs). In my opinion they are great for use with WCF. When you get an entity from the DB and pass it to the client, the client changes it and gives it back; you need to know whether the entity was changed and what was changed. STEs handle all this work for you and are designed specifically for WCF: you get the entity from the client, call ApplyChanges and Save, and that's it.
What about implementing Save as an extension method? That way your classes are clean as in the first option, but the methods can be called on the object as in the second option.
public static class EmployeeExtensions
{
public static int Save(this Employee emp)
{
// save the employee
}
public static Employee Get(int empId)
{
// return the ugly employee
}
}
You're overthinking this. Trying to apply technologies and patterns "just because", or because "that's what they say", only makes the solution complicated. The key is designing the application so that it can easily adapt to change. That's probably an ambiguous answer, but it's what it all comes down to: how much effort is required to maintain and/or modify the code base.
Currently it sounds like the patterns and practices are the end result, instead of a means to an end.
Entity Framework is a great tool, but it is not necessarily the best choice in all cases. It will depend on how much you expect to read/write from the database versus how much you expect to read/write to your WCF services. Perhaps someone better versed in the wonderful world of EF will be able to help you. To speak from experience, I have used LINQ-to-SQL in an application featuring WCF service endpoints and had no issues (and in fact came to love LINQ-to-SQL as an ORM).
Having said that, if you decide that EF is not the right choice for you, it looks like you're on the right track with Approach 1. However, I would recommend implementing a Data Access Layer: that is, implement a Persist method in your business classes that calls methods in a separate DAO (Data Access Object, a class used to persist data from a business object) to actually save it to your database.
A sample implementation might look like this:
public class Employee
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public void Persist()
{
EmployeeDAO.Persist(this);
}
}
public class Assignment
{
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
public void Persist()
{
AssignmentDAO.Persist(this);
}
}
public static class EmployeeDAO
{
public static int Persist(Employee emp)
{
// insert if new, else update
}
public static Employee Get(int empId)
{
// return the ugly employee
}
.....
}
public static class AssignmentDAO
{
public static int Persist(Assignment ass)
{
// insert if new, else update
}
.....
}
The benefit of a pattern like this is that you get to keep your business classes clean and your data-access logic separate, while still giving the objects the easy syntax of being able to write new Employee(...).Persist(); in your code.
If you really want to go nuts, you could even consider implementing interfaces on your Persistable classes, and have your DAO(s) accept those IPersistable instances as arguments.
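A hedged sketch of that IPersistable idea, built on the Employee/DAO classes above (the interface and the PersistenceHelper are illustrative names, not an established API):

```csharp
// Marker contract for anything that knows how to save itself via its DAO.
public interface IPersistable
{
    void Persist();
}

public class Employee : IPersistable
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public void Persist()
    {
        EmployeeDAO.Persist(this);   // delegate to the DAO as in the sample above
    }
}

// A helper that accepts any mix of persistable objects, as suggested.
public static class PersistenceHelper
{
    public static void PersistAll(IEnumerable<IPersistable> items)
    {
        foreach (var item in items)
            item.Persist();
    }
}
```

With this in place, a service method could save an Employee and an Assignment in one call by passing both to PersistAll, since each knows its own DAO.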