I have a general question regarding naming conventions.
Suppose I separate the data and operations into two separate classes: one holds the data elements (the entity), and the other manipulates the entity class. What do we usually call the class that manipulates the entity?
(the entity I am referring to has nothing to do with any kind of entity framework)
Manager? Controller? Operator? Manipulator?
Thanks in advance.
It depends on what kind of operations you're performing on those data contracts/entities. Here are some of my conventions, using a Fruit entity as the example (I'm not implying these are all static methods; this is just pseudocode):
Repository: provides CRUD operations on a piece of fruit
FruitRepository.Save(Fruit item);
Manager: operations outside of simple CRUD.
InventoryManager.ShipFruit(Fruit[] items, string address);
Controller: reserved for use in the interface, as in Model-View-Controller. Makes interface or flow decisions about how to display or operate on fruit.
FruitController.ShowDetails(string fruitId);
Processor: used on operations that are "batched" together. Often these are long-running or done offline.
FruitProcessor.RemoveSeeds(Fruit[] lotsOfFruit);
Manipulator: provides specific operations on a single entity or a collection of them.
FruitManipulator.PeelFruit(Fruit item);
Provider: provides more generalized or global operations.
FruitProvider.GetAllTypesOfFruit();
FruitProvider.IsInSeason(string fruitName);
Exporter: converts some fruit into a format intended for file storage or perhaps transfer.
FruitExporter.Save(string spreadsheet);
Analyzer: Provides results about an individual piece of fruit or a quantity.
FruitAnalyzer.Weigh(Fruit[] items);
Service: exposes functionality in a loosely coupled or remotely accessible kind of way.
Assembler: Creates fruit by combining different data sources.
FruitAssembler.Combine(string speciesFile, string quantitiesFile);
Factory: responsible for creating/instantiating fruit.
FruitFactory.CreateApple(); // red delicious, McIntosh, etc
Builder: Provides a way to build up fruit by individual parts/properties.
FruitBuilder.AddSeeds(5); FruitBuilder.AddStem();
These are all somewhat loose. The main goal is to stay consistent within your own codebase and to avoid conflicts with the technologies you're using; i.e., don't have a lot of Controller classes that aren't controllers if you're doing ASP.NET MVC.
I usually go with Manager.
Call it whatever you are comfortable with; just make sure you use that name consistently throughout your project. The closest things we have are a Capability or a Receiver, but even those aren't quite what you're describing.
However.
Do you have a specific reason for separating the data from the methods? Unless you're talking about a class and its factory, I'd be really surprised if this separation is truly warranted.
Let's reason as follows:
If the logic uses only one entity, move it into the entity itself (see rich domain model vs. anemic domain model).
So most of the remaining classes implement logic that deals with more than one entity; they represent a collaboration.
Such classes should not be named solely after their technical responsibility. A technical term such as manager, controller, or manipulator can still be used as a naming convention, but the important part is the first part of the name.
Example:
Entities: Product and Customer
Collaboration between the two: PurchaseService <-- what's important is Purchase, not Service
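For illustration, a minimal sketch of such a collaboration class; every type name and body here is hypothetical, invented only to show the naming:

using System;

public class Product
{
    public string Name { get; set; }
    public int Stock { get; set; }
}

public class Customer
{
    public string Name { get; set; }
}

// Named after the domain concept (Purchase), with the technical suffix second.
public class PurchaseService
{
    public void Purchase(Customer customer, Product product, int quantity)
    {
        // Logic that spans both entities lives here, not in either entity.
        if (product.Stock < quantity)
            throw new InvalidOperationException("Insufficient stock.");

        product.Stock -= quantity;
        Console.WriteLine("{0} bought {1} x {2}", customer.Name, quantity, product.Name);
    }
}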
I separate the data and operations into two separate classes.
Don’t. This flies in the face of object-oriented design.
I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled; or, changing the name of an object (a Category, for example) needs to recursively update all of its children's properties (avoiding or ignoring these rules would result in invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
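Roughly what that hiding looks like; a hypothetical sketch using the recursive rename rule from above:

using System.Collections.Generic;

public class Category
{
    private readonly List<Category> _children = new List<Category>();

    // Setter is private so the rename rule below cannot be bypassed
    // from outside the domain assembly.
    public string Name { get; private set; }

    public void Rename(string newName)
    {
        Name = newName;
        // Business rule: a rename must cascade to all children.
        foreach (var child in _children)
            child.Rename(newName + "/" + child.Name);
    }
}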
In order to do this, I need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held that opinion too, but honestly, if you don't program to abstractions it will become a real pain once the project grows larger. (A real pain.)
IRepository interfaces also help you spread the work among different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
You must be able to test each layer independently; tying them together will leave you with only integration tests and a lot of bugs in the lower layers. :)
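For instance, a sketch of what such an extension could look like; the IRepository<T> members and the File type here are invented stand-ins:

using System.Collections.Generic;

// Hypothetical repository abstraction; a real project would define richer members.
public interface IRepository<T>
{
    void Add(T item);
    IEnumerable<T> GetAll();
}

public class File
{
    public string Path { get; set; }
    public byte[] Content { get; set; }
}

// Extension methods bolt task-specific helpers onto the generic interface,
// giving you the IRepository<File>.Upload() style call mentioned above.
public static class FileRepositoryExtensions
{
    public static void Upload(this IRepository<File> repository, string path, byte[] content)
    {
        repository.Add(new File { Path = path, Content = content });
    }
}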
First, I think this question is really opinion-based.
According to the Big Book, the domain model must be separated from the data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This choice must be decided based on the actual project. What is the size of the application?
The other huge question is: how many concurrent users hit the DB, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or one with educational purposes, then you should keep the domain and data access separated, and you should define the repository interfaces in the domain model. Use some DI component (personally I like Ninject), and do not reference the data access component in the services.
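A minimal sketch of that wiring, assuming Ninject and hypothetical type names:

using Ninject;

// Defined in the domain project.
public interface IBookRepository
{
    // query/save members elided
}

// Defined in the data access project.
public class EfBookRepository : IBookRepository
{
}

public static class CompositionRoot
{
    public static IBookRepository ResolveBookRepository()
    {
        // The only place that knows about the concrete EF-backed class;
        // the services depend solely on IBookRepository.
        var kernel = new StandardKernel();
        kernel.Bind<IBookRepository>().To<EfBookRepository>();
        return kernel.Get<IBookRepository>();
    }
}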
And of course you should also create test projects, using some mocking tool (such as Moq) to test the layers separately.
Yes this is wrong, if you are following Domain Driven Design, you should not compromise your architecture for the sake of doing less work. Your data-access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern as it would allow you more flexibility in the long run.
There is of course no right answer as to what's the right design... I would, however, argue that EF is your data layer abstraction; there is no way you're going to build anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, given the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
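For instance, such IQueryable<> extensions might look like this; the entity and predicates are invented for illustration:

using System.Linq;

public class Product
{
    public bool IsActive { get; set; }
    public decimal Price { get; set; }
}

// Common query fragments written once instead of repeating Where clauses;
// they compose and still translate to SQL because they stay IQueryable<>.
public static class ProductQueries
{
    public static IQueryable<Product> Active(this IQueryable<Product> source)
    {
        return source.Where(p => p.IsActive);
    }

    public static IQueryable<Product> CheaperThan(this IQueryable<Product> source, decimal maxPrice)
    {
        return source.Where(p => p.Price < maxPrice);
    }
}

// Usage: context.Products.Active().CheaperThan(10m).ToList();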
Here is a blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer
I want to create an interface or base class (not sure I want to go this route) for all my business entities. For each business entity I need the following:
Id - primary key of the entity
Type - type of the entity, e.g. User, just a string
Name - name of the entity, e.g. John Doe
Description - short description of the entity, e.g. Senior Programmer
CreatedDate - date the entity was created
ModifiedDate - date the entity was modified
All classes support a single primary key.
Most of my classes have these fields, though in most cases, the primary key would be something like UserId.
One of the reasons I want to create some commonality in my business entities is I want implement a search function that returns a list of IEntity (or Entity class, if leveraging inheritance) objects.
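To make this concrete, the contract I have in mind would look roughly like this:

using System;

// Roughly the contract described above; whether this should instead be a
// base class is exactly my question.
public interface IEntity
{
    int Id { get; }
    string Type { get; }
    string Name { get; }
    string Description { get; }
    DateTime CreatedDate { get; }
    DateTime ModifiedDate { get; }
}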
My questions are ...
Is it more correct to leverage an interface as opposed to a base class?
If I do create this as an interface, should I keep the property names simple (e.g., Id and Name), which would minimize the property implementations I have to write, OR is it better to append "Entity" to each property name so it's easier to work with the business entity (e.g., MyEntity.EntityId versus MyEntity.Id)?
I realize this could be considered subjective, but I really need some guidance here, so any ideas that make this less subjective would be much appreciated.
Thanks in advance!
In my opinion...
If your classes are going to share a common implementation of some of their methods, then a base class makes more sense, because you can't implement anything inside an interface; if you went with an interface, you'd end up with the same common implementation duplicated across multiple classes instead of living in a single base class.
I think appending "Entity" to each property is pointless. You already imply that it's an entity property by either the name of the entity object or its underlying type. I say avoid redundancy and keep it simple.
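For example, a sketch of the base-class route with simple property names (illustrative only):

using System;

// Shared implementation lives once in the base class; an interface alone
// would force each entity to re-implement these properties.
public abstract class Entity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public DateTime CreatedDate { get; set; }
    public DateTime ModifiedDate { get; set; }

    // Each entity reports its own type name; no "Entity" prefix needed.
    public abstract string Type { get; }
}

public class User : Entity
{
    public override string Type { get { return "User"; } }
}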
In my opinion, if you want many objects to have this functionality, you should avoid base-class inheritance at all costs. Once you decide to inherit all of the classes in your project from a certain base class, it's hard to go back. Remember, C# only allows single inheritance.
A better solution might be to implement an interface which lets classes specify the properties they have to anyone who might be interested in those data.
Another reason to avoid a base class is that it makes unit testing harder, if you're interested in that. It also makes it hard to change custom behaviors without affecting many areas of your application.
In short: have the objects you have clearly identified as needing that interface implement it, and have another manager-type class ask those classes for that information, acting as the adapter or gateway between your modular, single-purpose objects and a database (or something like that).
Hope I've made myself clear enough.
Consider whether it would be better to keep the business data as isolated classes in your data access layer and provide a common wrapper in your presentation layer that supplies the common feature set you're thinking about. Maybe your solution isn't complicated enough to warrant a fully tiered architecture (quite a few people would disagree, I'm sure), but I feel that making your application tiered is a good approach. The data access classes then stay separate, avoiding the conundrum altogether at that tier, and the presentation class(es) expose only the functionality you actually need while taking on whatever inheritance regime you choose. My reasoning is that framing the problem this way might make it easier to decide.
I've been reading too much probably and am suffering from some information overload. So I would appreciate some explicit guidance.
From what I've gathered, I can use VS2010's T4 template thingy to generate POCO classes that aren't tied directly to EF. I would place these in their own project, while my DAL would have an ObjectContext-derived class, right?
Once I have these classes, is it acceptable practice to use them in the UI layer? That is, say one of the generated classes is BookInfo, which holds stuff about books for a public library (title, edition, pages, summary, etc.).
My BLL would contain a class BooksBLL for example like so:
public class BooksBLL
{
    ObjectContext _context;

    public void AddBook(BookInfo book) { ... }
    public void DeleteBook(int bookID) { ... }
    public void UpdateBook(int bookID, BookInfo newBook) { ... }

    // Advanced search taking possibly all fields into consideration
    public List<BookInfo> ResolveSearch(Func<BookInfo, bool> filter) { ... }

    // etc...
}
So, my ViewModels in my MVVM UI app will be communicating with the above BLL class and exchanging BookInfo instances. Is that okay?
Furthermore, MVVM posts on the Web suggest implementing IDataErrorInfo for validation purposes. Is it okay if I implement said interface on the generated POCO class? I see from samples that those generated POCO classes contain all virtual properties and such, and I hope adding my own logic would be okay?
If it makes any difference, at present, my app does not use WCF (or any networking stuff).
Also, if you see something terribly wrong with the way I'm trying to build my BLL, please feel free to offer help in that area too.
Update (Additional info as requested):
I'm trying to create a library automation application. It is not network based at present.
I am thinking about having layers as follows:
A project consisting of generated POCO classes (BookInfo, Author, Member, Publisher, Contact etc.)
A project with the ObjectContext-derived class (DAL?)
A Business Logic Layer with classes like the one I mentioned above (BooksBLL, AuthorsBLL etc)
A WPF UI layer using the MVVM pattern. (Hence my sub-question about IDataErrorInfo implementation).
So I'm wondering about stuff like using an instance of BooksBLL in a ViewModel class, calling ResolveSearch() on it to obtain a List<BookInfo> and presenting it... that is, using the POCO classes everywhere.
Or should I have additional classes that mirror the POCO classes exposed from my BLL?
If any more detail is needed, please ask.
What you're doing is basically the Repository pattern, for which Entity Framework and POCO are a great fit.
So, my ViewModels in my MVVM UI app will be communicating with the above BLL class and exchanging BookInfo instances. Is that okay?
That's exactly what POCO objects are for; there's no difference between the classes that are generated and how you would write them by hand. It's your ObjectContext that encapsulates all the logic around persisting any changes back to the database, and that's not directly exposed to your UI.
I'm not personally familiar with IDataErrorInfo, but if your entities will only ever be used in this single app, I don't see any reason not to put it directly in the generated classes. Adding it to the T4 template would be ideal if that's possible; it would save you from coding it by hand for every class if the error messages follow any logical pattern.
Also, if you see something terribly wrong with the way I'm trying to build my BLL, please feel free to offer help in that area too.
This isn't terribly wrong by any means, but if you plan to write unit tests against your BLL (which I would recommend), you will want to change your ObjectContext member to an IObjectContext interface. That way you can substitute any class implementing IObjectContext at runtime (such as your actual ObjectContext), which will allow you to test against an in-memory (i.e., mocked) context and not have to hit the database.
Similarly, think about replacing your List<BookInfo> with an interface of some kind, such as IList<BookInfo>, IBindingList<BookInfo>, or the lowest common denominator, IEnumerable<BookInfo>. That way you're not tied directly to the concrete class List<T>, and if your needs change over time (which tends to happen), it will reduce the refactoring needed to replace your List<BookInfo> with something else, as long as the replacement implements the interface you've chosen.
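A rough sketch combining both suggestions; note that EF does not ship an IObjectContext, so the interface below is one you would define yourself, with only the members your BLL needs:

using System;
using System.Collections.Generic;
using System.Linq;

public class BookInfo
{
    public int BookID { get; set; }
    public string Title { get; set; }
}

// Hand-rolled seam over the ObjectContext; only what the BLL actually uses.
public interface IObjectContext
{
    IQueryable<BookInfo> Books { get; }
    void SaveChanges();
}

public class BooksBLL
{
    private readonly IObjectContext _context;

    // Tests pass a mocked/in-memory IObjectContext; production passes the real one.
    public BooksBLL(IObjectContext context)
    {
        _context = context;
    }

    // IEnumerable<> instead of List<> keeps callers off the concrete type.
    public IEnumerable<BookInfo> ResolveSearch(Func<BookInfo, bool> filter)
    {
        return _context.Books.Where(filter).ToList();
    }
}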
You don't need to do anything in particular... as Mark said, there is no "right" answer. However, if your application is simple enough that you would simply be duplicating your classes (e.g., BookInfoUI and BookInfoBLL), then I'd recommend just using the business classes. The extra layer wouldn't serve a purpose, so it shouldn't exist. Eric Evans in DDD even recommends putting all your logic in the UI layer if your app is simple and has very little business logic.
To make the distinction, the application layer should have classes that model what happens within the application, and the domain layer should have classes that model what happens in the domain. For example, if you have a search page, your UI layer might retrieve a list of BookSearchResult objects from a BookSearchService in the application layer, which would use the domain to pull a list of BookInfo.
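To sketch that flow (all names hypothetical, following the book example):

using System.Collections.Generic;
using System.Linq;

// Domain object.
public class BookInfo
{
    public string Title { get; set; }
    public int Pages { get; set; }
}

// Application-layer result shaped for the search page, not for the domain.
public class BookSearchResult
{
    public string Title { get; set; }
}

public class BookSearchService
{
    private readonly IEnumerable<BookInfo> _books; // stand-in for a domain query

    public BookSearchService(IEnumerable<BookInfo> books)
    {
        _books = books;
    }

    // Pulls domain objects and maps them into what the UI actually needs.
    public List<BookSearchResult> Search(string term)
    {
        return _books
            .Where(b => b.Title.Contains(term))
            .Select(b => new BookSearchResult { Title = b.Title })
            .ToList();
    }
}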
Answers to your questions may depend on the size and complexity of your application. So I am afraid there will be valid arguments to answer your questions with Yes and No as well.
Personally I will answer your two main questions both with Yes:
Is it acceptable practice to use POCO (Domain) classes in the UI layer?
I assume by "UI layer" you don't actually mean the View part of the MVVM pattern but the ViewModels. (Most MVVM specialists would argue against letting a View reference the Model directly at all, I believe.)
It is not unusual to wrap a POCO from your Domain project as a property in a ViewModel and bind that wrapped POCO directly to the View. The big pro is that it's easy: you don't need additional ViewModel classes or replicated properties in a ViewModel, nor the copying of those properties between objects.
However, if you are using WPF, you must take into account that the binding engine will write directly into your POCO properties when you bind them to a View. This might not always be what you want, especially if you are working with attached and change-tracked entities in a WPF form. You have to think about cancellation scenarios, and about how to restore properties that the binding engine has changed once a cancellation happens.
In my current project I am working with detached entities: I load the POCO from the data layer, detach it from the context, dispose the context, and then work with that copy in the ViewModel and bind it to the View. Updating in the data layer happens by creating a new context, loading the original entity from the DB by ID, and then updating its properties from the changed POCO that was bound to the View. So the problem of unwanted changes to an attached entity disappears with this approach. But there are also downsides to working with detached entities (updating is more complex, for instance).
Is it okay if I implement the IDataErrorInfo interface on the generated POCO class?
If you bind your POCO entities to a View (through a wrapping ViewModel), it is not only OK, you actually must implement IDataErrorInfo on the POCO class if you want to leverage the built-in property validation of the WPF binding engine. Although this interface is mainly used together with UI technologies, it is part of the System.ComponentModel namespace and therefore not directly tied to any UI namespaces. Basically, IDataErrorInfo is just a simple contract that supports reporting the object's state, which might also be useful outside a UI context.
The same is true for the INotifyPropertyChanged interface which you also would need to implement on your POCO classes if you bind them directly to a View.
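A minimal sketch of a POCO carrying both interfaces; the Title property and its rule are invented here, and with generated classes this would typically go into a partial class or the T4 template:

using System.ComponentModel;

public partial class BookInfo : IDataErrorInfo, INotifyPropertyChanged
{
    private string _title;

    public string Title
    {
        get { return _title; }
        set { _title = value; OnPropertyChanged("Title"); }
    }

    // INotifyPropertyChanged: lets WPF refresh bindings when values change.
    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }

    // IDataErrorInfo: WPF queries this indexer per bound property.
    public string this[string columnName]
    {
        get
        {
            if (columnName == "Title" && string.IsNullOrEmpty(Title))
                return "Title is required.";
            return null;
        }
    }

    public string Error
    {
        get { return null; } // entity-level error, unused in this sketch
    }
}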
I often see opinions that disagree with mine for several architectural reasons, but none of them argue that another approach is easier. If you strictly want to avoid having POCO model classes in your ViewModel layer, you need to add another mapping layer, with additional complexity and programming and maintenance effort. So I would vote: keep it simple as long as you do not have a convincing reason and a clear benefit to making your architecture more complex.
Something on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will not be persistence-aware and will not have any dependencies on any data access layer.
However, using this approach, how would one object in the domain model be able to call other objects without being able to load them via persistence, thus coupling everything together, which is what I'd be trying to avoid?
E.g., an Order object would need to check an Inventory object, and would obviously need to tell the Inventory object to load in some way, or populate it somehow.
Any thoughts?
You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.
One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.
To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.
Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data-access service can just depend on the interface via constructor arguments. Then you can leverage a dependency-injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
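A small sketch of that shape, using hypothetical Order/Inventory names echoing the question:

// POCO and interface both defined in the domain model layer.
public class InventoryItem
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public interface IInventoryRepository
{
    InventoryItem GetBySku(string sku);
}

// The domain class depends only on the interface; the data layer supplies
// the implementation through the dependency-injection container.
public class Order
{
    private readonly IInventoryRepository _inventory;

    public Order(IInventoryRepository inventory)
    {
        _inventory = inventory;
    }

    public bool CanFulfil(string sku, int quantity)
    {
        var item = _inventory.GetBySku(sku);
        return item != null && item.Quantity >= quantity;
    }
}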
Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the Domain Model truly separate from the DAL? Yes, I'm serious, and you should think about this, because it is exceedingly rare for an RDBMS to actually be swapped out for a different one on an existing project. Frankly, it is much more common for the language the app was written in to be replaced than for the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely understand why they are Concerned with the Separation to begin with.
I bring these up because, more often than not, applications can benefit from tighter coupling to the underlying data model. Never mind that most ORMs almost enforce tight coupling due to the nature of code generation. I've seen lots of supposedly SoC projects come crashing down during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO.
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the database itself. At the same time there are cases where the BL needs to live in, or very near, your domain classes. With BL spread around like that, can you truly separate these two items anyway? Even those who hate the idea of putting BL in a database will fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.
You should take a look at Martin Fowler's Repository and Unit of Work patterns and use interfaces in your system.
So far I have seen that an application can be layered well into three layers, Presentation --> Logic --> Data, plus Entities (or Business Objects). For the Logic layer you can use a pattern such as Transaction Script or Domain Model; I'm supposing you're using the latter. The Domain Model can use a Data Mapper to interact with the data layer and create business objects, but you could also use the Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally I use Transaction Script because it is simpler than Domain Model.
One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, etc.
Moreover, it's often unnecessary. Especially for line-of-business applications that don't have onerous performance requirements, you can consider eagerly loading all the relevant data: just bring the inventory items up with the order.
I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an interface that your Data Access layer implements. Each of the DAL methods should take a list of parameters and return a DataTable.
The service layer would instantiate the DAL through the interface and pass that reference to your Domain Model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
interface IDBModel
{
    DataTable LoadUser(Int32 userId);
}

class MyDbModel : IDBModel
{
    // 'public' is required here: implicit interface implementations must be public.
    public DataTable LoadUser(Int32 userId)
    {
        // make the appropriate DB calls here and return the results
        return new DataTable(); // placeholder
    }
}

class User
{
    public User(IDBModel dbModel, Int32 userId)
    {
        DataTable data = dbModel.LoadUser(userId);
        // assign properties... load any additional data as necessary
    }

    // You can do cool things like call User.Save()
    // and have the object validate and save itself to the passed-in
    // data model. Makes for simpler coding.
}

class MyServiceLayer
{
    public User GetUser(Int32 userId)
    {
        IDBModel model = new MyDbModel();
        return new User(model, userId);
    }
}
With this mechanism, you can actually swap out your DB models on demand. For example, if you decide to support multiple databases, you can have code specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data and you can keep any necessary business logic within the domain model. Another point is that the Domain Model doesn't have a direct dependency on the data layer, which preserves your mocking ability for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.
I'm trying to determine how best to architect a .NET Entity Framework project to achieve a nice layered approach. So far I've tried it out in a browse-based game where the players own and operate planets. Here's how I've got it:
Web Site
This contains all the front end.
C# Project - MLS.Game.Data
This contains the EDMX file with all my data mappings. Not much else here.
C# Project - MLS.Game.Business
This contains various classes that I call 'Managers', such as PlanetManager.cs. The planet manager has various static methods used to interact with the planet, such as getPlanet(int planetID), which returns a generated-code object from MLS.Game.Data.
From the website, I'll do something like this:
var planet = PlanetManager.getPlanet(1);
It returns a Planet object from MLS.Game.Data (generated from the EDMX). It works, but it bothers me to a degree, because it means that my front end has to reference MLS.Game.Data. I've always felt that the GUI should only need to reference the Business project.
In addition, I've found that my Manager classes tend to get very heavy. I'll end up with dozens of static methods in them.
So... my question is - how does everyone else lay out their ASP EF projects?
EDIT
After some more thought, there are additional items that bother me. For example, let's say I have my Planet object, which again is code generated by the wizard. What if a time came when my Planet needed a specialized property, say "Population", which is a calculation of some sort based on other properties of the Planet object? Would I want to create a new class that inherits from Planet and then return that instead? (Hmm, I wonder if those classes are sealed by EF?)
Thanks
You could try the following to improve things:
Use EF to fetch DTOs in your Data layer, then use these DTOs to populate richer business objects in your Business layer. Your UI would then only need to reference the Business layer.
Once you have created the rich business objects, you can begin to internalise some of the logic from the manager classes, effectively cleaning up the business layer.
I personally prefer the richer model over the manager model because, as you say, you end up with a load of static methods, which you inevitably end up chaining together inside other static methods. I find this too messy and, more importantly, it makes it harder to understand and to guarantee the consistency of your objects at any given point in time.
If you encapsulate the logic within the class itself you can be more certain of the state of your object regardless of the nature of the external caller.
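As a sketch of the difference, here is the rich-model version of a hypothetical Planet (all members invented):

using System;
using System.Collections.Generic;
using System.Linq;

public class City
{
    public int Population { get; set; }
}

// Rich-model style: the entity owns its collection and derives its own state,
// instead of a PlanetManager static method computing it from the outside.
public class Planet
{
    private readonly List<City> _cities = new List<City>();

    public IEnumerable<City> Cities
    {
        get { return _cities; }
    }

    public int Population
    {
        get { return _cities.Sum(c => c.Population); }
    }

    public void AddCity(City city)
    {
        if (city == null) throw new ArgumentNullException("city");
        _cities.Add(city);
    }
}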
A good question by the way.
IMHO, your current layout is fine. It's perfectly normal for your UI to reference what you are calling the 'Data' layer. I think your concern is perhaps arising from the terminology. What you have described as 'Data' is more often referred to as a 'business objects' (BOL) layer. A common layout would then be to have a business logic layer (BLL), which is your 'Managers' layer, and a data access layer (DAL). In your scenario, LINQ to Entities (presuming you will use that) is your DAL. A normal reference pattern would then be:
UI references BLL and BOL.
BLL references BOL and DAL (LINQ to Entities).
Have a look at this series of articles for more detail.
As for your second question (after the EDIT): if you need or want to add features to your EF objects, you can use partial classes. Right-click the EDMX file and select View Code.
Or, if this isn't enough for you, you can ditch the designer and write your own EF-enabled classes.
There is a (brief) discussion of both options here:
http://msdn.microsoft.com/en-us/library/bb738612.aspx
As for your second question in the "EDIT" section:
If I'm not mistaken, the classes generated by EF are not sealed, and they are PARTIAL classes, so you could easily extend those without touching the generated files themselves.
The generated class will be:
public partial class Planet : global::System.Data.Objects.DataClasses.EntityObject
{
    ...
}
so you can easily create your own "PlanetAddons.cs" (or whatever you want to call it) to extend this class:
public partial class Planet
{
    public int Population { get; set; }
    ...
}
Pretty neat, eh? No need to derive and create artificial object hierarchies...
Marc
I'm no expert, but that sounds pretty good to me. It's similar to what I have in my solutions, except I just merge the EF project with the business project. My solutions aren't that big, and my objects don't require a lot of intelligence, so it's fine for me. I too have a ton of different methods in each of my static helper classes.
If you don't want the presentation layer knowing about the data access layer, then you have to create some intermediary classes, which would probably be a lot of work. So what's the problem with your current setup?
Your layout looks ok.
I would have added a Utility/Common layer
Web UI
Business Layer
Dataobjects
Utilities layer
I would add DTOs to your Business layer that are "dumb object" representations (i.e., properties only) of the Entities in your data layer. Then your "Manager" classes can return them, such as:
class PlanetManager
{
    public static PlanetDTO GetPlanet(int id) { /* ... */ }
}
Your UI then deals only with the BLL via POCOs; the Manager (what I would call a "Mapper" class) handles all the translation between the objects and the data layer. Also, if you need to extend the class, you can have a "virtual" property on the DTO object and have the Manager translate that back to its components.
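A sketch of this arrangement (entity and DTO members invented for illustration):

// EF-generated entity (stand-in).
public class Planet
{
    public int Id { get; set; }
    public string Name { get; set; }
    public long Inhabitants { get; set; }
}

// "Dumb" DTO: properties only, safe to hand to the UI.
public class PlanetDTO
{
    public int Id { get; set; }
    public string Name { get; set; }
    public long Population { get; set; }
}

public static class PlanetMapper
{
    // The Manager/"Mapper" translates entity -> DTO so the UI never
    // references the data layer's types.
    public static PlanetDTO ToDto(Planet entity)
    {
        return new PlanetDTO
        {
            Id = entity.Id,
            Name = entity.Name,
            Population = entity.Inhabitants
        };
    }
}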