What does POCO really mean, with respect to dependencies?
With NHibernate, child collections are retrieved as NHibernate.Collection.Generic.PersistentGenericBag<>. This is what I mean here by "dependencies": if I try to save/update an object graph, the DAL will already have its "opinion" about what I'm trying to persist and how.
Initially, I thought that requesting a POCO would carry no dependencies on the DAL, repository, or ORM (I'm unsure which is the correct term in this context). But now I'm confused, as I'm thinking maybe it just means that the POCO class has no persistence methods, and that retrieving a POCO object graph may still carry such dependencies?
So when you talk about POCO, what do you really mean? Can a POCO have these kinds of dependencies? And if it both may and may not, how do you distinguish the two by name?
A POCO that "has no such dependencies" seems more like a DTO in some respects, but it can have behavior, so it's not a DTO after all.
Also, just to be 100% sure: I assume a DTO would be persistence ignorant AND have "no dependencies"?
Maybe "dependencies" is not the proper word to use, so in case correct me. I hope my question is still comprehensible.
EDIT1:
With some further thinking: maybe my assumption that the ...PersistentGenericBag brought some "dependencies" with it is wrong? Probably it's just a type, and nothing more magical. And further: the only dependencies the objects have on NH are via the ISessions, which of course we have control over. Does that make sense?
POCOs are classes that do not have any dependencies on frameworks or other infrastructure classes. Well, NHibernate DOES use the PersistentGenericBag, but your POCO will only reference an IList.
For your POCO it doesn't matter whether that instance is a List, a ReadOnlyCollection, or a PersistentGenericBag; the POCO treats it as an IList, and whatever extra behaviour the concrete type has is not the POCO's concern.
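To make that concrete, here is a minimal sketch (invented names, not from the original post): the entity declares an IList<OrderLine>, and at runtime NHibernate may populate it with a PersistentGenericBag<OrderLine>, but no NHibernate type ever appears in the class itself.

    using System.Collections.Generic;

    public class Order
    {
        public virtual int Id { get; set; }

        // NHibernate can substitute its own IList<T> implementation here;
        // the entity neither knows nor cares which concrete type it gets.
        public virtual IList<OrderLine> Lines { get; set; }

        public Order()
        {
            Lines = new List<OrderLine>();
        }
    }

    public class OrderLine
    {
        public virtual int Id { get; set; }
        public virtual decimal Amount { get; set; }
    }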
By the way, if you're mapping your domain objects with annotations, you now have a clear dependency on the ORM.
Having no dependencies at all in your objects with regard to your DAL is quite utopian.
However, the way NHibernate has solved it, comes quite close IMHO.
IMHO, the term POCO means that your entities (domain objects) should not have to inherit from a certain base class, or implement some interface in order for your DAL to work.
This is the case with NHibernate. However, NHibernate does indeed require some extra classes for collections (like the Iesi.Collections Set class), but this is mostly because the .NET Framework didn't have a 'Set' class at the time.
NHibernate uses its own collection classes, but in most cases you - the developer - are not troubled by that.
When following Domain-Driven Design principles, your entities can be POCOs; however, your entities are certainly not just DTOs. An entity should be a representation of what that entity looks like in the real world, with data and behaviour.
A DTO should indeed be persistence ignorant, since it is an object that can be used to transfer data between layers, and neither of the two layers necessarily has to be your DAL. You can use a DTO, for instance, to transfer data from your business layer to your view layer.
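A short illustrative contrast (invented names): the DTO is pure data for crossing layer boundaries, while the entity couples data with behaviour.

    using System;

    // DTO: plain data, no behaviour, no persistence knowledge.
    public class InvoiceDto
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // Entity: data plus the behaviour that belongs to it.
    public class Invoice
    {
        public int Id { get; set; }
        public decimal Total { get; private set; }

        public void AddCharge(decimal amount)
        {
            if (amount <= 0)
                throw new ArgumentException("Amount must be positive.");
            Total += amount;
        }
    }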
Related
I have my POCO objects in a separate class library, with each one implementing interfaces etc., which I believe is perfectly fine to do with a POCO object.
I read all over the place that it is bad practice to add Entity Framework attributes to your POCO objects, so instead I use the fluent API. Again, I believe this to be correct.
I gather that applying a custom attribute of your own to a POCO object is not bad practice, is it? I guess it's just like having the object implement an interface. Or have I got it all completely wrong?
Using the code-first approach.
POCO objects have no dependencies on external libraries. Such dependencies can come from the need to use external attributes or to derive from an external class/interface.
So, nothing is wrong with attributes in themselves; it's more about keeping your models clear of external dependencies. If your models depend on external libraries, you have to pull those libraries along with your models every time: e.g. when you decide to move your models into a separate project, or when you decide to write unit tests that touch your models, etc.
I agree with Ben Robinson that it is a design choice, but if you have the chance to use clean models, I think you should take it (with EF you have that chance). And if you were talking about your own custom attributes, then it's fine regardless.
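For example, with EF code-first the mapping can live in OnModelCreating, keeping attributes off the model entirely; the Customer/ShopContext names here are made up for illustration.

    using System.Data.Entity;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }   // no [Required]/[MaxLength] here
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Mapping rules live here, so Customer stays a clean POCO.
            modelBuilder.Entity<Customer>()
                .Property(c => c.Name)
                .IsRequired()
                .HasMaxLength(100);
        }
    }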
I want to create an interface or base class (not sure I want to go this route) for all my business entities. For each business entity I need the following:
Id - primary key of the entity
Type - type of the entity, e.g. User, just a string
Name - name of the entity, e.g. John Doe
Description - short description of the entity, e.g. Senior Programmer
CreatedDate - date the entity was created
ModifiedDate - date the entity was modified
All classes support a single primary key.
Most of my classes have these fields, though in most cases, the primary key would be something like UserId.
One of the reasons I want to create some commonality in my business entities is that I want to implement a search function that returns a list of IEntity (or Entity, if leveraging inheritance) objects.
My questions are ...
Is it more correct to leverage an interface as opposed to a base class?
If I do create this as an interface, should I keep the property names simple, e.g. Id and Name, which would minimize the property implementations I have to code, OR is it better to prefix each property name with "Entity" so it's easier to work with the business entity, e.g. MyEntity.EntityId versus MyEntity.Id?
I realize this could be considered subjective, but I really need to get some guidance on this, so any ideas to make this less subjective would be much appreciated.
Thanks in advance!
In my opinion...
If your classes are going to share a common implementation of some of their methods, then a base class makes more sense, because you can't put an implementation inside an interface; if you used an interface, you'd end up with the same common implementation duplicated across multiple classes instead of living in a single base class.
I think appending "Entity" to each property name is pointless. You already imply that it's an entity property by the name of the entity object or its underlying type. I say avoid redundancy and keep it simple.
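As a sketch of the base-class option (illustrative names only), with the common fields in one place and the key kept as plain Id:

    using System;

    public abstract class Entity
    {
        public int Id { get; set; }
        public string Type { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public DateTime CreatedDate { get; set; }
        public DateTime ModifiedDate { get; set; }
    }

    public class User : Entity
    {
        // Only User-specific members go here; the common fields come from Entity.
        public string Email { get; set; }
    }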
In my opinion, if you want many objects to have this functionality, you should avoid base-class inheritance at all costs. Once you decide that you're going to inherit all of the classes in your project from a certain base class, it's hard to go back. Remember, C# only allows single inheritance.
A better solution might be to implement an interface that lets classes expose the properties they have to anyone interested in that data.
Another reason to avoid a base class is that it's going to be harder to unit test, if you're interested in that. It's also going to be hard to change custom behaviors without affecting many areas of your application.
In short, what you can do is have the objects you have clearly recognized as needing that interface implement it, and have another manager-type class ask those classes for that information, acting as the adapter or gateway between your modular, single-purpose objects and a database (or something like that); a sketch follows below.
Hope I've made myself clear enough.
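A sketch of this interface-plus-manager approach, with invented names; note it also supports the search scenario from the question, returning the common IEntity view:

    using System.Collections.Generic;
    using System.Linq;

    public interface IEntity
    {
        int Id { get; }
        string Name { get; }
        string Description { get; }
    }

    public class User : IEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public string Email { get; set; }   // User-specific member
    }

    public class EntitySearcher
    {
        // The manager-type class works against the interface only,
        // so it can search any mix of entity types.
        public IEnumerable<IEntity> Search(IEnumerable<IEntity> source, string term)
        {
            return source.Where(e => e.Name != null && e.Name.Contains(term));
        }
    }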
Consider whether it would be better to keep the business data as isolated classes in your data access layer, and provide a common wrapper in your presentation layer that provides the common feature set you're thinking about. Maybe your solution isn't complicated enough to warrant a fully tiered architecture (I'm sure quite a few people would disagree), but I feel that making your application tiered is a good approach. This way the data access classes stay separate, avoiding the conundrum altogether at that tier, and the presentation class(es) only expose the functionality you actually need, while taking on whatever inheritance regime you choose. My reasoning is that considering the problem this way might make the decision easier.
I'm implementing a DAL using Entity Framework. In our application we have three layers (DAL, business layer, and presentation). This is a web app. When we began implementing the DAL, our team thought that the DAL should have classes whose methods receive an ObjectContext given by services in the business layer and operate on it. The rationale behind this decision is that different ObjectContexts see different DB states, so some operations can be rejected due to foreign-key mismatches and other inconsistencies.
We noticed that generating an ObjectContext in the services layer and propagating it downward creates high coupling between layers. Therefore we decided to use DTOs mapped by AutoMapper (not unmanaged entities or self-tracking entities, on the grounds of high coupling, exposing entities to upper layers, and low efficiency) together with the Unit of Work pattern. So, here are my questions:
Is this the correct approach to design a web application's DAL? Why?
If you answered "yes" to 1., how is this to be reconciled the concept of DTO with the UnitOfWork patterns?
If you answered "no" to 1., which could be a correct approach to design a DAL for a Web application?
Please, if possible give bibliography supporting your answer.
About the current design:
The application has been planned to be developed in three layers: presentation, business, and DAL. The business layer has both facades and services.
There is an interface called ITransaction (with only two methods, to dispose and to save changes) visible only to the services. To manage a transaction, there is a class Transaction that extends ObjectContext and implements ITransaction. We designed it this way because, at the business layer, we do not want other ObjectContext methods to be accessible.
In the DAL, we created an abstract repository that takes two generic types (one for the entity and the other for its associated DTO). This repository has CRUD methods implemented generically, plus two generic methods that use AutoMapper to map between the repository's DTOs and entities. The abstract repository's constructor takes an ITransaction as an argument and expects it to be an ObjectContext, so that it can assign it to its protected ObjectContext property.
The concrete repositories should only receive and return .NET types and DTOs.
We are now facing this problem: the generic create method does not generate a temporary or persistent id for the attached entities until we call SaveChanges(), which breaks the transactionality we want; this implies that service methods cannot use it to associate DTOs in the BL.
There are a number of things going on here. The assumption I'll make is that you're using a 3-tier architecture. That said, I'm unclear on a few design decisions you've made and what the motivations behind them were. In general, I would say that your ObjectContext should not be passed around between your classes. There should be some sort of manager or repository class which handles the connection management. This solves your DB state management issue. I find that a Repository pattern works really well here. From there, you should be able to implement the Unit of Work pattern fairly easily, since your connection management will be handled in one place.

Given what I know about your architecture, I would say that you should be using a POCO strategy. Using POCOs does not tightly couple you to any ORM provider. The advantage is that your POCOs will be able to interact with your ObjectContext (probably via a Repository of some sort), and this will give you visibility into change tracking. Again, from there you will be able to implement the Unit of Work (transaction) pattern to give you full control over how your business transactions should behave.

I find this an incredibly useful article for explaining how all this fits together (the code is buggy but accurately illustrates best practices for the type of architecture you're describing): Repository, Specification and Unit of Work Implementation
The short version of my answer to question number 1 is "no". The above link provides what I believe to be a better approach for you.
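As a rough sketch of that direction (not the linked article's code; type names invented, assuming EF4's ObjectContext): the context lives behind a repository and a unit of work, so it is never passed around by callers.

    using System;
    using System.Data.Objects;
    using System.Linq;

    public interface IUnitOfWork : IDisposable
    {
        void Commit();   // SaveChanges under the hood
    }

    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Remove(T entity);
        IQueryable<T> Query();
    }

    public class EfUnitOfWork : IUnitOfWork
    {
        private readonly ObjectContext _context;
        public EfUnitOfWork(ObjectContext context) { _context = context; }

        public void Commit() { _context.SaveChanges(); }
        public void Dispose() { _context.Dispose(); }
    }

    public class EfRepository<T> : IRepository<T> where T : class
    {
        private readonly ObjectSet<T> _set;
        public EfRepository(ObjectContext context) { _set = context.CreateObjectSet<T>(); }

        public void Add(T entity)    { _set.AddObject(entity); }
        public void Remove(T entity) { _set.DeleteObject(entity); }
        public IQueryable<T> Query() { return _set; }
    }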
I have always believed that, for programmers, code can explain things better than words, and this is especially true for this topic. That's why I suggest you look at a great sample application in which all the concepts you're asking about are implemented.
The project is called Sharp Architecture. It is centered around MVC and NHibernate, but you can use the same approaches, just replacing the NHibernate parts with EF ones where you need to. The purpose of this project is to provide an application template embodying the community's best practices for building web applications.
It covers all the common and most of the uncommon topics that come up when using ORMs: managing transactions, managing dependencies with IoC containers, use of DTOs, etc.
And here is a sample application.
I urge you to read about it and try it; it was a real treasure for me, and it will be for you too.
You should take a look at what dependency injection and inversion of control mean in general. That would give you the ability to control the life cycle of the ObjectContext "from outside": you could ensure that only one instance of the object context is used per HTTP request. To avoid managing dependencies manually, I would recommend using StructureMap as a container.
Another useful (but quite tricky and hard to get right) technique is abstraction of persistence. Instead of using the ObjectContext directly, you would use a so-called Repository, which is responsible for providing a collection-like API for your data store. This provides a useful seam which you can use to switch the underlying data storage mechanism, or to mock out persistence completely for tests.
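To illustrate that seam: the same collection-like interface can be backed by Entity Framework in production and by a plain list in tests. The IRepository<T> shape below is an invented example (repeated here so the sketch stands alone):

    using System.Collections.Generic;
    using System.Linq;

    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Remove(T entity);
        IQueryable<T> Query();
    }

    // A test double: no database, no context, same contract.
    public class InMemoryRepository<T> : IRepository<T> where T : class
    {
        private readonly List<T> _items = new List<T>();

        public void Add(T entity)    { _items.Add(entity); }
        public void Remove(T entity) { _items.Remove(entity); }
        public IQueryable<T> Query() { return _items.AsQueryable(); }
    }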
As Jason already suggested, you should also use POCOs (plain old CLR objects). Even though there would still be implicit coupling with Entity Framework that you should be aware of, it's much better than using generated classes.
Things you might not find elsewhere fast enough:
Try to avoid the Unit of Work pattern; your model should define the transactional boundaries.
Try to avoid generic repositories (note the point about IQueryable too).
It's not mandatory to spam your code with the repository pattern name.
Also, you might enjoy reading about domain-driven design. It helps to deal with complex business logic and gives great guidelines for making code less procedural and more object oriented.
I'll focus on your current issues: to be honest, I don't think you should be passing around your ObjectContext. I think that is going to lead to problems. I'm assuming that a controller or a business service will be passing the ObjectContext/ITransaction to the repository. How will you ensure that your ObjectContext is disposed of properly downstream? What happens when you use nested transactions? What manages the rollbacks for transactions downstream?
I think your best bet lies in putting some more definition around how you expect to manage transactions in your architecture. Using TransactionScope in your controller/service is a good start, since the ObjectContext respects it. Of course, you may need to take into account that controllers/services may call other controllers/services which have transactions of their own. To allow for scenarios where you want full control over your business transactions and the subsequent database calls, you'll need to create some sort of TransactionManager class which enlists and generally manages transactions up and down your stack. I've found that NCommon does an extraordinary job at both abstracting and managing transactions; take a look at the UnitOfWorkScope and TransactionManager classes in there. Although I disagree with NCommon's approach of forcing the Repository to rely on the UnitOfWork, that could easily be refactored out if you wanted.
As far as your persistent-ID issue goes, check this out
One advantage that comes to my mind: if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, as long as both support POCO.
Having an ORM with no POCO support (where mappings are done with attributes, like the DataObjects.Net ORM) is not an issue for me, because even with POCO-supporting ORMs and their generated proxy entities, you have to be aware that the entities are actually DAO objects bound to some context/session; serialization is a problem, for example.
POCO is all about loose coupling and testability.
So when you are doing POCO, you can test your domain model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP.
The third advantage I can see is that with POCO your domain model is more able to evolve and more flexible. You can add new features more easily than if it was coupled to the persistence.
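A sketch of that isolation (NUnit, with invented names): the test instantiates the POCO directly, with no session, context, or mapping anywhere in sight.

    using NUnit.Framework;

    // A POCO entity: pure C#, no ORM references.
    public class Invoice
    {
        public decimal Total { get; private set; }
        public void AddCharge(decimal amount) { Total += amount; }
    }

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void AddCharge_IncreasesTotal()
        {
            var invoice = new Invoice();   // plain "new", no session or context needed
            invoice.AddCharge(100m);
            Assert.AreEqual(100m, invoice.Total);
        }
    }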
I use POCO when I'm doing DDD, for example, but for some kinds of applications you don't need DDD (say, small data-driven applications), so the concerns are not the same.
Hope this helps
None. Period. All the advantages people like throwing around are advantages that are not important in the big picture. I much prefer a strong base class for entity objects that actually holds a lot of integrated code (like raising property-change events when properties change) to writing all that stuff myself. Note that I DID write a (at that time commercially available) ORM for .NET before LINQ or ObjectSpaces even existed. I've used O/R mappers for something like 15 years now, and I have never found a case where POCO was really worth the possible trouble.
That said, attributes MAY be bad for other reasons. I rather prefer the Fluent NHibernate approach these days, having started my own (now retired) mapper with attributes and then moved to XML-based files.
The "POCO gets me nothing" theme mostly comes from the point that Entities ARE SIMPLY NOT NORMAL OBJECTS. They have a lot of additional functionality as well as limitations (like query speed etc.) that the user should please be aware of anyway. ORM's, despite LINQ, are not replacable anyway - noit if you start using their really interesting higher features. So, at the end you get POCO and still are suck with a base class and different semantics left and right.
I find that most proponents of POCO (as in "must have", not "would be nice") normally have NOT thought their arguments through to the end. You get all kinds of pretty crappy arguments, pretty much on the level of "stored procedures are faster than dynamic SQL", stuff that simply does not hold true. Things like:
"I want to have them in cases where they do not need saving ot the database" (use a separate object pool, never commit),
"I may want to have my own functionality in a base class (the ORM should allos abstract entity classed without functionality, so put your OWN base class below the one of the ORM)
"I may want to replace the ORM with another one" (so never use any higher functionality, hope the ORM API is compatible and then you STILL may have to rewrite large parts).
In general, POCO people also overlook the huge amount of work it actually takes to get this RIGHT: with things like transactional object updates, there is a TON of code in the base class. Some of the .NET interfaces are horrific to implement at a POCO level, though a lot easier if you can tie into the ORM.
Take the post of Thomas Jaskula here:
"POCO is all about loose coupling and testability."
That assumes you can't test data binding without it? Testability is mock-framework territory, and there are REALLY powerful mock frameworks that can even "redirect" method calls.
"So when you are doing POCO, you can test your domain model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain."
Actually not true. Persistence should be part of any domain-model test, as the domain model is there to be persisted. You can always test non-persistent scenarios by simply not committing the changes, but a lot of the tests will involve persistence and its failure (e.g. invoices with invalid or missing data are not valid to be written to disk).
"Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP."
Actually, no. A proper domain model will never have persistence methods on the entities; that's a crappy ORM to start with (user.Save()). OTOH the base class will do things like validation (IDataErrorInfo) and handle property-update events on persistent fields, and in general save you a ton of time.
As I said before, some of the functionality you SHOULD have is really hard to implement with plain fields as the data store, like the ability to put an entity into update mode, make some changes, then roll them back. Not needed? Tell that to Microsoft, who use it where available in their data grids (you can change some properties, then hit Escape to roll back the changes).
"The third advantage I can see is that with POCO your domain model is more able to evolve and more flexible. You can add new features more easily than if it was coupled to the persistence."
Non-argument. You cannot play around adding fields to a persisted class without handling the persistence, and you can add non-persistent features (methods) to a non-POCO class just the same as to a POCO class.
In general, my non-POCO base class did the following (a sketch in that style follows the list):
Handled property updates and IDataErrorInfo, without the user writing a line of code for the fields and items the ORM could handle.
Handled object status information (New, Updated, etc.). This is IMHO intrinsic information that is also pretty often pushed up to the user interface. Note that this is not a "save" method, but simply an EntityStatus property.
And it contained a number of overridable methods that the entity could use to extend the behavior WITHOUT implementing a (public) interface, so the methods were really private to the entity. It also had some more internal properties, such as access to the "object manager" responsible for the entity, which was also the place to ask for other entities (submit queries), which was sometimes needed.
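Purely illustrative (not the author's actual base class), a rough C# sketch of the first two list items: property-change plumbing plus an EntityStatus flag, with all names invented.

    using System.ComponentModel;

    public enum EntityStatus { New, Unchanged, Updated, Deleted }

    public abstract class EntityBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected EntityBase()
        {
            Status = EntityStatus.New;
        }

        // Not a "save" method; just intrinsic status information.
        public EntityStatus Status { get; protected set; }

        // Derived entities call this from their property setters. The base
        // class raises the change event and flips the status, so the entity
        // itself writes no plumbing code.
        protected bool SetField<T>(ref T field, T value, string propertyName)
        {
            if (Equals(field, value)) return false;
            field = value;
            if (Status == EntityStatus.Unchanged) Status = EntityStatus.Updated;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
            return true;
        }
    }

    public class Customer : EntityBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { SetField(ref _name, value, "Name"); }
        }
    }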
POCO support in an ORM is all about separation of concerns, following the Single Responsibility Principle. With POCO support, an ORM can talk directly to a domain model without the need to "muddy" the domain with data-access specific code. This ensures the domain model is designed to solve only domain-related problems and not data-access problems.
Aside from this, POCO support can make it easier to test the behaviour of objects in isolation, without the need for a database, mapping information, or even references to the ORM assemblies. The ability to have "stand-alone" objects can make development significantly easier, because the objects are simple to instantiate and easy to predict.
Additionally, because POCO objects are not tied to a data-source, you can treat them the same, regardless of whether they have been loaded from your primary database, an alternative database, a flat file, or any other process. Although this may not seem immediately beneficial, treating your objects the same regardless of source can make behaviour easy to predict and to work with.
I chose NHibernate for my most recent ORM because of the support for POCO objects, something it handles very well. It suits the Domain-Driven Design approach the project follows and has enabled great separation between the database and the domain.
Being able to switch ORM tools is not a real argument for POCO support. Although your classes may not have any direct dependencies on the ORM, their behaviour and shape will be restricted by the ORM tool and the database it is mapping to. Changing your ORM is as significant a change as changing your database provider. There will always be features in one ORM that are not available in another and your domain classes will reflect the availability or absence of features.
In NHibernate, you are required to mark all public or protected class members as virtual to enable support for lazy-loading. This restriction, though not significantly changing my domain layer, has had an impact on its design.
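To make that requirement concrete, a minimal sketch (invented names): NHibernate generates a runtime proxy subclass for lazy loading, and it can only intercept members it is able to override.

    public class Product
    {
        public virtual int Id { get; protected set; }
        public virtual string Name { get; set; }

        // Even behavior must be virtual, or the proxy cannot intercept
        // the call and trigger lazy initialization first.
        public virtual void Rename(string newName)
        {
            Name = newName;
        }
    }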
Good people of SO,
Today I have some serious concerns about my business layer design.
It is based on Entity POCO objects, and I want to add logic to these entities. BUT there are two types of logic:
Pure C# logic
Persistence logic (LinqToEntities in my case)
My question is simple:
How should I separate these two kinds?
First, I was thinking about adding both kinds as methods on the entities, using partial classes to split them.
Second, I thought that I wouldn't want an overweight object with a LOT of methods.
So why not use static classes or a singleton with methods doing the LinqToEntities stuff, and leave the pure C# in entity methods?
Then I would have several classes, grouped by functionality, providing the logic, with the entity passed as an argument to the classes' methods.
It really bothers me, because the second solution seems cleaner but it looks like it breaks the object-oriented paradigm. On the other hand the first one seems like an anti-pattern.
What do you think ? Do you have a bright solution solving this paradox ?
Schizophrenic edit: in fact, what I call persistence logic should go in the DAL, and the pure C# logic in the BLL. POCO entities are produced by the DAL. I can then extend these entities in my BLL to add methods. In my DAL, I should structure the logic as described in the second solution.
The logic that describes how an entity should be saved/loaded doesn't belong to the entity itself; it's more likely the role of a persistence service, a data access object, etc.
I would leave the object-specific logic in the object (we're talking here about the object's behavior) and create a service that handles persistence concerns for that object type.
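A minimal sketch of that split, with invented names and assuming an EF code-first context: the pure C# rule stays on the entity, and the LINQ to Entities query lives in a separate persistence service that the BLL calls.

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }

        // Pure C# logic: no persistence knowledge here.
        public bool QualifiesForDiscount()
        {
            return Total > 500m;
        }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public class OrderPersistenceService
    {
        private readonly ShopContext _context;

        public OrderPersistenceService(ShopContext context)
        {
            _context = context;
        }

        // Persistence logic: LinqToEntities queries live here, not on the entity.
        public List<Order> FindDiscountCandidates()
        {
            return _context.Orders.Where(o => o.Total > 500m).ToList();
        }
    }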