I have my POCO objects in a separate class library, with each one implementing interfaces etc., which I believe is perfectly fine to do with a POCO object.
I have read all over the place that it is bad practice to add Entity Framework attributes to your POCO objects, so instead I use the fluent API. Again, I believe this to be correct.
I gather that applying a custom attribute to a POCO object is not bad practice, is it? I guess it's just like having the object implement an interface. Or have I got it all completely wrong?
I'm using the code-first approach.
POCO objects have no dependencies on external libraries. Such dependencies can creep in through the need to use external attributes or to derive from an external class/interface.
So there is nothing wrong with attributes in themselves; it's more about keeping your models free of external dependencies. If your models depend on external libraries, you have to pull those libraries along with your models every time: e.g. when you decide to move your models into a separate project, or when you write unit tests that touch your models, etc.
I agree with Ben Robinson that it is a design choice, but if you have the chance to use clean models I think you should take it (with EF you have that chance). And if you were talking about your own custom attributes, then it's fine either way.
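For illustration, here is a minimal sketch of that separation, assuming EF 6-style code first; the Product and ShopContext names are hypothetical. The model stays a plain class, and every mapping rule lives in the context:

    using System.Data.Entity;

    // A plain POCO: no EF attributes, no EF base class.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // All persistence concerns live in the context instead.
    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // The same rules attributes would express, kept out of the model.
            modelBuilder.Entity<Product>()
                .Property(p => p.Name)
                .IsRequired()
                .HasMaxLength(100);
        }
    }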
I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled; or, changing the name of an object (a Category, for example) needs to recursively update the corresponding property on all of its children (avoiding/ignoring these rules would result in invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
In order to do this, I would need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion myself, but honestly, if you don't program to abstractions it will become a pain when the project grows larger (a real pain).
Repositories also help you split the work between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
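A minimal sketch of what that could look like; the interface shape and the Upload helper here are hypothetical, not any particular library's API:

    using System.Linq;

    // Hypothetical domain type for the example.
    public class File
    {
        public string Name { get; set; }
        public byte[] Content { get; set; }
    }

    // A minimal generic repository abstraction.
    public interface IRepository<T> where T : class
    {
        IQueryable<T> Query();
        void Add(T entity);
    }

    public static class FileRepositoryExtensions
    {
        // Encapsulates one "job" on top of the abstraction; different
        // team members can add their own extensions without touching it.
        public static void Upload(this IRepository<File> repository, File file)
        {
            // ...validation could go here...
            repository.Add(file);
        }
    }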
You must be able to test each layer independently; tying the layers together means you can only do integration tests, with a lot of bugs hiding in the lower layers.
First of all, I think this question is really opinion-based.
According to the Big Book, domain models must be separated from data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This choice must be made based on the actual project. What is the size of the application?
The other big questions are: how many concurrent users hit the DB, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or it has educational purposes, then you should keep the domain and data access separated, and you should define the repository interfaces in the domain model. Use some DI component (personally I like Ninject), and do not reference the data access component from the services.
And of course you should also create test projects, using some mocking tool to test the layers separately.
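As a sketch of that wiring with Ninject (the repository interface and EF implementation names here are hypothetical):

    using Ninject;

    public interface ICategoryRepository { /* defined in the domain project */ }
    public class EfCategoryRepository : ICategoryRepository { /* lives in the data-access project */ }

    public static class CompositionRoot
    {
        public static ICategoryRepository ResolveRepository()
        {
            // Only the composition root references the data-access assembly;
            // services depend on the interface alone, and tests can Bind a mock.
            var kernel = new StandardKernel();
            kernel.Bind<ICategoryRepository>().To<EfCategoryRepository>();
            return kernel.Get<ICategoryRepository>();
        }
    }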
Yes, this is wrong. If you are following Domain-Driven Design, you should not compromise your architecture for the sake of doing less work. Your data access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern, as it will allow you more flexibility in the long run.
There is of course no single right answer as to what the right design is. I would, however, argue that EF already is your data-layer abstraction; you are not going to build anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks, as sketched below. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
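For instance, a minimal sketch of such extension methods (the Order type and filters are hypothetical):

    using System.Linq;

    // Hypothetical entity for the example.
    public class Order
    {
        public bool IsShipped { get; set; }
        public decimal Total { get; set; }
    }

    public static class OrderQueryExtensions
    {
        // Common filters live in one place and compose with any
        // IQueryable provider, so EF translates them to SQL as usual.
        public static IQueryable<Order> Unshipped(this IQueryable<Order> orders)
        {
            return orders.Where(o => !o.IsShipped);
        }

        public static IQueryable<Order> LargerThan(this IQueryable<Order> orders, decimal total)
        {
            return orders.Where(o => o.Total > total);
        }
    }

    // Usage: context.Orders.Unshipped().LargerThan(100m).ToList();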
Blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer
The problem is familiar - when marshalling user defined / domain types through service boundaries, do we simply annotate rich domain objects with [DataContract] attributes (thereby polluting the domain with ServiceModel constructs), or do we implement some sort of DTO process (creating extra work for arguably little benefit)?
How are people resolving this conflict? Are there other approaches that have fewer downsides?
If you're using the DTO approach, how do you go about implementing the transfer of property values from domain object to DTO?
Thanks
You have mostly answered your own question. If you want a very clear design, use DTOs. If you don't want to add an additional layer of complexity, either mark classes with DataContract/DataMember attributes or use the default serialization (only .NET 3.5 and newer), which takes all public properties (with both getter and setter); you can also remove properties from serialization by using the IgnoreDataMember attribute. To map domain objects to DTOs and DTOs back to domain objects you can use AutoMapper.
If you use DTOs (my suggestion), you can transfer information from DTOs to entities and vice versa using the Assembler pattern. You can do it manually or you can use tools; AutoMapper is a good suggestion.
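A minimal sketch of the assembler idea with AutoMapper (recent versions with the MapperConfiguration API; the Customer types are hypothetical):

    using AutoMapper;

    // Hypothetical entity and DTO for the example.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerAssembler
    {
        private static readonly IMapper Mapper =
            new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>())
                .CreateMapper();

        // The assembler is the single place that knows how to build the DTO;
        // AutoMapper matches the properties by name.
        public static CustomerDto ToDto(Customer customer)
        {
            return Mapper.Map<CustomerDto>(customer);
        }
    }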
This may be obvious, but I want to add to what Ladislav said. As he mentions, you can use POCO types, but you also have the flexibility to go beyond that and use IXmlSerializable, ISerializable, Serializable, and more; these other serialization models do not have the flexibility of being used with IgnoreDataMember.
See this blog post for more information. It also details how DataContractSerializer would prioritize two conflicting programming models on the same type.
What does POCO really mean, with respect to dependencies?
With NHibernate, child collections are retrieved as NHibernate.Collection.Generic.PersistentGenericBag<>. This is what I mean here by "dependencies": if I try to save/update an object graph, the DAL will already have its "opinion" about what I'm trying to persist and how.
Initially, I thought that requesting a POCO would carry no dependencies on the DAL, repository, or ORM (unsure what the correct term is in this perspective). But now I'm confused, as I'm thinking maybe it just means that the POCO class has no persistence methods, and that a retrieved POCO object graph may still carry such dependencies?
So when you talk about POCO, what do you really mean? Can a POCO have these kinds of dependencies, and if it may AND may not, how do you distinguish the two "by name"?
A POCO that "has no such dependencies" seems more like a DTO in some respects, but it can have behavior, so it's not a DTO after all.
Also, just to be 100% sure: I assume a DTO would be persistence-ignorant AND have "no dependencies"?
Maybe "dependencies" is not the proper word to use, so correct me in that case. I hope my question is still comprehensible.
EDIT1:
With some further thinking: maybe my assumption that the ...PersistentGenericBag brought "dependencies" with it is wrong(?). Probably it's just a type, and nothing more magical. And further: the only dependencies the objects have on NH are via the ISessions, which of course we have control over. Does that make sense?
POCOs are classes that do not have any dependencies on frameworks or other infrastructure classes. NHibernate DOES use the PersistentGenericBag, but your POCO will only reference an IList.
To your POCO, it doesn't matter whether that instance is a List, a ReadOnlyCollection or a PersistentGenericBag; it treats it as an IList, and whatever other behaviour the instance has is not the POCO's concern.
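For example, a sketch with hypothetical Parent/Child entities; the members are virtual only so NHibernate can proxy them, not because of any base class or interface:

    using System.Collections.Generic;

    public class Parent
    {
        public Parent()
        {
            Children = new List<Child>();
        }

        public virtual int Id { get; set; }

        // Declared as IList<Child>; at runtime NHibernate may assign a
        // PersistentGenericBag<Child> here, but this class never knows.
        public virtual IList<Child> Children { get; set; }
    }

    public class Child
    {
        public virtual int Id { get; set; }
    }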
By the way, if you're mapping your domain objects with annotations, you now have a clear dependency on the ORM.
Having no dependencies whatsoever from your objects to your 'DAL' is quite utopian.
However, the way NHibernate has solved it comes quite close, IMHO.
IMHO, the term POCO means that your entities (domain objects) should not have to inherit from a certain base class, or implement some interface, in order for your DAL to work.
This is the case with NHibernate. Indeed, NHibernate requires some extra classes for collections (like the Iesi.Collections Set class), but this is mostly because the .NET Framework didn't have a 'Set' class at that time.
NHibernate uses its own collection classes, but in most cases you, the developer, are not troubled by that.
When following Domain-Driven Design principles, your entities can be POCOs, but your entities are certainly not just DTOs. An entity should be a representation of what that entity looks like in the real world, with data and behaviour.
A DTO, on the other hand, should indeed be persistence-ignorant, since it is an object that can be used to transfer data between layers, and one of the two layers need not necessarily be your DAL. You can use a DTO, for instance, to transfer data from your business layer to your view layer.
I'm in the process of starting a new project and creating the business objects, data access, etc. I'm just using plain old CLR objects rather than any ORM. I've created two class libraries:
1) Business Objects - holds all my business objects; these objects are lightweight, with only properties and business rules.
2) Repository - this is for all my data access.
The majority of my objects will have child lists in them, and my question is: what is the best way to lazy-load these values, as I don't want to bring back unnecessary information if I don't need it?
I've thought about having the "get" of the child property check whether it is null and, if it is, call my repository to fetch the child information. This has two problems as far as I can see:
1) The object "knows" how to load itself; I would rather no data-access logic were held in the object.
2) This requires the two class libraries to reference each other, which in Visual Studio produces a circular-dependency error.
Does anyone have any suggestions on how to overcome this issue, or any recommendations on my project layout and where it can be improved?
Thanks
To do this requires that you program to interfaces (abstractions over implementations) and/or declare your properties virtual. Then your repository returns a proxy object for those properties that are to be loaded lazily. The class that calls the repository is none the wiser, but when it tries to access one of those properties, the proxy calls the database and loads up the values.
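Purely to illustrate the mechanics described here (the answer goes on to recommend not hand-rolling this), a sketch with hypothetical Customer/Order types:

    using System;
    using System.Collections.Generic;

    public class Order { }

    public class Customer
    {
        public virtual IList<Order> Orders { get; set; }
    }

    // The repository returns this subclass instead of Customer; the caller
    // is none the wiser until it first touches Orders, at which point the
    // delegate hits the database.
    public class LazyCustomerProxy : Customer
    {
        private readonly Func<IList<Order>> loadOrders;
        private IList<Order> orders;

        public LazyCustomerProxy(Func<IList<Order>> loadOrders)
        {
            this.loadOrders = loadOrders;
        }

        public override IList<Order> Orders
        {
            get { return orders ?? (orders = loadOrders()); }
            set { orders = value; }
        }
    }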
Frankly, I think it is madness to try to implement this oneself. There are great, time-tested solutions to this problem out there, that have been developed and refined by the greatest minds in .NET.
To do the proxying, you can use Castle DynamicProxy, or you can use NHibernate and let it handle all of the proxying and lazy loading for you (it uses DynamicProxy). You'll get better performance than out of any hand-rolled implementations, guaranteed.
NHibernate won't mess with your POCOs -- no attributes, no base classes; you only need to mark members virtual to allow proxy generation.
Simply put, I'd reconsider using an ORM, especially if you want that lazy loading; you don't have to give up your POCOs.
After looking into the answers provided and doing further research, I found an article that uses delegates for the lazy loading. This provides a simpler solution than using proxies or adopting NHibernate.
Here's the link to the article.
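A rough sketch of the delegate idea (hypothetical names; the linked article's actual implementation may differ):

    using System;
    using System.Collections.Generic;

    public class Product { }

    public class Category
    {
        private IList<Product> products;

        // The repository assigns this delegate when it materializes the
        // object, so the business object holds no reference to the
        // repository assembly and the circular dependency disappears.
        public Func<IList<Product>> LoadProducts { get; set; }

        public IList<Product> Products
        {
            get { return products ?? (products = LoadProducts()); }
        }
    }

    // In the repository: category.LoadProducts = () => GetProductsFor(category);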
If you are using Entity Framework 4.0, it has support for POCOs with deferred loading, and it will allow you to write a generic repository for your data access.
There are tons of articles online on the generic repository pattern with EF 4.0.
HTH.
You can get around the circular-dependency issue if your lazy-loading code loads the repository at runtime (Activator.CreateInstance or something similar) and then calls the appropriate method via reflection. There are, of course, performance penalties associated with reflection, but they often turn out to be insignificant in most solutions.
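A minimal sketch of that approach; all assembly, type and method names here are hypothetical:

    using System;
    using System.Reflection;

    // The business-object assembly knows the repository only as strings,
    // so there is no compile-time reference and no circular dependency.
    public static class RepositoryLoader
    {
        public static object LoadChildren(int parentId)
        {
            Assembly assembly = Assembly.Load("Project.Repository");
            Type type = assembly.GetType("Project.Repository.ChildRepository");
            object repository = Activator.CreateInstance(type);

            MethodInfo method = type.GetMethod("GetByParentId");
            return method.Invoke(repository, new object[] { parentId });
        }
    }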
Another way to solve this problem is to simply compile to a single DLL; you can still logically separate your layers using different namespaces, and still organise your classes by using different directories.
One advantage that comes to mind: if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, provided both support POCO.
Having an ORM with no POCO support (e.g. one where mappings are done with attributes, like the DataObjects.Net ORM) is not an issue for me, because even with POCO-supporting ORMs and their generated proxy entities, you have to be aware that the entities are actually DAO objects bound to some context/session; serializing them is a problem, for example.
POCO is all about loose coupling and testability.
So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
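A minimal sketch of what that isolation buys you, assuming NUnit and a hypothetical Invoice entity:

    using NUnit.Framework;

    // Hypothetical POCO entity: pure state and behaviour.
    public class Invoice
    {
        public decimal Total { get; private set; }

        public void AddLine(decimal unitPrice, int quantity)
        {
            Total += unitPrice * quantity;
        }
    }

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void AddLine_IncreasesTotal()
        {
            // No session, no context, no database to stub.
            var invoice = new Invoice();
            invoice.AddLine(10m, 3);

            Assert.AreEqual(30m, invoice.Total);
        }
    }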
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP.
The third advantage I can see is that with POCO your Domain Model is more evolvable and flexible. You can add new features more easily than if it were coupled to the persistence.
I use POCO when I'm doing DDD, for example, but for some kinds of application you don't need DDD (if you're doing small data-based applications), so the concerns are not the same.
Hope this helps
None. Period. All the advantages people like throwing around are advantages that are not important in the big picture. I rather prefer a strong base class for entity objects that actually holds a lot of integrated code (like raising property-change events when properties change) to writing all that stuff myself. Note that I DID write a (at the time commercially available) ORM for .NET before "LINQ" or "ObjectSpaces" even existed. I've used O/R mappers for something like 15 years now, and I have never found a case where POCO was really worth the possible trouble.
That said, attributes MAY be bad for other reasons. I rather prefer the Fluent NHibernate approach these days - I started my own (now retired) mapper with attributes, then moved to XML-based files.
The "POCO gets me nothing" theme mostly comes from the point that Entities ARE SIMPLY NOT NORMAL OBJECTS. They have a lot of additional functionality as well as limitations (like query speed etc.) that the user should please be aware of anyway. ORM's, despite LINQ, are not replacable anyway - noit if you start using their really interesting higher features. So, at the end you get POCO and still are suck with a base class and different semantics left and right.
I find that most proponents of POCO (as in: "must have", not "would be nice") normally have NOT thought their arguments to the real end. You get all kinds of pretty crappy thoughts, pretty much on the level of "stored procedures are faster than dynamic SQL" - stuff that simply does not hold true. Things like:
"I want to have them in cases where they do not need saving ot the database" (use a separate object pool, never commit),
"I may want to have my own functionality in a base class (the ORM should allos abstract entity classed without functionality, so put your OWN base class below the one of the ORM)
"I may want to replace the ORM with another one" (so never use any higher functionality, hope the ORM API is compatible and then you STILL may have to rewrite large parts).
In general POCO people also overlook the hugh amount of work that acutally is to make it RIGHT - with stuff like transactional object updates etc. there is a TON of code in the base class. Some of the .NET interfaces are horrific to implement on a POCO level, though a lot easier if you can tie into the ORM.
Take the post of Thomas Jaskula here:
"POCO is all about loose coupling and testability."
That assumes you can test data binding without having it? Testability is mock-framework stuff, and there are REALLY powerful ones that can even "redirect" method calls.
"So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain."
Actually not true. Persistence should be part of any domain model test, as the domain model is there to be persisted. You can always test non-persistent scenarios by just not committing the changes, but a lot of the tests will involve persistence and its failure modes (e.g. invoices with invalid/missing data are not valid to be written to disk).
"Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP."
Actually, no. A proper domain model will never have persistence methods in the entities - that is a crap ORM to start with (user.Save()). OTOH the base class will do things like validation (IDataErrorInfo), handle property-update events on persistent fields, and in general save you a ton of time.
As I said before, some of the functionality you SHOULD have is really hard to implement with plain variables as the data store - like the ability to put an entity into an update mode, make some changes, then roll them back. Not needed? Tell that to Microsoft, who use exactly that, when available, in their data grids (you can change some properties, then hit Escape to roll back the changes).
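What that describes maps to the BCL's IEditableObject contract, which the grids consume; a minimal sketch with a hypothetical Person entity:

    using System.ComponentModel;

    public class Person : IEditableObject
    {
        private string name;
        private string nameBackup;
        private bool editing;

        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        public void BeginEdit()
        {
            if (editing) return;
            nameBackup = name;   // snapshot the current state
            editing = true;
        }

        public void CancelEdit()
        {
            if (!editing) return;
            name = nameBackup;   // roll back to the snapshot (grid: Escape)
            editing = false;
        }

        public void EndEdit()
        {
            nameBackup = null;   // commit: discard the snapshot
            editing = false;
        }
    }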
"The third advantage I can see is that with POCO your Domain Model is more evolvable and flexible. You can add new features more easily than if it were coupled to the persistence."
Non-argument. You cannot play around adding fields to a persisted class without handling the persistence, and you can add non-persistent features (methods) to a non-POCO class just the same as to a POCO class.
In general, my non-POCO base class did the following:
Handle property updates and IDataErrorInfo - without the user writing a line of code for fields and items the ORM could handle.
Handle object status information (New, Updated etc.). This is IMHO intrinsic information that also is pretty often pushed down to the user interface. Note that this is not a "save" method, but simply an EntityStatus property.
And it contained a number of overridable methods that the entity could use to extend its behavior WITHOUT implementing a (public) interface - so the methods were really private to the entity. It also had some more internal properties, such as access to the "object manager" responsible for the entity, which was also the place to ask for other entities (submit queries), which was sometimes needed.
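A rough, heavily trimmed sketch of the shape of such a base class (all names here are hypothetical):

    using System.ComponentModel;

    public enum EntityStatus { New, Unchanged, Updated, Deleted }

    public abstract class EntityBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected EntityBase()
        {
            Status = EntityStatus.New;
        }

        // Intrinsic state information, not a Save() method.
        public EntityStatus Status { get; private set; }

        // Persistent fields funnel through here, so the base class raises
        // change events and tracks status without per-entity code.
        protected void SetField<T>(ref T field, T value, string propertyName)
        {
            if (Equals(field, value)) return;
            field = value;
            if (Status == EntityStatus.Unchanged) Status = EntityStatus.Updated;

            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }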
POCO support in an ORM is all about separation of concerns, following the Single Responsibility Principle. With POCO support, an ORM can talk directly to a domain model without the need to "muddy" the domain with data-access specific code. This ensures the domain model is designed to solve only domain-related problems and not data-access problems.
Aside from this, POCO support can make it easier to test the behaviour of objects in isolation, without the need for a database, mapping information, or even references to the ORM assemblies. The ability to have "stand-alone" objects can make development significantly easier, because the objects are simple to instantiate and easy to predict.
Additionally, because POCO objects are not tied to a data-source, you can treat them the same, regardless of whether they have been loaded from your primary database, an alternative database, a flat file, or any other process. Although this may not seem immediately beneficial, treating your objects the same regardless of source can make behaviour easy to predict and to work with.
I chose NHibernate for my most recent ORM because of the support for POCO objects, something it handles very well. It suits the Domain-Driven Design approach the project follows and has enabled great separation between the database and the domain.
Being able to switch ORM tools is not a real argument for POCO support. Although your classes may not have any direct dependencies on the ORM, their behaviour and shape will be restricted by the ORM tool and the database it is mapping to. Changing your ORM is as significant a change as changing your database provider. There will always be features in one ORM that are not available in another and your domain classes will reflect the availability or absence of features.
In NHibernate, you are required to mark all public or protected class members as virtual to enable support for lazy-loading. This restriction, though not significantly changing my domain layer, has had an impact on its design.