AOP, DataMappers, and Factories: can they work together? (C#)

Platform: C# 2.0
Using: Castle.DynamicProxy2
I have been struggling for about a week now trying to find a good strategy to rewrite my DAL. I tried NHibernate and, unfortunately, it was not a good fit for my project. So, I have come up with this interaction thus far:
I first start by registering my DTOs and my data mappers:
MetaDataMapper.RegisterTable(typeof(User));
MapperLocator.RegisterMapper(typeof(User), typeof(UserMapper));
This maps each DTO as it is registered, using custom attributes on the DTO's properties, essentially:
[Column(Name = "UserName")]
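For illustration, the attribute and DTO shape implied by that snippet might look roughly like this (a sketch in C# 2.0 style; the exact definition of ColumnAttribute is an assumption, not the poster's code):

using System;

// Custom mapping attribute, read via reflection at registration time (assumed shape).
[AttributeUsage(AttributeTargets.Property)]
public class ColumnAttribute : Attribute
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

public class User
{
    private string _userName;

    [Column(Name = "UserName")]
    public string UserName
    {
        get { return _userName; }
        set { _userName = value; }
    }
}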
I then have a Mapper that belongs to each DTO, so for this type it would be UserMapper. This data mapper handles calling my ADO.NET wrapper and then mapping the result to the DTO. However, I am now in the process of enabling deep loading, and subsequently lazy loading, and that is where I am stuck. Basically, my User DTO may have an Address object (FK) that requires another mapper to populate it, but I have to determine at run time that the AddressMapper is the one to use.
My problem is handling the types without having to walk through an explicit list of them (not to mention the headache of always having to keep that list updated) each time I need to determine which mapper to return. My solution so far is a MapperLocator class that I register with (as above) and that returns an IDataMapper interface, which all of my data mappers implement. I can then cast the result to UserMapper when I am dealing with User objects.
That is not so easy, though, when the type of data mapper has to be determined at run time. Since generics have to be resolved at compile time, using AOP and passing in the type at run time is not an option without reflection. I am already doing a fair bit of reflection when I map the DTOs to the table, reading attributes and such, and my MapperFactory uses reflection to instantiate the proper data mapper. So I am trying to do this without reflection, to keep those expensive calls down as much as possible.
I thought the solution could be found in passing around an interface, but I have not been able to make that idea work. Then I thought the answer might lie in delegates, but I have no idea how to implement that either. So... frankly... I am lost. Can you help, please?
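For what it's worth, the delegate idea can be sketched roughly like this (hypothetical shapes; IDataMapper, MapperLocator, and UserMapper are names from the question). Each mapper registers a factory delegate keyed by DTO type, so locating a mapper at run time is a dictionary lookup plus a delegate call, with no Activator or other reflection involved:

using System;
using System.Collections.Generic;

public interface IDataMapper { }              // stub for the question's interface

public delegate IDataMapper MapperFactory();  // C# 2.0-friendly factory delegate

public static class MapperLocator
{
    private static readonly Dictionary<Type, MapperFactory> Factories =
        new Dictionary<Type, MapperFactory>();

    public static void RegisterMapper(Type dtoType, MapperFactory factory)
    {
        Factories[dtoType] = factory;
    }

    public static IDataMapper LocateMapper(Type dtoType)
    {
        MapperFactory factory;
        if (!Factories.TryGetValue(dtoType, out factory))
            throw new InvalidOperationException("No mapper registered for " + dtoType.Name);
        return factory();  // invoke the cached delegate - no reflection here
    }
}

// Registration then passes a delegate instead of a Type:
// MapperLocator.RegisterMapper(typeof(User), delegate { return new UserMapper(); });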

I will suggest a couple of things.
1) Don't prematurely optimize. If you need to use reflection to instantiate your *Mappers, do it with reflection. Look at the headache you're causing yourself by not doing it that way. If you have problems later, then profile to see if there are faster ways of doing it.
2) My question to you would be: why are you trying to implement your own DAL framework? You say that NHibernate isn't a good fit, but you don't elaborate on why. Have you tried any of the dozens of other ORMs? What are your criteria? Your posted code looks remarkably like Linq2Sql's mapping.
Lightspeed and SubSonic are both great lightweight ORM packages. Linq2Sql is an easy-to-use mapper, and of course there's Microsoft's Entity Framework, which has a whole team at Microsoft dealing with the problems you're describing.
You might save yourself a lot of time, especially maintenance-wise, by looking at these rather than implementing it yourself. I would highly recommend any of those I mentioned.

Related

When using protobuf-net, how do I know what fields will be updated (or have been updated) when using merge on an existing object

Using Protobuf-net, I want to know what properties of an object have been updated at the end of a merge operation so that I can notify interested code to update other components that may relate to those updated properties.
I noticed that there are a few different types of properties/methods I can add which will help me serialize selectively (Specified and ShouldSerialize). I noticed in MemberSpecifiedDecorator that the ‘read’ method will set the Specified property to true when it reads. However, even if I add Specified properties for each field, I'd have to check each one (and update code whenever new properties were added).
My current plan is to create a custom SerializationContext.Context object, detect it during the deserialization process, and update a list of members. However... there are quite a few places in the code I'd need to touch to do that, and I'd rather use an existing mechanism if possible.
It would be much more desirable to get a list of updated member information. I realize that walking down an object graph may result in many members, but in my use case I'm not merging complex objects, just simple POCOs with value-type properties.
Getting a delta log isn't an inbuilt feature, partly because of the complexity when it comes to complex models, as you note. The Specified trick would work, although that isn't the purpose it was designed for; to avoid adding complexity to your own code, it would be best handled via reflection, perhaps using the Expression API for performance. Another approach might be to use a ProtoReader to know in advance which fields will be touched, but that demands an understanding of the field-number/member map (which can be queried via RuntimeTypeModel).
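As a rough sketch of that Specified-plus-reflection idea (Person and its members are illustrative, not part of the library):

using System;
using System.Collections.Generic;
using System.Linq;
using ProtoBuf;

[ProtoContract]
public class Person
{
    [ProtoMember(1)]
    public string Name { get; set; }
    public bool NameSpecified { get; set; }   // set to true when field 1 is read

    [ProtoMember(2)]
    public int Age { get; set; }
    public bool AgeSpecified { get; set; }
}

public static class DeltaHelper
{
    // After a merge, collect the names of members whose *Specified flag is set.
    public static IEnumerable<string> UpdatedMembers(object obj)
    {
        return obj.GetType().GetProperties()
            .Where(p => p.PropertyType == typeof(bool)
                     && p.Name.EndsWith("Specified")
                     && (bool)p.GetValue(obj, null))
            .Select(p => p.Name.Substring(0, p.Name.Length - "Specified".Length));
    }
}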
Are you using hand-crafted models? Or are you using protogen? Yet another option would be to have code in the setters that logs changes somewhere. I don't think protogen currently emits partial method hooks, but it possibly could.
But let me turn this around: it isn't a feature that is built in right now, and it would be somewhat limited by complexity anyway; but what would a "good" API for this look like to you?
As a side note: this isn't really a common feature in serializers - you'd have very similar challenges in any mainstream serializer that I can think of.

Create wrapper to hide implementation details for data structures

I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail inside it. However, there is one problem: the SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or required you to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.
It may, however, also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests, and responses. It's a lot of upfront effort, but if you ever have to change out Salesforce in a few years, you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are unlikely ever to change out Salesforce, is it really needed? That's for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface would include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. There you can convert the details specific to that particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back again if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx): that is, pass an ICrm as a parameter to its constructor (or to its methods, if you prefer). That way you keep your CrmService class cohesive and independent, and you can create and use different CRMs without needing to change most of your code.
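A rough sketch of that shape (SalesforceService and SalesforceException come from the question; the stub bodies, member names, and enum values here are assumptions):

using System;

public enum SalesforceOrderStatus { Open, Closed, Errored }   // stub
public class SalesforceException : Exception { }              // stub
public class SalesforceService                                // stub
{
    public SalesforceOrderStatus GetOrderStatus(string orderId)
    {
        return SalesforceOrderStatus.Open;
    }
}

// Provider-agnostic surface:
public enum OrderStatus { Pending, Completed, Failed }

public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public interface ICrm
{
    OrderStatus CheckOrder(string orderId);
}

// The adapter keeps all Salesforce types as implementation details.
public class CrmSalesForce : ICrm
{
    private readonly SalesforceService _service = new SalesforceService();

    public OrderStatus CheckOrder(string orderId)
    {
        try
        {
            SalesforceOrderStatus status = _service.GetOrderStatus(orderId);
            switch (status)
            {
                case SalesforceOrderStatus.Open:   return OrderStatus.Pending;
                case SalesforceOrderStatus.Closed: return OrderStatus.Completed;
                default:                           return OrderStatus.Failed;
            }
        }
        catch (SalesforceException ex)
        {
            // Translate so callers never see Salesforce types.
            throw new CrmException("CRM order lookup failed.", ex);
        }
    }
}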

Mapping Objects

What is the best solution for mapping a class object to a lightweight class object? For example: Customer to CustomerDTO, where both have the same property names. I am looking for the most optimized way to map between them; I know reflection slows me down badly, and writing methods for every mapping is time-consuming. So, any ideas?
Thanks in advance.
AutoMapper. There's also ValueInjecter and Emit Mapper.
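With AutoMapper's older static API, the whole mapping can be as small as this (a sketch; Customer and CustomerDTO stand in for your classes):

using AutoMapper;

public class Customer    { public int Id { get; set; } public string Name { get; set; } }
public class CustomerDTO { public int Id { get; set; } public string Name { get; set; } }

public static class MappingExample
{
    public static CustomerDTO ToDto(Customer customer)
    {
        Mapper.CreateMap<Customer, CustomerDTO>();   // configure once, e.g. at startup
        return Mapper.Map<CustomerDTO>(customer);    // properties matched by name
    }
}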
If reflection is slowing you down too much, try Fasterflect: http://www.codeproject.com/KB/library/fasterflect_.aspx
If you use the caching mechanism, it is not much slower than hand-written code.
I've been playing about with this, and have the following observations. Should Customer inherit from CustomerDTO, or read/write to a CustomerDTO? I've found that some DTO generators only generate dumb fixed-size array collections for vectors of data items within the DTO, while others will let you specify a List<> or some such collection. The high-level collection does not need to appear in the serialised DTO, but it affects which approach you take. If your solution adds high-level collections, you can inherit; if it doesn't, you probably want to read/write to an intermediate DTO.
I've used Protocol Buffers and XSDObjectGenerator for my DTO Generation (at different times!).
A new alternative is UltraMapper.
It is faster than anything I tried up to Feb 2017 (2x faster than AutoMapper in any scenario).
It is more reliable than AutoMapper (no stack overflows, no depth limits, no self-reference limits).
UltraMapper is just 1300 lines of code, instead of AutoMapper's 4500+, and it is easier to understand, maintain, and contribute to.
It is actively developed, but at this moment it needs community review.
Give it a try and leave feedback on the project page!

Repository pattern with lazying loading using POCO

I'm in the process of starting a new project and creating the business objects, data access, etc. I'm just using plain old CLR objects rather than any ORM. I've created two class libraries:
1) Business Objects - holds all my business objects; these objects are all lightweight, with only properties and business rules.
2) Repository - this is for all my data access.
The majority of my objects will have child lists in them, and my question is: what is the best way to lazy load these values, as I don't want to bring back unnecessary information if I don't need it?
I've thought about having the "get" on the child property check whether it is null and, if it is, call my repository to get the child information. From what I can see, this has two problems:
1) The object "knows" how to load itself; I would rather no data-access logic be held in the object.
2) This requires both class libraries to reference each other, which in Visual Studio produces a circular dependency error.
Does anyone have any suggestions on how to overcome this issue, or any recommendations on my project's layout and where it can be improved?
Thanks
To do this requires that you program to interfaces (abstractions over implementations) and/or declare your properties virtual. Then your repository returns a proxy object for those properties that are to be loaded lazily. The class that calls the repository is none the wiser, but when it tries to access one of those properties, the proxy calls the database and loads up the values.
Frankly, I think it is madness to try to implement this oneself. There are great, time-tested solutions to this problem out there, that have been developed and refined by the greatest minds in .NET.
To do the proxying, you can use Castle DynamicProxy, or you can use NHibernate and let it handle all of the proxying and lazy loading for you (it uses DynamicProxy). You'll get better performance than out of any hand-rolled implementations, guaranteed.
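To give a flavour of it, here is a rough DynamicProxy sketch (the entity and the loading logic are illustrative, not a complete lazy-loading framework):

using Castle.DynamicProxy;

public class Customer
{
    public virtual string Name { get; set; }   // must be virtual to be intercepted
}

public class LazyLoadInterceptor : IInterceptor
{
    private bool _loaded;

    public void Intercept(IInvocation invocation)
    {
        // On the first property access, pull the real values from the database.
        if (!_loaded && invocation.Method.Name.StartsWith("get_"))
        {
            // hypothetical: call the repository and populate the target here
            _loaded = true;
        }
        invocation.Proceed();   // continue to the intercepted member
    }
}

public static class ProxyExample
{
    public static Customer CreateLazyCustomer()
    {
        ProxyGenerator generator = new ProxyGenerator();
        return generator.CreateClassProxy<Customer>(new LazyLoadInterceptor());
    }
}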
NHibernate won't mess with your POCOs -- no attributes, no base classes; you only need to mark members virtual to allow proxy generation.
Simply put, I'd reconsider using an ORM, especially if you want that lazy loading; you don't have to give up your POCOs.
After looking into the answers provided and further research I found an article that uses delegates for the lazy loading. This provided a simpler solution than using proxies or implementing NHibernate.
Here's the link to the article.
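The gist of the delegate approach is something like this (a sketch with hypothetical names): the business object receives a loader delegate from the repository layer, so it holds no data-access logic and no reference to the repository assembly:

using System;
using System.Collections.Generic;

public class Order { }   // stub

public class Customer
{
    private List<Order> _orders;
    private readonly Func<List<Order>> _loadOrders;   // injected by the repository

    public Customer(Func<List<Order>> loadOrders)
    {
        _loadOrders = loadOrders;
    }

    public List<Order> Orders
    {
        get
        {
            if (_orders == null)
                _orders = _loadOrders();   // first access triggers the load
            return _orders;
        }
    }
}

// In the repository layer:
// Customer c = new Customer(() => orderRepository.GetOrdersFor(customerId));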
If you are using Entity Framework 4.0, you will have support for POCOs with deferred loading, and it will allow you to write a generic repository to do data access.
There are tons of articles online on the generic repository pattern with EF 4.0.
HTH.
You can get around the circular dependency issue if your lazy-loading code loads the repository at run time (Activator.CreateInstance or something similar) and then calls the appropriate method via reflection. Of course there are performance penalties associated with reflection, but they often turn out to be insignificant in most solutions.
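Sketched out, the late-bound idea looks something like this (all names are hypothetical); the business layer never references the repository assembly at compile time, so the circular dependency disappears:

using System;
using System.Collections.Generic;
using System.Reflection;

public class Order { }   // stub

public static class LateBoundLoader
{
    public static List<Order> LoadOrders(int customerId)
    {
        // Resolve the repository by assembly-qualified name at run time.
        Type repoType = Type.GetType("MyApp.Repository.OrderRepository, MyApp.Repository");
        object repo = Activator.CreateInstance(repoType);

        MethodInfo method = repoType.GetMethod("GetOrdersFor");
        return (List<Order>)method.Invoke(repo, new object[] { customerId });
    }
}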
Another way to solve this problem is to simply compile to a single dll - here you can still logically separate your layers using different namespaces, and still organise your classes by using different directories.

What are the 'big' advantages of having POCO with an ORM?

One advantage that comes to my mind is that if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, provided both support POCO.
Having an ORM with no POCO support, e.g. one where mappings are done with attributes like the DataObjects.Net ORM, is not an issue for me: even with POCO-supporting ORMs and their generated proxy entities, you have to be aware that entities are actually DAO objects bound to some context/session, e.g. serializing is a problem, etc.
POCO is all about loose coupling and testability.
So when you are doing POCO, you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP.
The third advantage I can see is that with POCO your Domain Model is more adaptable and flexible. You can add new features more easily than if it were coupled to the persistence.
I use POCO when I'm doing DDD, for example, but for some kinds of application you don't need DDD (if you're doing small data-based applications), so the concerns are not the same.
Hope this helps
None. Point. All the advantages people like throwing around are not important in the big picture. I rather prefer a strong base class for entity objects that actually holds a lot of integrated code (like raising property change events when properties change) than writing all that stuff myself. Note that I DID write a (at that time commercially available) ORM for .NET before "LINQ" or "ObjectSpaces" even existed. I've used O/R mappers for some 15 years now, and never found a case where POCO was really worth the possible trouble.
That said, attributes MAY be bad for other reasons. I rather prefer the Fluent NHibernate approach these days - having started my own (now retired) mapper with attributes, then moved to XML-based files.
The "POCO gets me nothing" theme mostly comes from the point that Entities ARE SIMPLY NOT NORMAL OBJECTS. They have a lot of additional functionality as well as limitations (like query speed etc.) that the user should please be aware of anyway. ORM's, despite LINQ, are not replacable anyway - noit if you start using their really interesting higher features. So, at the end you get POCO and still are suck with a base class and different semantics left and right.
I find that most proponents of POCO (as in: "must have", not "would be nice") normally have NOT thought their arguments through to the end. You get all kinds of pretty crappy thoughts, pretty much on the level of "stored procedures are faster than dynamic SQL" - stuff that simply does not hold true. Things like:
"I want to have them in cases where they do not need saving ot the database" (use a separate object pool, never commit),
"I may want to have my own functionality in a base class (the ORM should allos abstract entity classed without functionality, so put your OWN base class below the one of the ORM)
"I may want to replace the ORM with another one" (so never use any higher functionality, hope the ORM API is compatible and then you STILL may have to rewrite large parts).
In general, POCO people also overlook the huge amount of work it actually takes to get it RIGHT - with stuff like transactional object updates etc., there is a TON of code in the base class. Some of the .NET interfaces are horrific to implement on a POCO level, though a lot easier if you can tie into the ORM.
Take the post of Thomas Jaskula here:
POCO is all about loose coupling and testability.
That assumes you can test databinding without having it? Testability is mock-framework stuff, and there are REALLY powerful ones that can even "redirect" method calls.
So when you are doing POCO, you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted. You don't need to stub contexts/sessions to test your domain.
Actually not true. Persistence should be part of any domain model test, as the domain model is there to be persisted. You can always test non-persistent scenarios by just not committing the changes, but a lot of the tests will involve persistence and the failure of it (invoices with invalid/missing data are not valid to be written to disc, for example).
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer. So you are enforcing the SRP.
Actually, no. A proper domain model will never have persistence methods in the entities; that is a crap ORM to start with (user.Save()). OTOH the base class will do things like validation (IDataErrorInfo), handle property update events on persistent fields, and in general save you a ton of time.
As I said before, some of the functionality you SHOULD have is really hard to implement with plain variables as the data store - like the ability to put an entity into an update mode, make some changes, then roll them back. Not needed? Tell that to Microsoft, who use exactly that, where available, in their data grids (you can change some properties, then hit Escape to roll back the changes).
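That grid behaviour maps to System.ComponentModel.IEditableObject, which is the kind of plumbing such a base class can provide. A minimal sketch with a hypothetical entity:

using System.ComponentModel;

public class InvoiceLine : IEditableObject
{
    private decimal _amount;
    private decimal _backupAmount;
    private bool _editing;

    public decimal Amount
    {
        get { return _amount; }
        set { _amount = value; }
    }

    public void BeginEdit()          // grid calls this when the row enters edit mode
    {
        if (_editing) return;
        _backupAmount = _amount;     // snapshot the current state
        _editing = true;
    }

    public void CancelEdit()         // grid calls this on Escape
    {
        if (!_editing) return;
        _amount = _backupAmount;     // roll back
        _editing = false;
    }

    public void EndEdit()            // grid calls this when the edit is committed
    {
        _editing = false;
    }
}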
The third advantage I can see is that with POCO your Domain Model is more adaptable and flexible. You can add new features more easily than if it were coupled to the persistence.
Non-argument. You cannot play around adding fields to a persisted class without handling the persistence, and you can add non-persistent features (methods) to a non-POCO class just as easily as to a POCO class.
In general, my non-POCO base class did the following:
Handle property updates and IDataErrorInfo - without the user writing a line of code for fields and items the ORM could handle.
Handle object status information (New, Updated etc.). This is IMHO intrinsic information that also is pretty often pushed down to the user interface. Note that this is not a "save" method, but simply an EntityStatus property.
And it contained a number of overridable methods that the entity could use to extend the behavior WITHOUT implementing a (public) interface - so the methods were really private to the entity. It also had some more internal properties, like access to the "object manager" responsible for the entity, which was also the place to ask for other entities (submit queries), which was sometimes needed.
POCO support in an ORM is all about separation of concerns, following the Single Responsibility Principle. With POCO support, an ORM can talk directly to a domain model without the need to "muddy" the domain with data-access specific code. This ensures the domain model is designed to solve only domain-related problems and not data-access problems.
Aside from this, POCO support can make it easier to test the behaviour of objects in isolation, without the need for a database, mapping information, or even references to the ORM assemblies. The ability to have "stand-alone" objects can make development significantly easier, because the objects are simple to instantiate and easy to predict.
Additionally, because POCO objects are not tied to a data-source, you can treat them the same, regardless of whether they have been loaded from your primary database, an alternative database, a flat file, or any other process. Although this may not seem immediately beneficial, treating your objects the same regardless of source can make behaviour easy to predict and to work with.
I chose NHibernate for my most recent ORM because of the support for POCO objects, something it handles very well. It suits the Domain-Driven Design approach the project follows and has enabled great separation between the database and the domain.
Being able to switch ORM tools is not a real argument for POCO support. Although your classes may not have any direct dependencies on the ORM, their behaviour and shape will be restricted by the ORM tool and the database it is mapping to. Changing your ORM is as significant a change as changing your database provider. There will always be features in one ORM that are not available in another and your domain classes will reflect the availability or absence of features.
In NHibernate, you are required to mark all public or protected class members as virtual to enable support for lazy-loading. This restriction, though not significantly changing my domain layer, has had an impact on its design.
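In practice that restriction looks like this (a sketch; Invoice and Customer are illustrative): no attributes and no base class, but every public member is virtual so NHibernate can generate its lazy-loading proxies:

public class Customer
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}

public class Invoice
{
    public virtual int Id { get; protected set; }
    public virtual string Number { get; set; }
    public virtual Customer Customer { get; set; }   // loaded lazily via proxy
}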
