I'm currently implementing a LINQ provider for my own educational purposes. I recently managed to get the Count() extension to work, so far so good.
Now my question is not a cry for help, but just a request for some clarification.
There are two interfaces to be implemented in order to create the provider: IQueryProvider and something like IOrderedQueryable<>. MSDN makes clear how to implement them, but one point still confuses me.
Why are these interfaces implemented by separate classes, even though each IOrderedQueryable instance refers to its own IQueryProvider instance and both objects ultimately (indirectly) refer to the same data?
Do they really need to be separated?
Furthermore, I am able to combine them like this: class Source<RowContract> : IQueryProvider, IOrderedQueryable<RowContract> - in order to simplify access to type information. This implementation currently works properly and looks simpler and clearer than the separate-classes approach.
I am wondering if there is a flaw in my combined implementation. Or, maybe it's valid?
Any explanation would be appreciated greatly.
As mentioned on MSDN, IQueryProvider is focused on creating and executing the query, whereas IQueryable represents the thing being queried. Rolling them into one class may put related code together, but it ultimately doesn't respect separation of concerns.
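A minimal sketch of the usual two-class split (type names here are illustrative, not from MSDN's walkthrough; the actual expression translation is omitted):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The "thing being queried": it only holds an expression tree and a
// reference to the provider that knows how to run it.
public class Query<T> : IOrderedQueryable<T>
{
    public Query(IQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression;
    }

    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get; private set; }
    public IQueryProvider Provider { get; private set; }

    public IEnumerator<T> GetEnumerator()
    {
        return Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

// The thing that "creates and executes" queries.
public class MyProvider : IQueryProvider
{
    public IQueryable<T> CreateQuery<T>(Expression expression)
    {
        return new Query<T>(this, expression);
    }

    public IQueryable CreateQuery(Expression expression)
    {
        throw new NotImplementedException(); // non-generic path omitted from this sketch
    }

    public TResult Execute<TResult>(Expression expression)
    {
        // A real provider translates the expression tree (to SQL, etc.) here.
        throw new NotImplementedException();
    }

    public object Execute(Expression expression)
    {
        return Execute<object>(expression);
    }
}
```

Note the cross-reference you observed: each Query<T> holds its Provider, and the provider manufactures a new Query<T> for every chained operator. Nothing in the contracts forbids one class implementing both, which is why your combined version works; the split just keeps "what is being queried" apart from "how it gets executed".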
Examples, like SandwichBot, use Chain.From to return the IDialog<T> for SendAsync, like this:
internal static IDialog<SandwichOrder> MakeRootDialog()
{
    return Chain.From(() => FormDialog.FromForm(SandwichOrder.BuildForm));
}
I can see that Chain.From pushes and pops the IFormDialog<T> returned from FormDialog.FromForm, but I am not sure what the benefit of that is. However, the chatbot still works without Chain.From, as shown below:
internal static IDialog<SandwichOrder> MakeRootDialog()
{
    return FormDialog.FromForm(SandwichOrder.BuildForm);
}
Since the examples use Chain.From, it makes me think that it might be somehow required or recommended. What is the rationale for Chain.From, where would it be required, and what are the drawbacks to the simpler syntax without it?
In the SimpleSandwichBot, I believe Chain.From doesn't add anything; however, I suspect it was used to allow a seamless transition to the AnnotatedSandwichBot, where Chain is used a bit more.
Personally, I don't use Chain much unless I need to put together something really simple and don't want to create a dialog, as Chain code can easily become complex to read and follow.
With Chain you can manage the stack of dialogs implicitly. However, explicit management of the dialog stack (using Call/Done) seems better suited to composing larger conversations. Creating new dialogs is more verbose (especially in C#), but I believe it lets you organize the solution and the code better.
I don't think there is a place where Chain is required, as it doesn't provide anything unique, just a fluent interface that is usable in LINQ query syntax.
The drawbacks I see are mainly around the complexity of the resulting code if you are trying to create something big. If I remember correctly, there is also a chance of hitting a serialization issue depending on how you use it.
From the docs:
The Chain methods provide a fluent interface to dialogs that is usable in LINQ query syntax. The compiled form of LINQ query syntax often leverages anonymous methods. If these anonymous methods do not reference the environment of local variables, then these anonymous methods have no state and are trivially serializable. However, if the anonymous method captures any local variable in the environment, the resulting closure object (generated by the compiler) is not marked as serializable. The Bot Builder will detect this situation and throw a ClosureCaptureException to help diagnose the issue.
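The capture problem the docs describe can be reproduced without Bot Builder at all; this is a minimal illustration of the compiler-generated closure (ClosureCaptureException itself is Bot Builder's, not shown here):

```csharp
using System;

class ClosureDemo
{
    static void Main()
    {
        int count = 5;

        // Capturing the local 'count' makes the compiler generate a hidden
        // closure class with 'count' as a field. That generated class is
        // not marked [Serializable], which is exactly the situation the
        // Bot Builder detects when it tries to save the dialog stack.
        Func<int> capturing = () => count;

        Console.WriteLine(capturing.Target.GetType().IsSerializable); // False
    }
}
```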
There is this generic repository implementation
http://www.itworld.com/development/409087/generic-repository-net-entity-framework-6-async-operations
By the looks of it, it seems that I can just have a single generic repository for my whole project, and for almost all of the entities in the database it will work fine. For the ones where it doesn't, I can create a more specific repository, e.g. a MembershipRepository that derives from the base repository and overrides methods as needed, such as Find.
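The article's single generic repository boils down to roughly this (a sketch against EF6's System.Data.Entity; the entity and method names are illustrative, not the article's exact code):

```csharp
using System;
using System.Data.Entity;               // EF6
using System.Linq;
using System.Linq.Expressions;

public class GenericRepository<TEntity> where TEntity : class
{
    protected readonly DbContext Context;

    public GenericRepository(DbContext context)
    {
        Context = context;
    }

    public virtual TEntity Find(params object[] keys)
    {
        return Context.Set<TEntity>().Find(keys);
    }

    public virtual IQueryable<TEntity> Where(Expression<Func<TEntity, bool>> predicate)
    {
        return Context.Set<TEntity>().Where(predicate);
    }

    public virtual void Add(TEntity entity)
    {
        Context.Set<TEntity>().Add(entity);
    }
}

// An entity with special needs gets a derived repository that
// overrides only what it must:
public class Membership { public int Id { get; set; } }

public class MembershipRepository : GenericRepository<Membership>
{
    public MembershipRepository(DbContext context) : base(context) { }

    public override Membership Find(params object[] keys)
    {
        // custom lookup logic would go here
        return base.Find(keys);
    }
}
```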
Now one could also write a generic service class in a similar way, and then create only a few more specific services.
That will drastically reduce the project size. No need to write redundant repositories per entity, and a much smaller number of service layer classes.
Surely it can't be that simple. Is there a catch to this? Let's ignore for a moment that EntityFramework has the repository+UOW pattern built in and repository pattern isn't needed.
We do.
I am torn about it, honestly. For smaller domains it's perfectly fine and works a treat. For larger ones (like the one I am working with currently), your repository can never really be generic enough to warrant a single one.
For example, the generic repository in the code base I currently work with is now littered with all sorts of very specific methods for things like eager fetching, paging, etc. It's much more than what it started out as. Looking back at the revision history, it once had only GetAll, GetById, Create and Update methods. Now it has things like GetAllEagerFetch with overloads for various JOIN types, GetAllPaged, GetAllPagedEagerFetch, DeleteById, ExecuteStoredProcedure, ExecuteSql (yuck), etc. There is a lot more.
One way around this is to perhaps follow the Interface Segregation Principle so that your repository can be huge and generic but consumers only care about what they need to care about. I don't particularly like that though.
That being said, we have moved away from a Repository-style setup in more recent projects. We prefer a CQRS setup now, with Command and Query objects that each have a specific purpose. This leans more towards the Single Responsibility Principle (it doesn't follow it to the "Uncle Bob" degree, but the classes have well-defined responsibilities).
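To illustrate, a query object in that style is a small single-purpose class rather than another method on a generic repository (the names and the Account entity here are invented for the example):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Account
{
    public DateTime DueDate { get; set; }
}

// One query, one class, one well-defined responsibility.
public class GetOverdueAccountsQuery
{
    private readonly IQueryable<Account> _accounts; // e.g. context.Set<Account>()

    public GetOverdueAccountsQuery(IQueryable<Account> accounts)
    {
        _accounts = accounts;
    }

    public IReadOnlyList<Account> Execute(DateTime asOf)
    {
        return _accounts.Where(a => a.DueDate < asOf).ToList();
    }
}
```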
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail in the CrmService, however, there is one problem. The SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or you were required to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as
little as possible about the structure or properties of anything else
(including its subcomponents), in accordance with the principle of
"information hiding".
The advantage of following the Law of Demeter is that the resulting
software tends to be more maintainable and adaptable. Since objects
are less dependent on the internal structure of other objects, object
containers can be changed without reworking their callers.
Although it may also result in having to write many wrapper
methods to propagate calls to components; in some cases, this can
add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It's a lot of upfront effort, but if you ever have to swap out Salesforce in a few years, you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are never likely to swap out Salesforce, is it really needed? That's for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface would include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use dependency injection (http://msdn.microsoft.com/en-us/library/ff921152.aspx): pass an ICrm as a parameter to its constructor (or to its methods, if you prefer). That way you keep your CrmService class cohesive and independent, and you can create and use different CRMs without changing most of your code.
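Putting those pieces together, the shape might look like this. Everything named Salesforce* below is a hypothetical stand-in for the real SDK types, and the enum values are invented for illustration:

```csharp
using System;

// --- Hypothetical stand-ins for the real Salesforce SDK types ---
public enum SalesforceOrderStatus { Draft, Shipped, Cancelled }
public class SalesforceException : Exception { }
public class SalesforceService
{
    public SalesforceOrderStatus GetOrderStatus(string id)
    {
        return SalesforceOrderStatus.Shipped; // pretend remote call
    }
}

// --- Product-agnostic types the rest of the app sees ---
public enum CrmOrderStatus { Pending, Shipped, Cancelled }

public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public interface ICrm
{
    CrmOrderStatus CheckOrder(string orderId);
}

// Adapter: everything Salesforce-specific stays inside this class.
public class CrmSalesForce : ICrm
{
    private readonly SalesforceService _inner = new SalesforceService();

    public CrmOrderStatus CheckOrder(string orderId)
    {
        try
        {
            return MapStatus(_inner.GetOrderStatus(orderId));
        }
        catch (SalesforceException ex)
        {
            // Wrap the provider exception; don't leak it to callers.
            throw new CrmException("CRM order lookup failed.", ex);
        }
    }

    private static CrmOrderStatus MapStatus(SalesforceOrderStatus s)
    {
        switch (s)
        {
            case SalesforceOrderStatus.Draft:   return CrmOrderStatus.Pending;
            case SalesforceOrderStatus.Shipped: return CrmOrderStatus.Shipped;
            default:                            return CrmOrderStatus.Cancelled;
        }
    }
}

public class CrmService
{
    private readonly ICrm _crm;
    public CrmService(ICrm crm) { _crm = crm; } // constructor injection

    public CrmOrderStatus CheckOrder(string orderId)
    {
        return _crm.CheckOrder(orderId);
    }
}
```

Swapping providers is then a one-line change at the composition root: new CrmService(new CrmSalesForce()) versus new CrmService(new CrmSomethingElse()).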
I had seen somewhere the builder pattern used to create DML SQL statements. I was wondering, which pattern(s) would be (more) appropriate for building SQL DDL statements.
I am thinking about a simple program (a DB tool, solely for self-education purposes) that would dynamically create simple SQL DDL statements. I am not sure which patterns I should consider.
The factory pattern allows me to decouple client code from the concrete database provider classes; I suppose it's a clear choice here (please correct me if I am wrong). The decorator pattern was my first choice for building SQL statements, but after coding some examples and then reading this answer, I am almost sure I shouldn't be using decorator, as I am building objects, not decorating already-created objects.
So.. which patterns should I consider and why are they good/better in this case?
Updated for clarification.
The "core" of your DB tool needs to represent the knowledge it possesses in a DB-independent form; when the time comes, database-specific code takes over and translates that knowledge into DB-specific DDL.
You'll probably need some sort of dependency injection to accomplish that. The basic idea is this: the core of your application works with interfaces only and never knows anything beyond what is declared in those interfaces. At run time, DB-specific objects implementing these interfaces are instantiated and "injected" into the core. The core then blindly calls them as it would any other set of objects implementing the same interfaces.
If another DB needs to be supported, just make another set of classes that implement these interfaces and instantiate them at run time.
Of course this is all just a theory. You'll find that there are many nuances between different database systems and I suspect it will be hard for you to represent all that in a completely generalized way...
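As a concrete sketch of that idea, the core could drive a builder through a dialect interface, with one dialect implementation per database (all names below are invented, and real dialects have far more to say about types, constraints, quoting, etc.):

```csharp
using System;
using System.Collections.Generic;

public enum ColumnType { Int, Text }

// All DB-specific knowledge lives behind this interface.
public interface ISqlDialect
{
    string TypeName(ColumnType type);
    string QuoteIdentifier(string name);
}

public class SqlServerDialect : ISqlDialect
{
    public string TypeName(ColumnType t)
    {
        return t == ColumnType.Int ? "INT" : "NVARCHAR(MAX)";
    }

    public string QuoteIdentifier(string name)
    {
        return "[" + name + "]";
    }
}

// DB-independent builder: the "core" only ever sees ISqlDialect.
public class CreateTableBuilder
{
    private readonly ISqlDialect _dialect;
    private readonly string _table;
    private readonly List<string> _columns = new List<string>();

    public CreateTableBuilder(ISqlDialect dialect, string table)
    {
        _dialect = dialect;
        _table = table;
    }

    public CreateTableBuilder Column(string name, ColumnType type)
    {
        _columns.Add(_dialect.QuoteIdentifier(name) + " " + _dialect.TypeName(type));
        return this; // fluent, builder-style chaining
    }

    public string Build()
    {
        return "CREATE TABLE " + _dialect.QuoteIdentifier(_table) +
               " (" + string.Join(", ", _columns) + ")";
    }
}
```

Supporting another database then means writing only a new ISqlDialect implementation; the builder and the core stay untouched. A factory can pick the dialect at run time, which is where your factory-pattern instinct fits in.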
Platform: C# 2.0
Using: Castle.DynamicProxy2
I have been struggling for about a week now trying to find a good strategy to rewrite my DAL. I tried NHibernate and, unfortunately, it was not a good fit for my project. So, I have come up with this interaction thus far:
I first start with registering my DTO's and my data mappers:
MetaDataMapper.RegisterTable(typeof(User));
MapperLocator.RegisterMapper(typeof(User), typeof(UserMapper));
This maps each DTO as it is registered, using custom attributes on the properties of the DTO, essentially:
[Column(Name = "UserName")]
I then have a mapper that belongs to each DTO, so for this type it would be UserMapper. This data mapper handles calling my ADO.Net wrapper and then mapping the result to the DTO. However, I am in the process of enabling deep loading (and subsequently lazy loading), and this is where I am stuck. Basically, my User DTO may have an Address object (FK) which requires another mapper to populate it, but I have to determine at run time that the AddressMapper is the one to use.
My problem is handling the types without having to explicitly go through a list of them (not to mention the headache of always keeping that list updated) each time I need to determine which mapper to return. My solution was a MapperLocator class that I register with (as above) and that returns an IDataMapper interface which all of my data mappers implement. I can then cast it to UserMapper when I am dealing with User objects.

This, however, is not so easy when I am trying to determine the type of data mapper to return at run time. Since generics have to be known at compile time, using AOP and then passing in the type at run time is not an option without reflection. I am already doing a fair bit of reflection when mapping the DTOs to the table, reading attributes and such, and my MapperFactory uses reflection to instantiate the proper data mapper. So I am trying to do this without reflection to keep those expensive calls down as much as possible.
I thought the solution could be found in passing around an interface, but I have yet to be able to implement that idea. So then I thought the solution would possibly be in using delegates, but I have no idea how to implement that idea either. So...frankly...I am lost, can you help, please?
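For what it's worth, the registration scheme described above can confine reflection to startup by caching mapper instances in a dictionary keyed by DTO type. This is a rough sketch of the idea in C# 2.0-compatible code, not your actual API (User and UserMapper are placeholders):

```csharp
using System;
using System.Collections.Generic;

public interface IDataMapper { }

public class User { }
public class UserMapper : IDataMapper { }

public static class MapperLocator
{
    private static readonly Dictionary<Type, IDataMapper> Mappers =
        new Dictionary<Type, IDataMapper>();

    // Reflection happens once, at registration time...
    public static void RegisterMapper(Type dtoType, Type mapperType)
    {
        Mappers[dtoType] = (IDataMapper)Activator.CreateInstance(mapperType);
    }

    // ...and every later lookup is a plain dictionary hit, no reflection.
    public static IDataMapper GetMapper(Type dtoType)
    {
        return Mappers[dtoType];
    }
}
```

The Address case then becomes MapperLocator.GetMapper(typeof(Address)) at run time, with the cast to the concrete mapper only where the caller genuinely needs it.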
I will suggest a couple of things.
1) Don't prematurely optimize. If you need to use reflection to instantiate your *Mappers, do it with reflection. Look at the headache you're causing yourself by not doing it that way. If you have problems later, then profile to see if there are faster ways of doing it.
2) My question to you would be: why are you trying to implement your own DAL framework? You say that NHibernate isn't a good fit, but you don't elaborate on that. Have you tried any of the dozens of other ORMs? What are your criteria? Your posted code looks remarkably like Linq2Sql's mapping.
Lightspeed and SubSonic are both great lightweight ORM packages. Linq2Sql is an easy-to-use mapper, and of course there's Microsoft's Entity Framework, which has a whole team at Microsoft dealing with the problems you're describing.
You might save yourself a lot of time, especially maintenance-wise, by looking at these rather than implementing your own. I would highly recommend any of those I mentioned.