I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough: I can just create a CrmService class and use the SalesforceService class as an implementation detail inside it. However, there is one problem: the SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or required you to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
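In code, what I'm doing now looks roughly like this (a sketch; every Salesforce-prefixed name below is a stand-in for the real provider types, not the actual Salesforce API):

using System;

// Stand-ins for the provider's types (the real names will differ):
public enum SalesforceLeadStatus { Open, Working, Qualified }
public class SalesforceException : Exception { }
public class SalesforceService
{
    public SalesforceLeadStatus GetLeadStatus(string leadId)
    {
        return SalesforceLeadStatus.Open; // real call to Salesforce elided
    }
}

// The provider-agnostic surface:
public enum LeadStatus { Open, Contacted, Qualified }
public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public class CrmService
{
    private readonly SalesforceService _salesforce = new SalesforceService();

    public LeadStatus GetLeadStatus(string leadId)
    {
        try
        {
            // Map the provider enum to the provider-agnostic one.
            return MapStatus(_salesforce.GetLeadStatus(leadId));
        }
        catch (SalesforceException ex)
        {
            // Keep the original exception as InnerException for diagnostics.
            throw new CrmException("CRM provider failed to fetch lead status.", ex);
        }
    }

    private static LeadStatus MapStatus(SalesforceLeadStatus status)
    {
        switch (status)
        {
            case SalesforceLeadStatus.Open: return LeadStatus.Open;
            case SalesforceLeadStatus.Working: return LeadStatus.Contacted;
            case SalesforceLeadStatus.Qualified: return LeadStatus.Qualified;
            default: throw new ArgumentOutOfRangeException("status");
        }
    }
}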
The short answer is that you are on the right track; have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.
Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It is a lot of upfront effort, but if you ever have to change out Salesforce in a few years, you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are unlikely ever to change out Salesforce, is it really needed? ... that is for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface will include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create and implement your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back again if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx); that is, pass an ICrm as a parameter to its constructor (or to its methods if you prefer). That way you keep your CrmService class quite cohesive and independent, so you can create and use different CRMs without the need to change most of your code.
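A minimal sketch of that shape (MakePayment, GetPayments, CheckOrder and OrderStatus come from this answer; everything else is made up):

using System;
using System.Collections.Generic;

public enum OrderStatus { Pending, Completed, Cancelled }

public interface ICrm
{
    void MakePayment(decimal amount);
    IList<decimal> GetPayments();
    OrderStatus CheckOrder(string orderId);
}

public class CrmSalesForce : ICrm
{
    public void MakePayment(decimal amount) { /* call the Salesforce API here */ }

    public IList<decimal> GetPayments() { return new List<decimal>(); }

    public OrderStatus CheckOrder(string orderId)
    {
        // Convert the provider's string status into the shared enum.
        string providerStatus = "Pending"; // would come from Salesforce
        return (OrderStatus)Enum.Parse(typeof(OrderStatus), providerStatus);
    }
}

public class CrmService
{
    private readonly ICrm _crm;

    // The concrete CRM is injected; CrmService never references Salesforce types.
    public CrmService(ICrm crm) { _crm = crm; }

    public OrderStatus CheckOrder(string orderId) { return _crm.CheckOrder(orderId); }
}

Wiring it up is then just new CrmService(new CrmSalesForce()), and supporting a second CRM means adding one new ICrm implementation.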
Related
In my self-directed efforts to bring my programming skills and habits into the 21st Century (migrating from Pascal & Fortran to C# and C++), I've been studying a fair amount of available source code. So far as I have been able to determine, Classes are unique 'standalone' entities (much like their Function ancestors).
I have, however, run across numerous instances where one or more Classes are nested within another Class. My 'gut instinct' in this regard is that doing so is simply due to extremely poor methodology - however, I'm not yet familiar enough with modern OOP methodology to truly make such a determination.
Hence, the following overlapping questions:
Is there legitimate reasoning for nesting one Class inside another? And, if so, what is the rationale in doing so as opposed to each Class being wholly independent?
(Note: The examples I've seen have been using C#, but it seems that this aspect applies equally to C++.)
Hmm, this might call for a very subjective answer, and therefore many will disagree with me. I am one of those people who believe that there is no real need for nested classes, and I tend to agree with your statement:
My 'gut instinct' in this regard is that doing so is simply due to extremely poor methodology
Cases where people feel the need to design nested classes are those where the functionality is tightly coupled to the behaviour designed in the outer class. E.g. event handling can be designed in an inner class, or threading behaviour can find its way into inner classes.
I would rather refactor specific behaviour out of the 'outer' class, so that I end up with two smaller classes that both have clear responsibilities.
To me, the main drawback of designing inner classes is that they tend to clutter functionality, which makes it hard(er) to work with principles such as TDD (test-driven development).
If you are not relying on test-driven principles, I think it will not harm you a lot. It is (like so many things) a matter of taste, not a matter of right or wrong. I have learned that the topic can lead to long and exhausting discussions, quite similar to whether you should use static helper classes that tend to do more than just 'be a helper' and accumulate more and more state over time.
The discussions can become a lot more concrete if you ever run into a real life example. Until then it will be mostly people's "gut feeling".
Common uses for nested classes in C# include classes that are for internal use by your class that you don't want exposed in your module's internal namespace. For example, you may need to throw an exception that you don't expose to the outside world. In that case you would want to make it a nested class so that others can't use it.
Another example is a binary tree's Node class. Not only would it be something you wouldn't want exposed outside of your class, but it might need access to private members for internal operations.
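A rough sketch of that idea (BinaryTree and Node are hypothetical names; the point is that Node is private and the tree's methods can touch it directly):

using System;

public class BinaryTree<T> where T : IComparable<T>
{
    private Node _root;

    // Node is an implementation detail: private, invisible outside the tree,
    // and free to be manipulated directly by the tree's own methods.
    private class Node
    {
        public T Value;
        public Node Left;
        public Node Right;
        public Node(T value) { Value = value; }
    }

    public void Add(T value) { _root = Add(_root, value); }

    private static Node Add(Node node, T value)
    {
        if (node == null) return new Node(value);
        if (value.CompareTo(node.Value) < 0) node.Left = Add(node.Left, value);
        else node.Right = Add(node.Right, value);
        return node;
    }
}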
One area where you encounter nested classes quite frequently (although perhaps without noticing) is enumerators (i.e. classes implementing IEnumerator<T>).
In order for a collection to support multiple simultaneous enumerations (e.g. on multiple threads, or even just nested foreach loops over the same collection), the enumeration logic needs to be separated from the collection into another class.
However, enumerators often need specific knowledge of the implementation details of a collection in order to work correctly. If you were to do this with an entirely separate (non-nested) class, you'd have to make those internals more accessible than they really should be, thus breaking encapsulation. (One could perhaps argue that it could be done using internal members, but I think this still breaks encapsulation to some extent, even if it is "only" exposing implementation details to the members of the same assembly).
By using a private nested class for the enumerator, these problems go away whilst maintaining proper encapsulation, since the nested class can access the internals of its containing class.
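As a sketch of this, assuming a made-up FixedList<T> collection, the private nested enumerator reads the collection's private array directly:

using System.Collections;
using System.Collections.Generic;

public class FixedList<T> : IEnumerable<T>
{
    private readonly T[] _items;
    private readonly int _count;

    public FixedList(T[] items) { _items = items; _count = items.Length; }

    public IEnumerator<T> GetEnumerator() { return new Enumerator(this); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

    private class Enumerator : IEnumerator<T>
    {
        private readonly FixedList<T> _list;
        private int _index = -1;

        public Enumerator(FixedList<T> list) { _list = list; }

        // Direct access to _list._items is legal because Enumerator is nested.
        public T Current { get { return _list._items[_index]; } }
        object IEnumerator.Current { get { return Current; } }
        public bool MoveNext() { return ++_index < _list._count; }
        public void Reset() { _index = -1; }
        public void Dispose() { }
    }
}

Each foreach loop gets its own Enumerator instance, which is what makes simultaneous enumerations safe.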
There are pros and cons to having nested classes. The pros center around deliberate uses such as the IEnumerator<T> case Iridium mentioned, and nesting classes to facilitate the Command pattern in WPF/MVVM. SQL Create, Read, Update and Delete (CRUD) operations are popular candidates to encapsulate in classes/commands in code using these patterns for line-of-business / data-driven apps.
The negatives associated with nesting classes come up in debugging: the more classes you nest, the greater the complexity, the easier it is to introduce bugs, and the harder it is to debug it all. Even using something like the Command pattern, as organizing as it is, can bloat your class with too many classes/commands.
Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now if the constructor of that class is public and isn't too complex or dependent on complex types, it would have the same effect to just make the method in question virtual and inherit.
Is this preferable over extracting an interface? If so, why?
Edit:
using System;
using System.Collections.Generic;
using System.IO;

class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // is slow even when using a MemoryStream
        throw new NotImplementedException(); // actual parsing elided
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
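For concreteness, the two options look roughly like this (a sketch; StubParser is a hypothetical test double and the parse bodies are placeholders):

using System.Collections.Generic;
using System.IO;

// Option 1: extract an interface and hand tests a stub implementation.
interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

class Parser : IParser
{
    public virtual IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // the slow, real parse would live here
        return new Dictionary<string, int>();
    }
}

// Option 2: since the method is virtual, a test double can simply inherit.
class StubParser : Parser
{
    public override IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        return new Dictionary<string, int> { { "stub", 1 } }; // canned result
    }
}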
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit and overwrite those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask and not the method directly. This will provide a more robust test suite as well. You need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you: basically a promise that your implementation will provide access to a specified set of members (methods, properties, etc.), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class, on the other hand, in addition to a contract, specifies at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still enables you to call the base implementation, and still provide your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, and multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract, or whether you want to extract some behaviour as well, and the answer should be obvious for a specific case.
As for the IParser/Parser pairs: first, they are great for unit testing and for dependency injection, and second, nobody charges you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things, and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) this method in their implementations, but that you actually expect them to (and do you?).
Such a design comes with rather heavy consequences, so unless you explicitly need it, you shouldn't use it. Try sticking to programming to an interface instead.
One of the good object-oriented design principles states that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (not only should your classes implement interfaces/abstract classes, they should also consist of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is created. For now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data-stream-mining parser and what-not-else. Do it properly now, and save yourself time and trouble later.
I need to work on an application that consists of two major parts:
The business logic part with specific business classes (e.g. Book, Library, Author, ...)
A generic part that can show Books, Libraries, ... in data grids, map them to a database, and so on.
The generic part uses reflection to get the data out of the business classes without the need to write specific data-grid or database logic in the business classes. This works fine and allows us to add new business classes (e.g. LibraryMember) without the need to adjust the data grid and database logic.
However, over the years, code was added to the business classes that also makes use of reflection to get things done. E.g. if the Author of a Book is changed, observers are called to tell the Author itself that it should add this book to its collection of books written by him (Author.Books). In these observers, not only are the instances passed, but also information that is derived directly from reflection (the FieldInfo is added to the observer call so that the caller knows that the field "Author" of the book has changed).
I can clearly see advantages in using reflection in these generic modules (like the data grid or database interface), but it seems to me that using reflection in the business classes is a bad idea. After all, shouldn't the application work without relying on reflection as much as possible? Or is the use of reflection the 'normal way of working' in the 21st century?
Is it good practice to use reflection in your business logic?
EDIT: Some clarification on the remark of Kirk:
Imagine that Author implements an observer on Book.
Book calls all its observers whenever some field of Book changes (like Title, Year, #Pages, Author, ...). The 'FieldInfo' of the changed field is passed in the observer.
The Author-observer then uses this FieldInfo to decide whether it is interested in this change. In this case, if FieldInfo is for the field Author of Book, the Author-Observer will update its own vector of Books.
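A rough sketch of the wiring described here, with all names illustrative rather than taken from the real codebase:

using System.Collections.Generic;
using System.Reflection;

public class Author
{
    private readonly List<Book> _books = new List<Book>();

    // The observer callback: inspects the FieldInfo to decide whether it cares.
    public void OnFieldChanged(Book book, FieldInfo field)
    {
        if (field.Name == "Author" && book.Author == this)
            _books.Add(book);
    }
}

public class Book
{
    public Author Author; // a public field, so FieldInfo describes it

    private readonly List<Author> _observers = new List<Author>();

    public void Subscribe(Author observer) { _observers.Add(observer); }

    public void SetAuthor(Author author)
    {
        Author = author;
        // Look up the changed field via reflection and notify observers.
        FieldInfo field = typeof(Book).GetField("Author");
        foreach (Author observer in _observers)
            observer.OnFieldChanged(this, field);
    }
}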
The main danger with Reflection is that the flexibility can escalate into disorganized, unmaintainable code, particularly if changes are made by more junior devs who may not fully understand the Reflection code, or who are so enamored of it that they use it to solve every problem, even when simpler tools would suffice.
My observation has been that over-generalization leads to over-complication. It gets worse when the actual boundary cases turn out to not be accommodated by the generalized design, requiring hacks to fit in the new features on schedule, transmuting flexibility into complexity.
I avoid using reflection. Yes, it makes your program more flexible. But this flexibility comes at a high price: There is no compile-time checking of field names or types or whatever information you're collecting through reflection.
Like many things, it depends on what you're doing. If the nature of your logic is that you NEVER compare the field names (or whatever) found to a constant value, then using reflection is probably a good thing. But if you use reflection to find field names, and then loop through them searching for the fields named "Author" and "Title", you've just created a more-complex simulation of an object with two named fields. And what if you search for "Author" when the field is actually called "AuthorName", or you intend to search for "Author" and accidentally type "Auhtor"? Now you have errors that won't show up until runtime instead of being flagged at compile time.
With hard-coded field names, your IDE can tell you every place that a certain field is used. With reflection ... not so easy to tell. Maybe you can do a text search on the name, but if field names are passed around as variables, it can get very difficult.
I'm working on a system now where the original authors loved reflection and similar techniques. There are all sorts of places where they need to create an instance of a class and instead of just saying "new" and the class, they create a token that they look up in a table to get the class name. What does this gain? Yes, we could change the table to map that token to a different name. And this gains us ... what? When was the last time that you said, "Oh, every place that my program creates an instance of Customer, I want to change to create an instance of NewKindOfCustomer." If you have changes to a class, you change the class, not create a new class but keep the old one around for nostalgia.
To take a similar issue, I make a regular practice of building data entry screens on the fly by asking the database for a list of field names, types, and sizes, and then laying it out from there. This gives me the advantage of using the same program for all the simpler data entry screens -- just pass in the table name as a parameter -- and if a field is added or deleted, zero code change is required. But this only works as long as I don't care what the fields are. Once I start having validations or side effects specific to this screen, the system is more trouble than it's worth, and I'm better off to fall back to more explicit coding.
Based on your edit, it sounds like you are using reflection purely as a mechanism for identifying fields. This is as opposed to dynamic behavior such as looking up the fields, which should be avoided when possible (since such lookups usually use strings which ruin static type safety). Using FieldInfo to provide an identifier for a field is fairly harmless, though it does expose some internals (the info class) in a way that is not entirely ideal.
I tend not to use reflection where I can help it. By using interfaces and coding against them I can do a lot of things that some would use reflection for.
But I'm a big fan of "if it works, it works".
Also, by using reflection you probably have something that can adapt fairly easily.
I.e. the only objection most would have is fairly religious... and if your performance is fine and the code is maintainable and clear... who cares?
Edit: based on your edit, I would indeed use interfaces to achieve what you want, unless I misunderstand you.
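For instance, a strongly typed observer interface could replace the FieldInfo-based notification entirely (a sketch; IBookObserver and the member names are invented):

using System.Collections.Generic;

public interface IBookObserver
{
    void AuthorChanged(Book book, Author oldAuthor, Author newAuthor);
}

public class Book
{
    private readonly List<IBookObserver> _observers = new List<IBookObserver>();
    private Author _author;

    public void Subscribe(IBookObserver observer) { _observers.Add(observer); }

    public void SetAuthor(Author newAuthor)
    {
        Author old = _author;
        _author = newAuthor;
        // Compile-time checked: no field names or FieldInfo involved.
        foreach (IBookObserver observer in _observers)
            observer.AuthorChanged(this, old, newAuthor);
    }
}

public class Author : IBookObserver
{
    private readonly List<Book> _books = new List<Book>();

    public void AuthorChanged(Book book, Author oldAuthor, Author newAuthor)
    {
        if (newAuthor == this)
            _books.Add(book);
    }
}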
I think it is a good idea to stay away from Reflection when possible, but don't be afraid to resort to it when it provides a better or more flexible solution to your problem. The performance hit for anything but tight-loop operations is likely to be minimal in the overall scheme of an application or Web Form request.
Just a good article to share about reflection -
http://www.simple-talk.com/dotnet/.net-framework/a-defense-of-reflection-in-.net/
I tend to use interfaces in my business layer and leave the reflection to my presentation layer. This is not an absolute but rather a guideline.
What is the best approach for defining interfaces in either C# or Java? Should we make them general up front, or add methods as and when the real need arises?
Once an interface is defined, it is intended not to be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
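A sketch of what that looks like (IParser and IParserV2 are illustrative names):

// The shipped interface stays frozen; existing implementations keep compiling.
public interface IParser
{
    string Parse(string input);
}

// Added in a later release; new code opts in by implementing the V2 interface.
public interface IParserV2 : IParser
{
    string Parse(string input, ParseOptions options);
}

public class ParseOptions { }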
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List<T>.Sort consumes the IComparer<T> interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface, it is very difficult to change or remove it. Be very conservative in your creation of your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points, then you probably didn't need interfaces to begin with, but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design, because changes to the interface are going to be difficult if a lot of code would suddenly break in the next iteration.
Changes to the interface need to be made with care; therefore, it would be ideal if changes didn't have to be made after the initial release. This means that the first iteration will be very important in terms of the design.
However, if changes are required, one way to implement them would be to deprecate the old methods and provide a transition path for old code to use the newly-designed features. This does mean that the deprecated methods will still stick around to prevent the code using the old methods from breaking. This is not ideal, so it is a "price to pay" for not getting it right the first time around.
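In C#, that transition path might look like this (a sketch with invented names; the [Obsolete] attribute makes the compiler warn callers of the old method):

using System;

public class ReportService
{
    // The old entry point stays, marked obsolete, and forwards to the new one.
    [Obsolete("Use GenerateReport(ReportOptions) instead.")]
    public void GenerateReport(string format)
    {
        GenerateReport(new ReportOptions { Format = format });
    }

    public void GenerateReport(ReportOptions options)
    {
        // the redesigned implementation lives here
    }
}

public class ReportOptions
{
    public string Format;
}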
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods to an interface later immediately breaks all implementations of the interface that didn't accidentally implement those methods already. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface: the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so that they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code, after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that another team or customer will work with (think plugins, extension points or things like that), then you have to be conservative with what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff, though many of them still apply. Know what applies or not to your situation.
No rule will make up for lack of common sense.
I have been reading that creating dependencies by using static classes/singletons in code is bad form and creates problems, e.g. tight coupling and difficulty with unit testing.
I have a situation where I have a group of url parsing methods that have no state associated with them, and perform operations using only the input arguments of the method. I am sure you are familiar with this kind of method.
In the past I would have proceeded to create a class, add these methods, and call them directly from my code, e.g.:
UrlParser.ParseUrl(url);
But wait a minute, that is introducing a dependency on another class. I am unsure whether these 'utility' classes are bad, as they are stateless, and this minimises some of the problems with said static classes and singletons. Could someone clarify this?
Should I be moving the methods to the calling class, that is, if only the calling class will be using the method? This may violate the 'Single Responsibility Principle'.
From a theoretical design standpoint, I feel that utility classes are something to be avoided when possible. They are basically no different from static classes (although slightly nicer, since they have no state).
From a practical standpoint, however, I do create these, and encourage their use when appropriate. Trying to avoid utility classes is often cumbersome, and leads to less maintainable code. However, I do try to encourage my developers to avoid these in public APIs when possible.
For example, in your case, I feel that UrlParser.ParseUrl(...) is probably better handled as a class. Look at System.Uri in the BCL - it provides a clean, easy-to-use interface for Uniform Resource Identifiers that works well and maintains the actual state. I prefer this approach to a utility method that works on strings, forcing the user to pass around a string, remember to validate it, etc.
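For instance (these are real System.Uri members; the parsed state lives in the object instead of being re-derived from strings at every call site):

using System;

Uri uri = new Uri("https://example.com/catalog?page=2");
Console.WriteLine(uri.Scheme);       // "https"
Console.WriteLine(uri.Host);         // "example.com"
Console.WriteLine(uri.AbsolutePath); // "/catalog"
Console.WriteLine(uri.Query);        // "?page=2"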
Utility classes are OK... as long as they don't violate design principles. Use them as happily as you'd use the core framework classes.
The classes should be well named and logical. Really they aren't so much "utility" but part of an emerging framework that the native classes don't provide.
Using things like extension methods can be useful as well, to align functionality onto the "right" class. BUT they can be a cause of some confusion, as the extensions usually aren't packaged with the class they extend, which is not ideal; still, they can be very useful and produce cleaner code.
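A small sketch of the extension-method option (UrlExtensions and GetHost are made-up names):

using System;

public static class UrlExtensions
{
    // Reads as if it lived on string itself: "https://example.com/x".GetHost()
    public static string GetHost(this string url)
    {
        return new Uri(url).Host;
    }
}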
You could always create an interface and use that with dependency injection with instances of classes that implement that interface instead of static classes.
The question becomes: is it really worth the effort? In some systems the answer is yes, but in others, especially smaller ones, the answer is probably no.
This really depends on the context, and on how we use it.
Utility classes, in themselves, are not bad. However, they will become bad if we use them the wrong way. Every design pattern (especially the Singleton pattern) can easily be turned into an anti-pattern; the same goes for utility classes.
In software design, we need a balance between flexibility and simplicity. If we're going to create a StringUtils which is only responsible for string manipulation:
Does it violate the SRP (Single Responsibility Principle)? -> Nope, it's the developers who put too many responsibilities into utility classes that violate the SRP.
"It can not be injected using DI frameworks" -> Is the StringUtils implementation going to vary? Are we going to switch its implementations at runtime? Are we going to mock it? Of course not.
=> Utility classes, themselves, are not bad. It's the developers' fault that makes them bad.
It all really depends on the context. If you're just going to create a utility class that has only a single responsibility, and it is only used privately inside a module or a layer, then you're still good with it.
I agree with some of the other responses here that it is the classic singleton which maintains a single instance of a stateful object which is to be avoided and not necessarily utility classes with no state that are evil. I also agree with Reed, that if at all possible, put these utility methods in a class where it makes sense to do so and where one would logically suspect such methods would reside. I would add, that often these static utility methods might be good candidates for extension methods.
I really, really try to avoid them, but who are we kidding... they creep into every system. Nevertheless, in the example given I would use a URL object which would then expose various attributes of the URL (protocol, domain, path and query-string parameters). Nearly every time I want to create a utility class of statics, I can get more value by creating an object that does this kind of work.
In a similar way I have created a lot of custom controls that have built in validation for things like percentages, currency, phone numbers and the like. Prior to doing this I had a Parser utility class that had all of these rules, but it makes it so much cleaner to just drop a control on the page that already knows the basic rules (and thus requires only business logic validation to be added).
I still keep the parser utility class and these controls hide that static class, but use it extensively (keeping all the parsing in one easy to find place). In that regard I consider it acceptable to have the utility class because it allows me to apply "Don't Repeat Yourself", while I get the benefit of instanced classes with the controls or other objects that use the utilities.
Utility classes used in this way are basically namespaces for what would otherwise be (pure) top-level functions.
From an architectural perspective there is no difference if you use pure top-level "global" functions or basic (*) pure static methods. Any pros or cons of one would equally apply to the other.
Static methods vs global functions
The main argument for using utility classes over global ("floating") functions is code organization, file and directory structure, and naming:
You might already have a convention for structuring class files in directories by namespace, but you might not have a good convention for top-level functions.
For version control (e.g. git) it might be preferable to have a separate file per function, but for other reasons it might be preferable to have them in the same file.
Your language might have an autoload mechanism for classes, but not for functions. (I think this would mostly apply to PHP)
You might prefer to write import Acme\Url; Url::parse(url) over import function Acme\parse_url; parse_url();. Or you might prefer the latter.
You should check if your language allows passing static methods and/or top-level functions as values. Perhaps some languages only allow one but not the other.
So it largely depends on the language you use, and conventions in your project, framework or software ecosystem.
(*) You could have private or protected methods in the utility class, or even use inheritance - something you cannot do with top-level functions. But most of the time this is not what you want.
Static methods/functions vs object methods
The main benefit of object methods is that you can inject the object, and later replace it with a different implementation with different behavior. Calling a static method directly works well if you don't ever need to replace it. Typically this is the case if:
the function is pure (no side effects, not influenced by internal or external state)
any alternative behavior would be considered as wrong, or highly strange. E.g. 1 + 1 should always be 2. There is no reason for an alternative implementation where 1 + 1 = 3.
You may also decide that the static call is "good enough for now".
And even if you start with static methods, you can make them injectable/pluggable later. Either by using function/callable values, or by having small wrapper classes with object methods that internally call the static method.
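A sketch of that last idea, with all names invented: the static helper stays, and callers that need substitutability depend on an interface instead.

using System;

public static class UrlParser
{
    public static string GetHost(string url) { return new Uri(url).Host; }
}

public interface IUrlParser
{
    string GetHost(string url);
}

public class DefaultUrlParser : IUrlParser
{
    // Delegates to the existing static utility; tests can supply another IUrlParser.
    public string GetHost(string url) { return UrlParser.GetHost(url); }
}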
They're fine as long as you design them well (that is, you don't have to change their signature from time to time).
These utility methods do not change that often, because they do one thing only. The problem comes when you want to tie a more complex object to another. If one of them needs to change or be replaced, it will be harder to do if you have them highly coupled.
Since these utility methods won't change that often, I would say that is not much of a problem.
I think it would be worse if you copy/pasted the same utility method over and over again.
The video How to Design a Good API and Why It Matters by Joshua Bloch explains several concepts to bear in mind when designing an API (and that would include your utility library). Although he's a recognized Java architect, the content applies to all programming languages.
Use them sparingly. You want to put as much logic as you can into your classes so they don't become just data containers.
But at the same time you can't really avoid utilities; they are required sometimes.
In this case I think it's OK.
FYI, there is the System.Web.HttpUtility class, which contains a lot of common HTTP utilities that you may find useful.