NUnit has an IConstraint interface (documentation here and code here). It seems to me that reusing this type in my core project for validation purposes makes sense.
Are there unforeseen side effects I have not yet recognized? Would you reuse the IConstraint type in your core project? Why or why not?
This is a rather opinion-based question. That said, there are two issues that come to mind.
Firstly, you can write something like Assert.That(foo, Is.EqualTo(bar)), which internally invokes an EqualConstraint. To make your custom constraint usable like this, you have to "overload" Is, so you can write Assert.That(foo, Is.AsGoodAs(bar)) (where AsGoodAs is your custom constraint invocation). See NUnit's Custom Constraints documentation for details. With this you will have two classes named Is (yours and NUnit's), and you will also call the default static methods like EqualTo via a derived type. ReSharper will warn you about this.
Secondly, writing intelligent assertion failure messages (like expected "this", but was "that") can be tricky to get right. You will certainly spend some time on this before you get what you want. Of course, this depends on how much you care about nice messages.
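To illustrate both points, here is a minimal sketch against the NUnit 3 constraint API (member names such as ApplyTo and Description differ in older NUnit versions); GoodEnoughConstraint and AsGoodAs are the hypothetical names used above:

using NUnit.Framework.Constraints;

// Hypothetical custom constraint; the comparison rule is a placeholder.
public class GoodEnoughConstraint : Constraint
{
    private readonly object _expected;

    public GoodEnoughConstraint(object expected)
    {
        _expected = expected;
        Description = "as good as " + _expected;   // used when building the failure text
    }

    public override ConstraintResult ApplyTo<TActual>(TActual actual)
    {
        bool success = Equals(actual, _expected);  // your domain-specific rule goes here
        return new ConstraintResult(this, actual, success);
    }
}

// "Overloading" Is: derive from NUnit's Is so its existing members remain reachable.
public class Is : NUnit.Framework.Is
{
    public static GoodEnoughConstraint AsGoodAs(object expected)
    {
        return new GoodEnoughConstraint(expected);
    }
}

With that in place, Assert.That(foo, Is.AsGoodAs(bar)) resolves through your derived Is class, which is exactly the duplicate-Is situation ReSharper warns about.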
In a new line of work I have been told to avoid using extension methods for types that you (or your organization) have no control over, meaning external libraries and framework types such as string, list, and others.
The argument I was given is that this will be bad if the framework developer later decides to implement a method that has the same name and/or parameters as your extension method.
While this problem may arise, this argument effectively reduces the usability of extension methods to nearly zero. Would this argument be considered valid? I am not suggesting using extension methods everywhere, but I would like to hear similar arguments for and against them.
It's an argument which has some merit, but in lots of cases the problems can be avoided:
If you control your code and can easily update it if necessary, then you can easily write unit tests to detect the problem and correct it if it occurs. (I assume you'd be validating any update to the external library before deploying it anyway.)
If you trust the external library developer to follow normal good practices for backward compatibility, then they shouldn't be adding members to interfaces anyway, as that would break existing implementations... so you could at least write extension methods for interfaces.
If your extension methods have names which would be very unlikely to be added to the external libraries, then practically speaking it's not an issue. For example, if your company writes Frobulators, and that's a term which is specific to you, then writing
public static Frobulator ToFrobulator(this string value)
really isn't going to be a problem in reality.
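Fleshed out, and assuming a hypothetical Frobulator type of your own, the example might look like this (extension methods must live in a top-level, non-generic static class):

public class Frobulator
{
    public string Name { get; private set; }

    public Frobulator(string name)
    {
        Name = name;
    }
}

public static class FrobulatorExtensions
{
    // "Frobulator" is specific enough to your domain that a framework author
    // is very unlikely to ever add a conflicting string.ToFrobulator() member.
    public static Frobulator ToFrobulator(this string value)
    {
        return new Frobulator(value);
    }
}

// Usage: Frobulator f = "widget-42".ToFrobulator();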
Of course, the risk is there, but defining something on closed or sealed types you have no control over is exactly what extension methods are about. If you only created extension methods on your own types, the usefulness of extension methods (compared to regular methods) would be minimal.
There is a very easy 'solution' for this in naming conventions. If you prefix or suffix your extension methods with a specific identifier, you can be reasonably sure Microsoft won't create a similarly named method.
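A small sketch of that convention; Acme and AcmeTruncate are placeholder names standing in for your organization's identifier:

public static class StringAcmeExtensions
{
    // The Acme prefix makes a future collision with a framework method implausible.
    public static string AcmeTruncate(this string value, int maxLength)
    {
        return value.Length <= maxLength ? value : value.Substring(0, maxLength);
    }
}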
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail in the CrmService, however, there is one problem. The SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or you were required to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track; have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.
Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It's a lot of upfront effort, but if you ever have to swap out Salesforce in a few years, you will be regarded as a hero.
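A minimal sketch of that catch-and-rethrow plus enum-mapping approach; CrmException, CrmLeadStatus and the Salesforce-side names (SalesforceService, SalesforceException, LeadStatus) are illustrative stand-ins, not actual SDK types:

using System;

// Stand-ins for the third-party types (in reality these come from the Salesforce SDK):
public enum LeadStatus { Open, Working, Qualified, Junk }
public class SalesforceException : Exception { public SalesforceException(string m) : base(m) { } }
public class SalesforceService
{
    public LeadStatus GetLeadStatus(string leadId) { return LeadStatus.Open; }
}

// Provider-agnostic types exposed by the wrapper:
public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public enum CrmLeadStatus { Open, Contacted, Qualified, Unqualified }

public class CrmService
{
    private readonly SalesforceService _salesforce = new SalesforceService();

    public CrmLeadStatus GetLeadStatus(string leadId)
    {
        try
        {
            LeadStatus sfStatus = _salesforce.GetLeadStatus(leadId);
            return MapStatus(sfStatus);
        }
        catch (SalesforceException ex)
        {
            // Keep the original exception as InnerException for diagnostics,
            // but expose only the provider-agnostic type to callers.
            throw new CrmException("Failed to read the lead status from the CRM.", ex);
        }
    }

    private static CrmLeadStatus MapStatus(LeadStatus status)
    {
        switch (status)
        {
            case LeadStatus.Open:      return CrmLeadStatus.Open;
            case LeadStatus.Working:   return CrmLeadStatus.Contacted;
            case LeadStatus.Qualified: return CrmLeadStatus.Qualified;
            default:                   return CrmLeadStatus.Unqualified;
        }
    }
}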
As with all things in software development, you need to weigh up the effort against the benefit you will gain. If you think you are never likely to swap out Salesforce, is it really needed? That is for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface would include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create and implement your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use dependency injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx): pass an ICrm as a parameter to its constructor (or to its methods, if you prefer). That way you keep your CrmService class cohesive and independent, and you can create and use different CRMs without needing to change most of your code.
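A rough sketch of that shape; ICrm, OrderStatus, CrmSalesForce and the Salesforce-side calls are illustrative names only, not a real SDK:

using System;

public enum OrderStatus { Pending, Shipped, Cancelled }

public interface ICrm
{
    void MakePayment(decimal amount);
    OrderStatus CheckOrder(string orderId);
}

// Salesforce-specific implementation; all provider details stay in here.
public class CrmSalesForce : ICrm
{
    public void MakePayment(decimal amount)
    {
        // Call the Salesforce API here.
    }

    public OrderStatus CheckOrder(string orderId)
    {
        string salesforceStatus = "Shipped"; // would come from the Salesforce API
        return (OrderStatus)Enum.Parse(typeof(OrderStatus), salesforceStatus);
    }
}

public class CrmService
{
    private readonly ICrm _crm;

    // The concrete CRM is injected, so CrmService never references Salesforce directly.
    public CrmService(ICrm crm)
    {
        _crm = crm;
    }

    public OrderStatus CheckOrder(string orderId)
    {
        return _crm.CheckOrder(orderId);
    }
}

// Usage: var service = new CrmService(new CrmSalesForce());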
I'd like to have a test that verifies that, for each use of Automapper.Mapper.Map<T1,T2>(), that there is a corresponding mapping configuration (AutoMapper.Mapper.CreateMap<T1,T2>()) in my Bootstrapper.
I was just about to go down the road of using Roslyn to interface with the compiler and find all usages of the Map<> method and then try to map using those instances. Although that would do the trick, I think I'd rather use something that already exists.
Does this exist? If not, is there a better way to do this than with Roslyn?
You're treading too deep into meta-programming.
The best thing you can do is to confine your mapped classes to one or several namespaces, and check that there are mappings for all classes in those namespaces. For this you won't need Roslyn, Cecil or any such thing.
If you're abandoning compile-time checks, at least you have to put in place some conventions, and if your conventions are well defined, you can verify them.
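As a sketch of such a convention check, the test below assumes an older AutoMapper version with the static Mapper API (Mapper.FindTypeMapFor), NUnit, and placeholder namespaces MyApp.Domain and MyApp.ViewModels with a FooViewModel-maps-from-Foo naming convention:

using System;
using System.Linq;
using AutoMapper;
using NUnit.Framework;

[TestFixture]
public class MappingConventionTests
{
    [Test]
    public void Every_view_model_has_a_map_from_its_domain_type()
    {
        // Replace with the assembly that actually holds your view models.
        var assembly = typeof(MappingConventionTests).Assembly;

        var viewModels = assembly.GetTypes()
            .Where(t => t.IsClass && t.Namespace == "MyApp.ViewModels");

        foreach (var viewModel in viewModels)
        {
            // Convention: MyApp.ViewModels.FooViewModel maps from MyApp.Domain.Foo.
            var domainTypeName = "MyApp.Domain." + viewModel.Name.Replace("ViewModel", "");
            var domainType = assembly.GetType(domainTypeName);

            Assert.IsNotNull(domainType, "No domain type found for " + viewModel.Name);
            Assert.IsNotNull(Mapper.FindTypeMapFor(domainType, viewModel),
                "No CreateMap registered in the bootstrapper for " + viewModel.Name);
        }
    }
}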
The problem is that you can't be sure; there are loads of cases where it's not deterministic (e.g. a model of type Y might be passed to AutoMapper downcast to object).
You should provide coverage of the actual consumption of the mappings as part of your normal code coverage.
Whether that means unit tests for every mapping method is a different question...
When it comes to extension methods, class names seem to do nothing but provide a grouping, which is what namespaces do. As soon as I include the namespace, I get all the extension methods in the namespace. So my question comes down to this: is there some value I can get from the extension methods being in a static class?
I realize it is a compiler requirement for them to be put into a static class, but it seems like, from an organizational perspective, it would be reasonable for it to be legal to allow extension methods to be defined in namespaces without classes surrounding them. Rephrasing the above question another way: is there any practical benefit, or help in some scenario, that I get as a developer from having extension methods attached to a class rather than to a namespace?
I'm basically just looking to gain some intuition, confirmation, or insight. I suspect it may be that it was easiest to implement extension methods that way, and it wasn't worth the time to allow extension methods to exist on their own in namespaces.
Perhaps you will find a satisfactory answer in Eric Lippert's blog post Why Doesn't C# Implement "Top Level" Methods? (in turn prompted by SO question Why C# is not allowing non-member functions like C++), whence (my emphasis):
I am asked "why doesn't C# implement feature X?" all the time. The answer is always the same: because no one ever designed, specified, implemented, tested, documented and shipped that feature. All six of those things are necessary to make a feature happen. All of them cost huge amounts of time, effort and money. Features are not cheap, and we try very hard to make sure that we are only shipping those features which give the best possible benefits to our users given our constrained time, effort and money budgets.
I understand that such a general answer probably does not address the specific question.
In this particular case, the clear user benefit was in the past not large enough to justify the complications to the language which would ensue. By restricting how different language entities nest inside each other we (1) restrict legal programs to be in a common, easily understood style, and (2) make it possible to define "identifier lookup" rules which are comprehensible, specifiable, implementable, testable and documentable.
By restricting method bodies to always be inside a struct or class, we make it easier to reason about the meaning of an unqualified identifier used in an invocation context; such a thing is always an invocable member of the current type (or a base type).
To me, putting them in a class is all about grouping related functions together. You may have a number of extension methods in the same namespace. If I wanted to write some extension methods for the DirectoryInfo and FileInfo classes, I would create two classes in an IO namespace called DirectoryInfoExtensions and FileInfoExtensions.
You can still call the extension methods like you would any other static method. I don't know how the compiler works, but perhaps an assembly compiled for .NET 2.0 can still be used by legacy .NET frameworks. It also means the existing reflection library works and can be used to run extension methods without any changes. Again, I am no compiler expert, but I think the "this" keyword in the context of an extension method is just syntactic sugar that allows us to use the methods as though they belonged to the object.
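For example, a sketch of that grouping; MyCompany.IO and the method name IsHidden are made-up names, and the commented lines show that the extension can still be called as an ordinary static method:

using System.IO;

namespace MyCompany.IO
{
    public static class FileInfoExtensions
    {
        // Grouped with other FileInfo helpers; a sibling DirectoryInfoExtensions
        // class would hold the DirectoryInfo ones.
        public static bool IsHidden(this FileInfo file)
        {
            return (file.Attributes & FileAttributes.Hidden) != 0;
        }
    }
}

// var file = new FileInfo(@"C:\temp\log.txt");
// bool viaExtensionSyntax = file.IsHidden();
// bool viaStaticCall = MyCompany.IO.FileInfoExtensions.IsHidden(file);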
The .NET Framework requires that every method exist in a class which is within an assembly. A language could allow methods or fields to be declared without an explicitly-specified enclosing class, place all such methods in assembly Fnord into a class called Fnord_TopLevelDefault, and then search the Fnord_TopLevelDefault class of all assemblies when performing method lookup; the CLS specification would have to be extended for this feature to work smoothly for mixed-language projects, however. As with extension methods, such behavior could be CLS compliant if the CLS didn't acknowledge it, since code in a language which didn't use such a feature could use a "free-floating" method Foo in assembly Fnord by spelling it Fnord_TopLevelDefault.Foo, but that would be a bit ugly.
A more interesting question is the extent to which allowing an extension method Foo to be invoked from an arbitrary class, without requiring a clearly visible reference to that class, is less evil than allowing non-extension static methods to be likewise invoked. I don't think Math.Sqrt(x) is really more readable than Sqrt(x); even if one didn't want to import Math everywhere, being able to do so at least locally could in some cases improve code legibility considerably.
They can reference other static class members internally.
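For instance, a tiny illustration of that point; StringExtensions and its members are hypothetical names:

using System;

public static class StringExtensions
{
    private static readonly char[] Separators = { ' ', '\t', '\n' };

    public static int WordCount(this string text)
    {
        return Split(text).Length;   // uses a private helper of the same static class
    }

    // Only members of this static class can see this helper.
    private static string[] Split(string text)
    {
        return text.Split(Separators, StringSplitOptions.RemoveEmptyEntries);
    }
}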
You should not only consider the consumer side aspect, but also the code maintenance aspect.
Even though IntelliSense doesn't distinguish with respect to the owning class, the information is still there through tooltips and whatever productivity tools you have added to your IDE. This can easily be used to provide some context for the method in what would otherwise be a flat (and sometimes very long) list.
Consumer-wise, bottom line: I do not think it matters much.
This is something I really like to do once I have given a problem some thought: just creating classes, enums, interfaces, structs, etc. to define the interfaces, in the programming sense.
But when doing this, the compiler obviously complains because I have methods all over the place with no code inside, so methods that need to return values, etc. are flagged as errors.
Now you could say: why compile, then? But to me, being able to compile and see that your interfaces compile is an important step. Then, when you are satisfied, you can add the missing implementations and test your changes.
So my question is, do you do this? If so, how?
What do you think are the pros and cons of this style? Also is there a name for this style of programming?
Notice that this is different from the more commonly used way of programming (from what I have seen), where the programmer starts implementing right away and adds more types as he needs them, always moving forward with the implementations.
You can compile with all interfaces, no problem.
You can compile structs and classes that use automatic properties ({ get; set; }) and empty void methods.
Methods with return values can be compiled if they throw an exception. For this purpose, throw NotImplementedException. No return statements needed.
By the time you deploy, there should not be any NotImplementedExceptions; any members that do not have implementations by design should instead throw NotSupportedException.
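A minimal sketch of such a compilable, design-only skeleton using the conventions above; Order, IOrderRepository and SqlOrderRepository are placeholder names:

using System;

public class Order
{
    public int Id { get; set; }          // automatic properties compile with no body
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order Load(int id);
    void Save(Order order);
}

public class SqlOrderRepository : IOrderRepository
{
    public Order Load(int id)
    {
        // Not written yet: throwing keeps the compiler happy without a return statement.
        throw new NotImplementedException();
    }

    public void Save(Order order)
    {
        // Unsupported by design, so by the time this ships it throws NotSupportedException.
        throw new NotSupportedException("This repository is read-only.");
    }
}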
why?
I've not personally tried to model an entire solution this way. I typically start with an interface, write tests, and then implement it. As I discover the need for dependencies, I write interfaces so that I can mock them in my tests.
Once that class is implemented using only interfaces, I begin to write tests for and implement the interfaces that I had to write to support the first class, then repeat this process until I've implemented everything.
This is extremely similar to what's normally called "mocking". The one difference is that in mocking, a mock class is created specifically for test purposes. The mock object doesn't even attempt to carry out the functions of the real object. In some cases it just includes enough of a body to let the code compile so you can play with the interface. More often, it includes some test code to verify the interface (e.g., check that requirements of the real interface are being met, even though it does nothing else with the values). Just for a really trivial example, a mock sqrt routine might simply verify that its argument is >= 0.0.
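In C# terms, the sqrt example might look something like this hand-rolled mock (ICalculator and MockCalculator are made-up names); it checks the interface's contract without doing any real work:

using System;

public interface ICalculator
{
    double Sqrt(double value);
}

public class MockCalculator : ICalculator
{
    public double Sqrt(double value)
    {
        // Verify the requirement of the real interface...
        if (value < 0.0)
            throw new ArgumentOutOfRangeException("value", "Sqrt requires a non-negative argument.");

        // ...but return a dummy result instead of computing anything.
        return 0.0;
    }
}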
It's quite easy to do with the built-in class designer in VS.
When adding classes/methods/properties/... it generates compilable code stubs for everything.
The pretty picture is a bonus.
This is quite usable when designing (part of) the model. You don't want to use this for GUI or DAL layers.