How do you balance Framework/API Design and TDD - C#

We are building a framework that will be used by other developers, and so far we have been following TDD practices closely. We have interfaces everywhere and well-written unit tests that mock those interfaces.
However, we are now reaching the point where some of the properties/methods of the input classes need to be internal and not visible to our framework users (for example, an object Id). The problem then is that we can't put those fields/methods on the interface, as the interface cannot express accessibility.
We could:
Still use interfaces and cast back to the concrete class in the first line of the method, but that seems to defeat the purpose of interfaces.
Use classes as input parameters, breaking the TDD rule that everything should be interfaces.
Provide another layer which does some translation between public interfaces and internal interfaces (a sketch of this option appears below the question).
Is there an existing pattern/approach to deal with this? What do the TDD people say should be done?
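For illustration, here is a minimal sketch of the third option (a thin internal interface behind the public one); all type and assembly names below are made up, not from any real framework:

using System;

// Public contract that framework users see and that tests can mock.
public interface IWidget
{
    string Name { get; }
}

// Internal contract carrying the members users should not see (e.g. the Id).
internal interface IInternalWidget : IWidget
{
    Guid Id { get; }
}

// Framework-provided input class implements both.
public class Widget : IInternalWidget
{
    private readonly Guid _id = Guid.NewGuid();

    public string Name { get; set; }

    Guid IInternalWidget.Id => _id;
}

// In the framework assembly, so the test project can also see/mock IInternalWidget:
// [assembly: InternalsVisibleTo("MyFramework.Tests")]

Internally the framework works against IInternalWidget; the published API and the users only ever deal with IWidget.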

First, there is no general TDD rule that says everything should be an interface. This is coming from a specific style that is not practiced by every TDDer. See http://martinfowler.com/articles/mocksArentStubs.html
Second, you are experiencing the dichotomy of public vs. published. Our team "solved" this problem by introducing a #Published annotation that shows up in the API documentation. Eclipse uses naming conventions, as far as I know. I don't know of a really good solution to the problem, unfortunately.
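For what it's worth, a minimal sketch of what such a marker could look like in C# (the attribute name and any documentation tooling that consumes it are assumptions, not an existing library):

using System;

// Marker attribute; a documentation tool could reflect over the assembly and
// list only [Published] members as the supported API surface.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Property)]
public sealed class PublishedAttribute : Attribute { }

public class OrderService
{
    [Published]
    public void PlaceOrder(int orderId) { /* supported, documented API */ }

    // Public for technical reasons, but not part of the published surface.
    public void RebuildCaches() { }
}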

You need to be able to replicate those internal methods in your mock objects, and call them in the same way the real object would call them. Then you focus your unit test on the public method that relies on the internal method you need to test. If these internal methods are calling other objects or doing a lot of work, you may need to refactor your design.
Good luck.

Sounds like you want to use dependency injection for your class; search Stack Overflow for examples. Then you can set this Id either in the constructor or through a setter, whichever you prefer.
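A minimal sketch of that idea, with hypothetical type names; the id comes from outside instead of living on the public interface:

using System;

public class Widget
{
    // Not part of any public interface; only the framework (and its tests) see it.
    internal Guid Id { get; }

    // Constructor injection: the framework supplies the id when it creates the object.
    internal Widget(Guid id)
    {
        Id = id;
    }

    // Alternatively, drop the constructor parameter and use an internal setter:
    // internal Guid Id { get; set; }
}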

Is it recommended to mock concrete classes?

Most of the examples given on mocking framework websites mock interfaces. Take NSubstitute, which I'm currently using: all of its mocking examples mock interfaces.
But in reality, I've seen some developers mock concrete classes instead. Is it recommended to mock concrete classes?
In theory there is absolutely no problem mocking a concrete class; we are testing against a logical interface (rather than a keyword interface), and it does not matter whether that logical interface is provided by a class or interface.
In practice .NET/C# makes this a bit problematic. As you mentioned a .NET mocking framework I'm going to assume you're restricted to that.
In .NET/C#, members are non-virtual by default, so any proxy-based method of mocking behaviour (i.e. deriving from the class and overriding all the members to do test-specific stuff) will not work unless you explicitly mark the members as virtual. This leads to a problem: you are using an instance of a mocked class that is meant to be completely safe in your unit test (i.e. it won't run any real code), but unless you have made sure everything is virtual you may end up with a mix of real and mocked code running. This can be especially problematic if there is constructor logic, which always runs, and it is compounded if there are other concrete dependencies that need to be new'd up.
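To make that concrete, here is a small sketch using NSubstitute against a hypothetical class; only the virtual member can be intercepted, while the non-virtual one always runs the real code:

using NSubstitute;

public class Calculator
{
    public virtual int Add(int a, int b) => a + b;   // virtual: can be substituted
    public int Multiply(int a, int b) => a * b;      // non-virtual: the real code always runs
}

public class CalculatorTests
{
    public void Substituting_a_concrete_class()
    {
        var calc = Substitute.For<Calculator>();
        calc.Add(1, 2).Returns(42);        // works, because Add is virtual
        // calc.Multiply(...) cannot be configured; calling it runs the real implementation.
    }
}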
There are a few ways to work around this.
Use interfaces. This works and is what we advise in the NSubstitute documentation, but has the downside of potentially bloating your codebase with interfaces that may not actually be needed. Arguably if we find good abstractions in our code we'll naturally end up with neat, reusable interfaces we can test to. I haven't quite seen it pan out like that, but YMMV. :)
Diligently go around making everything virtual. An arguable downside to this is that we're suggesting all these members are intended to be extension points in our design, when we really just want to change the behaviour of the whole class for testing. It also doesn't stop constructor logic running, nor does it help if the concrete class requires other dependencies.
Use assembly re-writing via something like the Virtuosity add-in for Fody, which you can use to modify all class members in your assembly to be virtual.
Use a non-proxy based mocking library like TypeMock (paid), JustMock (paid), Microsoft Fakes (requires VS Ultimate/Enterprise, though its predecessor, Microsoft Moles, is free) or Prig (free + open source). I believe these are able to mock all aspects of classes, as well as static members.
A common complaint lodged against the last idea is that you are testing via a "fake" seam; we are going outside the mechanisms normally used for extending code to change the behaviour of our code. Needing to go outside these mechanisms could indicate rigidity in our design. I understand this argument, but I've seen cases where the noise of creating another interface/s outweighs the benefits. I guess it's a matter of being aware of the potential design issue; if you don't need that feedback from the tests to highlight design rigidity then they're great solutions.
A final idea I'll throw out there is to play around with changing the size of the units in our tests. Typically we have a single class as a unit. If we have a number of cohesive classes as our unit, and have interfaces acting as a well-defined boundary around that component, then we can avoid having to mock as many classes and instead just mock over a more stable boundary. This can make our tests a bit more complicated, with the advantage that we're testing a cohesive unit of functionality and being encouraged to develop solid interfaces around that unit.
Hope this helps.
Update:
Three years later, I want to admit that I have changed my mind.
In theory I still do not like to create interfaces just to facilitate the creation of mock objects. In practice (I am using NSubstitute) it is much easier to use Substitute.For<MyInterface>() than to mock a real class with multiple constructor parameters, e.g. Substitute.For<MyClass>(mockedParam1, mockedParam2, mockedParam3), where each parameter has to be mocked separately. Other potential troubles are described in the NSubstitute documentation.
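To illustrate the difference (all type names here are hypothetical):

using NSubstitute;

// Mocking an interface: nothing else is needed.
var repo = Substitute.For<IOrderRepository>();

// Mocking a concrete class: every constructor dependency has to be supplied,
// and usually each of those has to be substituted first.
var connection = Substitute.For<IDbConnection>();
var logger = Substitute.For<ILogger>();
var mapper = Substitute.For<IMapper>();
var classRepo = Substitute.For<OrderRepository>(connection, logger, mapper);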
In our company the recommended practice now is to use interfaces.
Original answer:
If you don't have a requirement to create multiple implementations of the same abstraction, do not create an interface.  
As pointed out by David Tchepak, you don't want to bloat your codebase with interfaces that may not actually be needed.
From http://blog.ploeh.dk/2010/12/02/InterfacesAreNotAbstractions.aspx
Do you extract interfaces from your classes to enable loose coupling? If so, you probably have a 1:1 relationship between your interfaces and the concrete classes that implement them. That's probably not a good sign, and violates the Reused Abstractions Principle (RAP). Having only one implementation of a given interface is a code smell.
If your target is testability, I prefer the second option from David Tchepak's answer above.
However, I am not convinced that you have to make everything virtual. It's sufficient to make virtual only the methods that you are going to substitute.
I also add a comment next to the method declaration noting that the method is virtual only to make it substitutable for unit-test mocking.
However note that substitution of concrete classes instead of interfaces has some limitations.
E.g. for NSubstitute:
Note: Recursive substitutes will not be created for classes, as creating and using classes can have potentially unwanted side effects.
The question is rather: Why not?
I can think of a couple of scenarios where this is useful, like:
Implementation of a concrete class is not yet complete, or the guy who did it is unreliable. So I mock the class as it is specified and test my code against it.
It can also be useful to mock classes that do things like database access. If you don't have a test database you might want to return values for your tests that are always constant (which is easy by mocking the class).
It's not that it is recommended; it's that you can do this if you have no other choice.
Usually a well-designed project relies on defining interfaces for its separate components so you can test each of them in isolation by mocking the others. But if you are working with legacy code (or code that you are not allowed to change) and still want to test your classes, then you have no choice, and you cannot be criticized for it (assuming you made the effort to switch these components to interfaces and were denied the right to).
Suppose we have:
class Foo {
    fun bar() = if (someCondition) {
        "Yes"
    } else {
        "No"
    }
}
There's nothing preventing us from doing the following mocking in the test code:
val foo = mock<Foo>()
whenever(foo.bar()).thenReturn("Maybe")
The problem is that this sets up incorrect behavior of class Foo. The real instance of class Foo will never be able to return "Maybe".

How to model apps using declarations (interfaces) only?

This is something I really like doing after I have given the problem some thought: just creating classes, enums, interfaces, structs, etc. to define interfaces in the programming sense.
But when doing this, the compiler obviously complains because I have methods all around with no code inside, so methods that need to return values are flagged.
Now you could say: why compile then? But to me, being able to compile and see that your interfaces compile is an important step. Then when you are satisfied, you can add the missing implementations and test your changes.
So my question is, do you do this? If so, how?
What do you think are the pros and cons of this style? Also is there a name for this style of programming?
Notice, though, that this is different from another, more commonly used way of programming (from what I have seen), where the programmer starts implementing right away and adds more types as the need arises, always going forward with implementations.
You can compile with all interfaces, no problem.
You can compile structs and classes that use automatic properties ({ get; set; }) and empty void methods.
Methods with return values can be compiled if they throw an exception. For this purpose, throw NotImplementedException. No return statements needed.
By the time you deploy, there should not be any NotImplementedExceptions; any members that do not have implementations by design should instead throw NotSupportedException.
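A minimal sketch of such a compilable skeleton, using the tricks above; the type names are invented for illustration:

using System;

public interface IOrderService
{
    decimal GetTotal(int orderId);
}

public class Order
{
    // Automatic properties compile with no hand-written code at all.
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

public class OrderService : IOrderService
{
    // Compiles despite the declared return type: the throw stands in for a return statement.
    public decimal GetTotal(int orderId) => throw new NotImplementedException();

    // Empty void methods compile as-is.
    public void Reload() { }
}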
why?
I've not personally tried to model an entire solution this way. I typically start with an interface, write tests, and then implement it. As I discover the need for dependencies, I write interfaces so that I can mock them in my tests.
Once that class is implemented using only interfaces, I begin to write tests for and implement the interfaces that I had to write to support the first class, then repeat this process until I've implemented everything.
This is extremely similar to what's normally called "mocking". The one difference is that in mocking, a mock class is created specifically for test purposes. The mock object doesn't even attempt to carry out the functions of the real object. In some cases it just includes enough of a body to let the code compile so you can play with the interface. More often, it includes some test code to verify the interface (e.g., check that requirements of the real interface are being met, even though it does nothing else with the values). Just for a really trivial example, a mock sqrt routine might simply verify that its argument is >= 0.0.
It's quite easy to do with the built-in class designer in VS.
When adding classes/methods/properties/... it generates compilable code stubs for everything.
The pretty picture is bonus.
This is quite usable when designing (part of) the Model. You don't want to use this for GUI or DAL layers.

Interface design? Can I do it iteratively? How should I handle changes to the interface?

What is the best approach for defining interfaces in either C# or Java? Do we need to make them generic up front, or add methods as and when the real need arises?
Regards,
Srinivas
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
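A minimal sketch of that versioning approach, with made-up names:

using System;

// Shipped in version 1; once published, it never changes.
public interface IReportGenerator
{
    void Generate(string path);
}

// Version 2 adds a member by extending the original interface instead of
// modifying it, so existing implementations keep compiling; new consumers
// can check for (or require) the richer contract.
public interface IReportGeneratorV2 : IReportGenerator
{
    void Generate(string path, IProgress<int> progress);
}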
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List.Sort consumes the IComparer interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is a part of your interface it is very difficult to change or remove it. Be very conservative in your creation of your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points then you probably didn't need interfaces to begin with but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design because changes to the interface is going to be difficult if a lot of code is going to suddenly break in the next iteration.
Changes to the interface needs to be taken with care, therefore, it would be ideal if changes wouldn't have to be made after the initial release. This means, that the first iteration will be very important in terms of the design.
However, if changes are required, one way to implement the changes to the interface would be to deprecate the old methods and provide a transition path for old code to use the newly-designed features. This does mean that the deprecated methods will still stick around to prevent the code using the old methods from breaking -- this is not ideal, so it is a "price to pay" for not getting it right the first time around.
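A small sketch of that transition path using the standard [Obsolete] attribute (the member names are invented for illustration):

using System;

public class PaymentRequest
{
    public decimal Amount { get; set; }
    public string Currency { get; set; }
}

public interface IPaymentGateway
{
    [Obsolete("Use Charge(PaymentRequest) instead; this overload will be removed in a future version.")]
    void Charge(decimal amount, string currency);

    // The newly-designed replacement lives alongside the deprecated member.
    void Charge(PaymentRequest request);
}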
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface, the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means, modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code after all... Never blindly follow best practice advice without understanding its rationale.
Now, if you're making a public API that other teams or customers will work with (think plugins, extension points or things like that), then you have to be conservative about what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff, though many of them still apply. Know what applies or not to your situation.
No rule will make up for lack of common sense.

Is there a way to force a C# class to implement certain static functions?

I am developing a set of classes that implement a common interface. A consumer of my library shall expect each of these classes to implement a certain set of static functions. Is there anyway that I can decorate these class so that the compiler will catch the case where one of the functions is not implemented.
I know it will eventually be caught when building the consuming code. And I also know how to get around this problem using a kind of factory class.
Just curious to know if there is any syntax/attributes out there for requiring static functions on a class.
Edit: Removed the word 'interface' to avoid confusion.
No, there is no language support for this in C#. There are two workarounds that I can think of immediately:
use reflection at runtime; cross your fingers and hope...
use a singleton / default-instance / similar to implement an interface that declares the methods (see the sketch below)
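A minimal sketch of that second workaround, with hypothetical names: the would-be static members live on a default instance that implements an ordinary interface, which the compiler can check:

public interface IWidgetFactory
{
    Widget CreateDefault();
}

public class Widget
{
    // The compiler-checked stand-in for "required static members":
    // a default instance exposing them through an ordinary interface.
    public static readonly IWidgetFactory Factory = new DefaultFactory();

    private class DefaultFactory : IWidgetFactory
    {
        public Widget CreateDefault() => new Widget();
    }
}

// Usage: var widget = Widget.Factory.CreateDefault();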
(update)
Actually, as long as you have unit testing, the first option isn't actually as bad as you might think if (like me) you come from a strict "static typing" background. The fact is, it works fine in dynamic languages. And indeed, this is exactly how my generic operators code works - it hopes you have the static operators. At runtime, if you don't, it will laugh at you in a suitably mocking tone... but it can't check at compile-time.
No. Basically it sounds like you're after a sort of "static polymorphism". That doesn't exist in C#, although I've suggested a sort of "static interface" notion which could be useful in terms of generics.
One thing you could do is write a simple unit test to verify that all of the types in a particular assembly obey your rules. If other developers will also be implementing the interface, you could put that test code into some common place so that everyone implementing the interface can easily test their own assemblies.
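A sketch of such a test using reflection; the marker interface (IWidget) and the required static method name (CreateDefault) are assumptions standing in for your own rules:

using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker interface the rule applies to.
public interface IWidget { }

public static class StaticContractChecker
{
    // Throws if any concrete IWidget implementation in the assembly
    // is missing a public static method named "CreateDefault".
    public static void AssertStaticMembers(Assembly assembly)
    {
        var offenders = assembly.GetTypes()
            .Where(t => typeof(IWidget).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
            .Where(t => t.GetMethod("CreateDefault", BindingFlags.Public | BindingFlags.Static) == null)
            .ToList();

        if (offenders.Any())
            throw new Exception("Missing static CreateDefault: " +
                string.Join(", ", offenders.Select(t => t.FullName)));
    }
}

// In a unit test: StaticContractChecker.AssertStaticMembers(typeof(SomeWidget).Assembly);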
This is a great question and one that I've encountered in my projects.
Some people hold that interfaces and abstract classes exist for polymorphism only, not for forcing types to implement certain methods. Personally, I consider polymorphism a primary use case, and forced implementation a secondary. I do use the forced implementation technique fairly often. Typically, it appears in framework code implementing a template pattern. The base/template class encapsulates some complex idea, and subclasses provide numerous variations by implementing the abstract methods. One pragmatic benefit is that the abstract methods provide guidance to other developers implementing the subclasses. Visual Studio even has the ability to stub the methods out for you. This is especially helpful when a maintenance developer needs to add a new subclass months or years later.
The downside is that there is no specific support for some of these template scenarios in C#. Static methods are one. Another one is constructors; ideally, ISerializable should force the developer to implement the protected serialization constructor.
The easiest approach probably is (as suggested earlier) to use an automated test to check that the static method is implemented on the desired types. Another viable idea already mentioned is to implement a static analysis rule.
A third option is to use an Aspect-Oriented Programming framework such as PostSharp. PostSharp supports compile-time validation of aspects. You can write .NET code that reflects over the assembly at compile time, generating arbitrary warnings and errors. Usually, you do this to validate that an aspect usage is appropriate, but I don't see why you couldn't use it for validating template rules as well.
Unfortunately, no, there's nothing like this built into the language.
While there is no language support for this, you could use a static analysis tool to enforce it. For example, you could write a custom rule for FxCop that detects an attribute or interface implementation on a class and then checks for the existence of certain static methods.
The singleton pattern does not help in all cases. My example is from an actual project of mine. It is not contrived.
I have a class (let's call it "Widget") that inherits from a class in a third-party ORM. If I instantiate a Widget object (therefore creating a row in the db) just to make sure my static methods are declared, I'm making a bigger mess than the one I'm trying to clean up.
If I create this extra object in the data store, I've got to hide it from users, calculations, etc.
I use interfaces in C# to make sure that I implement common features in a set of classes.
Some of the methods that implement these features require instance data to run. I code these methods as instance methods, and use a C# interface to make sure they exist in the class.
Some of these methods do not require instance data, so they are static methods. If I could declare interfaces with static methods, the compiler could check whether or not these methods exist in the class that says it implements the interface.
No, there would be no point in this feature. Interfaces are basically a scaled down form of multiple inheritance. They tell the compiler how to set up the virtual function table so that non-static virtual methods can be called properly in descendant classes. Static methods can't be virtual, hence, there's no point in using interfaces for them.
The approach that gets you closest to what you need is a singleton, as Marc Gravell suggested.
Interfaces, among other things, let you provide some level of abstraction to your classes so you can use a given API regardless of the type that implements it. However, since you DO need to know the type of a static class in order to use it, why would you want to force that class to implement a set of functions?
Maybe you could use a custom attribute like [ImplementsXXXInterface] and provide some run time checking to ensure that classes with this attribute actually implement the interface you need?
If you're just after getting those compiler errors, consider this setup:
Define the methods in an interface.
Declare the methods as abstract in a base class.
Implement the public static methods, and have the abstract method overrides simply call the static methods.
It's a little bit of extra code, but you'll know when someone isn't implementing a required method.
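A minimal sketch of that setup (names invented): the abstract overrides force every subclass to provide the member, and each override simply forwards to the static method the library expects:

public interface IParser
{
    string FormatName();
}

public abstract class ParserBase : IParser
{
    // The compiler now forces every concrete parser to provide this member...
    public abstract string FormatName();
}

public class XmlParser : ParserBase
{
    // ...the static method the library actually expects...
    public static string StaticFormatName() => "xml";

    // ...and the override simply forwards to it, so forgetting either piece
    // produces a compile error.
    public override string FormatName() => StaticFormatName();
}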
