When should I create a private method in C#?

Since I started following DI and TDD, I'm a bit confused about when I should create a private method. Could you please tell me what rules of thumb to consider when making a method private, keeping testability and dependency injection in mind?
I believe an example might help here:
Assume I have an interface with 3 methods like the following:
public interface IWordFrequencyAnalyzer
{
    int CalculateHighestFrequency(string forText);
    int CalculateFrequencyForWord(string text, string word);
    IList<IWordFrequency> CalculateMostFrequentNWords(
        string text, int n);
}
Now, I can write a class that implements a private method which takes a string and computes the frequency of words in it, and then each public method can manipulate that result according to its own requirement. In this case I'll be able to test the contract.
OR
I can extract that private method into a separate class, say a WordProcessor which implements IWordProcessor, with a single public method that splits the sentence into words, and pass it as a dependency to the implementation of IWordFrequencyAnalyzer. This way the implementation of splitting the words is testable as well.
Which approach will you suggest?
Thanks,
-Mike

Since getting more into DI and TDD I ended up using private methods less and less, but the reason was not because I needed them to be public for tests. It's more because, as a by-product of using DI, I'm learning more about applying the SOLID principles to my code and this (in turn) is leading me to write classes with less methods overall and almost none private.
So, let's say you have a piece of your code you're using throughout various methods of your class. You know about DRY and you refactor it out to a private method and all's good. Except that often you realize you can generalize what that private method does and inject that functionality as an external dependency of the class: this way, you can test it and mock it, sure, but above all you can reuse what that method does even in other classes or other projects if needs be. Moving it out of the original class is an application of the single responsibility principle.
As I said, this change in the way I code does not depend directly on the fact that I use TDD or DI; it's a by-product of the principles TDD encourages me to enforce and of the convenience that DI containers provide in composing the many small classes that result from this approach.
EDIT: The second approach in the example you added to your question is a good example of what I was talking about. The fact your new WordProcessor class is now testable is a plus, but the fact it's now composable and reusable is the real benefit.
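A minimal sketch of that second approach might look like the following (the SplitIntoWords member name and the grouping logic are my own assumptions, not part of the original question):
using System.Collections.Generic;
using System.Linq;

public interface IWordProcessor
{
    IList<string> SplitIntoWords(string text);
}

public class WordFrequencyAnalyzer : IWordFrequencyAnalyzer
{
    private readonly IWordProcessor _wordProcessor;

    public WordFrequencyAnalyzer(IWordProcessor wordProcessor)
    {
        _wordProcessor = wordProcessor;
    }

    public int CalculateHighestFrequency(string forText)
    {
        // The word-splitting "private" logic now lives behind its own interface,
        // so it can be tested (and mocked) independently of the frequency logic.
        var words = _wordProcessor.SplitIntoWords(forText);
        return words.GroupBy(w => w.ToLowerInvariant())
                    .Select(g => g.Count())
                    .DefaultIfEmpty(0)
                    .Max();
    }

    public int CalculateFrequencyForWord(string text, string word)
    {
        return _wordProcessor.SplitIntoWords(text)
                             .Count(w => string.Equals(w, word, System.StringComparison.OrdinalIgnoreCase));
    }

    public IList<IWordFrequency> CalculateMostFrequentNWords(string text, int n)
    {
        throw new System.NotImplementedException(); // omitted for brevity in this sketch
    }
}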

Your methods should be private unless they need to be public for your application, not your test.
Generally you shouldn't make things public just to support unit testing (internal might help there). Even then, you should still be testing the public interface, not private details of the class, which are much more likely to change and break your tests.
If you have access to a copy, Roy Osherove's "The Art of Unit Testing" addresses this issue pretty well.

You specify methods as private when they are only used internally by the object, normally called from other methods, some of which will be public or protected. You are more likely to create private methods as a result of following the DRY (Don't Repeat Yourself) principle. In that scenario you'd extract common code used by several methods into a private method called by those methods.

Making every method of a class public and writing a unit test for each of them will quickly create a maintenance nightmare, because you will have to change your tests for every little refactoring in your production code. You want to test the behavior of the class itself, and for that you don't need every method to be public.

Keep your methods private whenever possible, and don't change them to public only for unit tests. When you see that something is hard to test, consider changing the architecture of your class a little. In my experience, whenever I needed to test a private method, the class had the wrong responsibilities, so I changed it.

The Single Responsibility Principle will help you in this matter. Consider this example:
internal class Boss
{
    private bool _notEnoughStaff;
    // initialized so GiveOrders/HireStaff don't hit a null reference
    private IList<Employee> _staff = new List<Employee>();

    public Boss(bool notEnoughStaff)
    {
        _notEnoughStaff = notEnoughStaff;
    }

    public void GiveOrders()
    {
        if (_notEnoughStaff)
            HireStaff();

        foreach (Employee employee in _staff)
        {
            employee.DoWork();
        }
    }

    private void HireStaff()
    {
        _staff.Add(new Employee());
    }
}

public class Employee
{
    public void DoWork()
    {
    }
}
In this case I see not one but two responsibilities: the Boss delegates work AND hires new staff. In this example, I would extract the private method HireStaff into a new class (let's call it HR) and inject it into the Boss class.
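A minimal sketch of that extraction might look like this (the IStaffingService name and its signature are placeholders I picked for illustration):
using System.Collections.Generic;

internal interface IStaffingService
{
    void HireStaff(IList<Employee> staff);
}

internal class HR : IStaffingService
{
    public void HireStaff(IList<Employee> staff)
    {
        staff.Add(new Employee());
    }
}

internal class Boss
{
    private readonly IStaffingService _hr;
    private readonly IList<Employee> _staff = new List<Employee>();
    private readonly bool _notEnoughStaff;

    public Boss(IStaffingService hr, bool notEnoughStaff)
    {
        _hr = hr;
        _notEnoughStaff = notEnoughStaff;
    }

    public void GiveOrders()
    {
        if (_notEnoughStaff)
            _hr.HireStaff(_staff);   // hiring is now a separate, independently testable responsibility

        foreach (Employee employee in _staff)
        {
            employee.DoWork();
        }
    }
}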
This is a very simplified example, but as you get more and more experienced in the TDD way of thinking, you will find that not many private methods have legitimacy.
Regards,
Morten

Related

Using optional singletons in OOP?

I'm writing a PCL in .NET and I have a wrapper class around HttpClient that loads an HtmlAgilityPack.HtmlDocument from a URI in multiple different methods. It is stateless, so I would really like to make it static, since in my opinion instantiating something with new gives the impression that it contains state. However, I have a couple of interfaces that I want it to implement, so it can't be static. This is where I thought of making it a singleton. Here are a few snippets from the code:
public class ConcurrentClient : IAsyncClient<HtmlDocument>
{
    private static readonly ConcurrentClient _Instance = new ConcurrentClient();

    private ConcurrentClient() { }

    public static ConcurrentClient Instance
    {
        get { return _Instance; }
    }

    public HtmlDocument LoadUri(string uri)
    {
        return LoadUriAsync(uri).Result;
    }

    // ...

    public async Task<HtmlDocument> LoadUriAsync(string uri,
        Encoding e, NetworkCredential creds, Action<HtmlDocument> prehandler)
    {
        // ...
    }
}
I'm wondering, though, if I should change the beginning parts to this:
private static readonly ConcurrentClient _SharedInstance = new ConcurrentClient();

public static ConcurrentClient SharedInstance
{
    get { return _SharedInstance; }
}
The reason for this is I'm not that sure about using the Singleton pattern, mainly because I've rarely seen it in use in other libraries (maybe WinRT's Application.Current?), and I think it would encourage users of my PCL to write coupled code, since it's much easier to just call ConcurrentClient.Instance everywhere than it is to pass it in as a parameter.
However, I do want to encourage the use of a shared instance because excluding the reasons above, there's very little point in calling new ConcurrentClient() because all it does is create more memory overhead. Also, I can't think of a better way to implement inheritance with methods that don't really rely on state.
Your singleton already implements 2 interfaces. The real question is: where are the dependencies on this singleton, and why are they there?
If the answer is that these dependencies exist because they need to get to the implementation of those interfaces, then I would say that this is wrong.
The whole point of a SOLID design is to have dependencies on interfaces and not on any concrete implementation. So anyone who needs either of these 2 interfaces should be given them via dependency injection, which means the interfaces would be passed in through the constructor, via an extra parameter in a method call, a strategy pattern, ...
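For example, a consumer could take the interface through its constructor rather than reaching for ConcurrentClient.Instance. PageScraper is a made-up consumer, and I'm assuming the interface exposes a single-argument LoadUriAsync overload, as the LoadUri implementation above suggests:
using System.Threading.Tasks;
using HtmlAgilityPack;

public class PageScraper
{
    private readonly IAsyncClient<HtmlDocument> _client;

    // The dependency is the interface, not the concrete singleton.
    public PageScraper(IAsyncClient<HtmlDocument> client)
    {
        _client = client;
    }

    public async Task<HtmlDocument> LoadAsync(string uri)
    {
        // In unit tests this can be a mock; in production the container can still
        // hand every consumer the same shared ConcurrentClient instance.
        return await _client.LoadUriAsync(uri);
    }
}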
Also see this : http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx
There can be a reason to make a singleton, but based on your explanation this is not that clear.
Invest more of your time in using dependency injection. Once you have that under control, move one step further and investigate how you can use an inversion of control container.
Also, it's easy to forget DI and passing around the object as a parameter when you can just access it by Singleton.Instance.
You are forgetting about unit testing. If you pass interfaces to your class constructors you can easily mock those interfaces and test your class functionality. With your singleton in place, your classes really need that singleton. Unit testing will be harder.
Of course Instance is easy to access: it's a global, and since people revert back to old habits of programming against concrete objects all the time, that is why it is so popular.

Making a private method public to unit test it...good idea?

Moderator Note: There are already 39 answers posted here (some have been deleted). Before you post your answer, consider whether or not you can add something meaningful to the discussion. You're more than likely just repeating what someone else has already said.
I occasionally find myself needing to make a private method in a class public just to write some unit tests for it.
Usually this would be because the method contains logic shared between other methods in the class and it's tidier to test that logic on its own; another reason might be that I want to test logic used in synchronous threads without having to worry about threading problems.
Do other people find themselves doing this? I don't really like doing it. I personally think the bonuses outweigh the problems of making public a method which doesn't really provide any service outside of the class...
UPDATE
Thanks for the answers everyone, this seems to have piqued people's interest. I think the general consensus is that testing should happen via the public API, as this is the only way a class will ever be used, and I do agree with this. The couple of cases I mentioned above where I would do this were uncommon ones where I thought the benefits were worth it.
I can, however, see everyone's point that it should never really happen. And when thinking about it a bit more, I think changing your code to accommodate tests is a bad idea: after all, testing is a support tool in a way, and changing a system to 'support a support tool', if you will, is blatantly bad practice.
Note:
This answer was originally posted for the question Is unit testing alone ever a good reason to expose private instance variables via getters? which was merged into this one, so it may be a tad specific to the usecase presented there.
As a general statement, I'm usually all for refactoring "production" code to make it easier to test. However, I don't think that would be a good call here. A good unit test (usually) shouldn't care about the class' implementation details, only about its visible behavior. Instead of exposing the internal stacks to the test, you could test that the class returns the pages in the order you expect it to after calling first() or last().
For example, consider this pseudo-code:
public class NavigationTest {

    private Navigation nav;

    @Before
    public void setUp() {
        // Set up nav so the order is page1->page2->page3 and
        // we've moved back to page2
        nav = ...;
    }

    @Test
    public void testFirst() {
        nav.first();
        assertEquals("page1", nav.getPage());
        nav.next();
        assertEquals("page2", nav.getPage());
        nav.next();
        assertEquals("page3", nav.getPage());
    }

    @Test
    public void testLast() {
        nav.last();
        assertEquals("page3", nav.getPage());
        nav.previous();
        assertEquals("page2", nav.getPage());
        nav.previous();
        assertEquals("page1", nav.getPage());
    }
}
Personally, I'd rather unit test using the public API and I'd certainly never make the private method public just to make it easy to test.
If you really want to test the private method in isolation, in Java you could use Easymock / Powermock to allow you to do this.
You have to be pragmatic about it and you should also be aware of the reasons why things are difficult to test.
'Listen to the tests' - if it's difficult to test, is that telling you something about your design? Could you refactor to where a test for this method would be trivial and easily covered by testing through the public api?
Here's what Michael Feathers has to say in 'Working Effectively With Legacy Code"
"Many people spend a lot of time trying ot figure out how to get around this problem ... the real answer is that if you have the urge to test a private method, the method shouldn't be private; if making the method public bothers you, chances are, it is because it is part of a separate reponsibility; it should be on another class." [Working Effectively With Legacy Code (2005) by M. Feathers]
As others have said, it is somewhat suspect to be unit testing private methods at all; unit test the public interface, not the private implementation details.
That said, the technique I use when I want to unit test something that is private in C# is to downgrade the accessibility protection from private to internal, and then mark the unit testing assembly as a friend assembly using InternalsVisibleTo. The unit testing assembly will then be allowed to treat the internals as public, but you don't have to worry about accidentally adding to your public surface area.
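A rough sketch of that technique (the names here are hypothetical):
// In the production assembly, e.g. in AssemblyInfo.cs or any source file:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]

public class PriceEngine
{
    // internal instead of public: invisible to normal consumers of the assembly,
    // but callable from the MyProject.Tests assembly named above.
    internal decimal ApplyRounding(decimal value)
    {
        return System.Math.Round(value, 2, System.MidpointRounding.AwayFromZero);
    }
}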
Lots of answers suggest only testing the public interface, but IMHO this is unrealistic - if a method does something that takes 5 steps, you'll want to test those five steps separately, not all together. This requires testing all five methods, which (other than for testing) might otherwise be private.
The usual way of testing "private" methods is to give every class its own interface, and make the "private" methods public, but not include them in the interface. This way, they can still be tested, but they don't bloat the interface.
Yes, this will result in file- and class-bloat.
Yes, this does make the public and private specifiers redundant.
Yes, this is a pain in the ass.
This is, unfortunately, one of the many sacrifices we make to make code testable. Perhaps a future language (or even a future version of C#/Java) will have features to make class- and module-level testability more convenient; but in the meantime, we have to jump through these hoops.
There are some who would argue that each of those steps should be its own class, but I disagree - if they all share state, there is no reason to create five separate classes where five methods would do. Even worse, this results in file- and class-bloat. Plus, it infects the public API of your module - all those classes must necessarily be public if you want to test them from another module (either that, or include the test code in the same module, which means shipping the test code with your product).
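A small sketch of the "class interface" approach described above (TextAnalyzer is a name I made up for illustration):
using System;
using System.Collections.Generic;
using System.Linq;

public interface ITextAnalyzer
{
    int CountSentences(string text);   // the only member normal callers are given
}

public class TextAnalyzer : ITextAnalyzer
{
    public int CountSentences(string text)
    {
        return SplitSentences(text).Count;
    }

    // A "private" step made public on the class so tests can exercise it directly,
    // but deliberately left off ITextAnalyzer so callers coding against the
    // interface never see it.
    public IList<string> SplitSentences(string text)
    {
        return text.Split(new[] { '.', '!', '?' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(s => s.Trim())
                   .Where(s => s.Length > 0)
                   .ToList();
    }
}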
A unit test should test the public contract, the only way how a class could be used in other parts of the code. A private method is implementation details, you should not test it; as far as public API works correctly, the implementation doesn't matter and could be changed without changes in test cases.
IMO, you should write your tests without making deep assumptions about how your class is implemented internally. You may want to refactor it later to use another internal model while still making the same guarantees that the previous implementation gave.
Keeping that in mind, I suggest you focus on testing that your contract still holds no matter what internal implementation your class currently has: property-based testing of your public APIs.
How about making it package private? Then your test code can see it (and other classes in your package as well), but it is still hidden from your users.
But really, you should not be testing private methods. Those are implementation details, and not part of the contract. Everything they do should be covered by calling the public methods (if they have code in there that is not exercised by the public methods, then that should go). If the private code is too complex, the class is probably doing too many things and in want of refactoring.
Making a method public is a big commitment. Once you do that, people will be able to use it, and you cannot just change it anymore.
Update: I have added a more expanded and more complete answer to this question in numerous other places. This can be found on my blog.
If I ever need to make something public to test it, this usually hints that the system under test is not following the Single Responsibility Principle. Hence there is a missing class that should be introduced. After extracting the code into a new class, make it public. Now you can test it easily, and you are following SRP. Your other class simply has to invoke this new class via composition.
Making methods public, or using language tricks such as marking code as visible to test assemblies, should always be a last resort.
For example:
public class SystemUnderTest
{
    public void DoStuff()
    {
        // Blah
        // Call Validate()
    }

    private void Validate()
    {
        // Several lines of complex code...
    }
}
Refactor this by introducing a validator object.
public class SystemUnderTest
{
    // the extracted validation abstraction, injected via the constructor
    private readonly IValidator validator;

    public SystemUnderTest(IValidator validator)
    {
        this.validator = validator;
    }

    public void DoStuff()
    {
        // Blah
        validator.Invoke(..)
    }
}
Now all we have to do is test that the validator is invoked correctly. The actual process of validation (the previously private logic) can be tested in pure isolation. There will be no need for complex test set up to ensure this validation passes.
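A test sketch of that collaboration check might look like the following, assuming a mocking library such as Moq and NUnit, and, purely for illustration, a parameterless IValidator.Invoke():
using Moq;
using NUnit.Framework;

// Hypothetical shape of the extracted abstraction; the real Invoke would take
// whatever data the original Validate() needed.
public interface IValidator
{
    void Invoke();
}

[TestFixture]
public class SystemUnderTestTests
{
    [Test]
    public void DoStuff_invokes_the_validator()
    {
        var validator = new Mock<IValidator>();
        var sut = new SystemUnderTest(validator.Object);

        sut.DoStuff();

        // We only assert the collaboration here; the validation rules themselves
        // are covered by the validator's own tests, in isolation.
        validator.Verify(v => v.Invoke(), Times.Once);
    }
}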
Some great answers. One thing I didn't see mentioned is that with test-driven development (TDD), private methods are created during the refactoring phase (look at Extract Method for an example of a refactoring pattern), and should therefore already have the necessary test coverage. If done correctly (and of course, you're going to get a mixed bag of opinions when it comes to correctness), you shouldn't have to worry about having to make a private method public just so that you can test it.
If you are using C# you can make the method internal. That way you don't pollute the public API.
Then add this attribute to the assembly:
[assembly: InternalsVisibleTo("MyTestAssembly")]
Now all the internal methods are visible to your MyTestAssembly project. Maybe not perfect, but better than making a private method public just to test it.
Why not split out the stack management algorithm into a utility class? The utility class can manage the stacks and provide public accessors. Its unit tests can be focussed on implementation detail. Deep tests for algorithmically tricky classes are very helpful in wrinkling out edge cases and ensuring coverage.
Then your current class can cleanly delegate to the utility class without exposing any implementation details. Its tests will relate to the pagination requirement as others have recommended.
In Java, there's also the option of making it package-private (i.e. leaving off the visibility modifier). If your unit tests are in the same package as the class being tested, they should be able to see these methods, and this is a bit safer than making the method completely public.
Private methods are usually used as "helper" methods. Therefore they only return basic values and never operate on specific instances of objects.
You have a couple of options if you want to test them.
Use reflection
Give the methods package access
Alternatively you could create a new class with the helper method as a public method if it is a good enough candidate for a new class.
There is a very good article here on this.
Use reflection to access the private variables if you need to.
But really, you don't care about the internal state of the class, you just want to test that the public methods return what you expect in the situations you can anticipate.
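For example, in C# a private method can be reached via reflection roughly like this (Calculator and Normalize are made-up names used only for the sketch):
using System;
using System.Reflection;

public class Calculator
{
    private int Normalize(int value) => Math.Abs(value) % 100;
}

public static class ReflectionExample
{
    public static void Main()
    {
        var calc = new Calculator();

        // Look up the private instance method by name.
        MethodInfo normalize = typeof(Calculator).GetMethod(
            "Normalize", BindingFlags.NonPublic | BindingFlags.Instance);

        // Invoke it against the instance; the cost is that renaming the method
        // breaks this call at runtime, not at compile time.
        int result = (int)normalize.Invoke(calc, new object[] { 142 });
        Console.WriteLine(result); // 42
    }
}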
In your update you say that it's good to just test using the public API.
There are actually two schools of thought here.
Black box testing
The black box school says that the class should be considered a black box whose implementation no one can see. The only way to test it is through the public API, just like the users of the class will use it.
White box testing
The white box school thinks it natural to use knowledge about the implementation of the class and then test the class to know that it works as it should.
I really cannot take side in the discussion. I just thought it would be interesting to know that there are two distinct ways to test a class (or a library or whatever).
In terms of unit testing, you should definitely not add more methods; I believe you would be better off making a test case around your first() method, called before each test; then you can call next(), previous() and last() multiple times to see if the outcomes match your expectations.
I guess if you don't add more methods to your class (just for testing purposes), you stick to the "black box" principle of testing.
You should never, ever let your tests dictate your code. I'm not speaking about TDD or other DDs; I mean exactly what you're asking. Does your app need those methods to be public? If it does, then test them. If it does not, then don't make them public just for testing. The same goes for variables and the rest. Let your application's needs dictate the code, and let your tests test that the need is met. (Again, I don't mean testing first or not; I mean changing a class's structure to meet a testing goal.)
Instead you should "test higher": test the method that calls the private method. But your tests should be testing your application's needs, not your "implementation decisions".
For example (bad pseudo-code here):
public int books(int a) {
    return add(a, 2);
}

private int add(int a, int b) {
    return a + b;
}
There is no reason to test "add"; you can test "books" instead.
Never ever let your tests make code design decisions for you. Test that you get the expected result, not how you get that result.
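A sketch of testing only through the public method (the class name and NUnit usage are assumptions made for illustration):
using NUnit.Framework;

[TestFixture]
public class LibraryTests
{
    [Test]
    public void Books_adds_two_to_the_input()
    {
        var sut = new Library();          // hypothetical class holding books()/add()

        // Exercises the private add() indirectly, through the public method.
        Assert.AreEqual(5, sut.Books(3));
    }
}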
I would say it is a bad idea, as I am not sure whether you get any benefit and it can potentially cause problems down the line. If you are changing the contract of a class just to test a private method, you're not testing the class in the way it would be used, but creating an artificial scenario which you never intended to happen.
Furthermore, by declaring the method as public, what's to say that in six months' time (after everyone has forgotten that the only reason for making the method public was testing) somebody completely different won't use it, leading to potentially unintended consequences and/or a maintenance nightmare?
First see if the method ought to be extracted into another class and made public. If that's not the case, make it package-private and, in Java, annotate it with @VisibleForTesting.
Private methods that you want to test in isolation are an indication that there's another "concept" buried in your class. Extract that "concept" to its own class and test it as a separate "unit".
Take a look at this video for a really interesting take on the subject.
There are actually situations when you should do this (e.g. when you're implementing some complex algorithms). Just make it package-private and this will be enough.
But in most cases you probably have classes that are too complex and that require factoring logic out into other classes.
I tend to agree that the bonuses of having it unit tested outweigh the problems of increasing the visibility of some of the members. A slight improvement is to make it protected and virtual, then override it in a test class to expose it.
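A sketch of that protected-and-virtual trick (names invented for illustration):
public class PriceCalculator
{
    // Was private; made protected virtual so a test-only subclass can reach it.
    protected virtual decimal ApplyDiscount(decimal price)
    {
        return price * 0.9m;
    }
}

// Lives in the test project; exists purely to expose the member to tests.
internal class TestablePriceCalculator : PriceCalculator
{
    public decimal ApplyDiscountForTest(decimal price)
    {
        return ApplyDiscount(price);
    }
}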
Alternatively, if it's functionality you want to test separately does it not suggest a missing object from your design? Maybe you could put it in a separate testable class...then your existing class just delegates to an instance of this new class.
I generally keep the test classes in the same project/assembly as the classes under test.
This way I only need internal visibility to make functions/classes testable.
This somewhat complicates your building process, which needs to filter out the test classes.
I achieve this by naming all my test classes TestedClassTest and using a regex to filter those classes.
This of course only applies to the C# / .NET part of your question
I will often add a method called something like validate, verify, check, etc, to a class so that it can be called to test the internal state of an object.
Sometimes this method is wrapped in an ifdef block (I write mostly in C++) so that it isn't compiled for release. But it's often useful in release to provide validation methods that walk the program's object trees checking things.
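In C#, the debug-only variant of this idea can be expressed with conditional compilation instead of a C++ ifdef; a minimal sketch, with OrderBook as a made-up class:
public class OrderBook
{
    // ... normal members ...

#if DEBUG
    // Compiled only into debug builds; walks internal state and can throw if an
    // invariant is broken. Release builds never ship this method.
    public void Verify()
    {
        // e.g. check that internal collections are consistent
    }
#endif
}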
Guava has a @VisibleForTesting annotation for marking methods that have a larger scope (package or public) than they would otherwise have. I use a @Private annotation for the same thing.
While the public API must be tested, sometimes it's convenient and sensible to get at stuff that wouldn't normally be public.
When:
a class is made significantly less readable, in toto, by breaking it up into multiple classes,
just to make it more testable,
and providing some test access to the innards would do the trick
it seems like religion is trumping engineering.
I usually leave those methods as protected and place the unit test within the same package (but in another project or source folder), where they can access all the protected methods because the class loader will place them within the same namespace.
No, because there are better ways of skinning that cat.
Some unit test harnesses rely on macros in the class definition which automagically expand to create hooks when built in test mode. Very C style, but it works.
An easier OO idiom is to make anything you want to test "protected" rather than "private". The test harness inherits from the class under test, and can then access all protected members.
Or you go for the "friend" option. Personally this is the feature of C++ I like least because it breaks the encapsulation rules, but it happens to be necessary for how C++ implements some features, so hey ho.
Anyway, if you're unit testing then you're just as likely to need to inject values into those members. White box testing is perfectly valid. And that really would break your encapsulation.
In .NET there is a special class called PrivateObject designed specifically to allow you to access private methods of a class.
See more about it on the MSDN or here on Stack Overflow
(I am surprised that no one has mentioned it so far.)
There are situations though which this is not enough, in which case you will have to use reflection.
Still I would adhere to the general recommendation not to test private methods, however as usual there are always exceptions.
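A hedged usage sketch (PrivateObject ships with MSTest; exact availability depends on the MSTest version, and TextCleaner/Normalize are made-up names):
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TextCleaner
{
    private string Normalize(string input) => input.Trim();
}

[TestClass]
public class TextCleanerTests
{
    [TestMethod]
    public void Normalize_trims_whitespace()
    {
        var sut = new TextCleaner();
        var accessor = new PrivateObject(sut);   // wraps the instance for reflection-based access

        var result = (string)accessor.Invoke("Normalize", "  hello  ");

        Assert.AreEqual("hello", result);
    }
}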
A very well-answered question.
IMHO, the excellent answer from @BlueRaja - Danny Pflughoeft is one of the best:
Lots of answers suggest only testing the public interface, but IMHO this is unrealistic - if a method does something that takes 5 steps, you'll want to test those five steps separately, not all together. This requires testing all five methods, which (other than for testing) might otherwise be private.
Above all, I want to stress that the question "should we make a private method public to unit test it" is one whose objectively correct answer depends on multiple parameters.
So I think that in some cases we should not, and in others we should.
Making a private method public, or extracting it as a public method in another class (new or existing)?
It is rarely the best way.
A unit test has to test the behavior of one API method/function.
If you test a public method that invokes another public method belonging to the same component, you don't unit test the method. You test multiple public methods at the same time.
As a consequence, you may duplicate tests, test fixtures, test assertions, the test maintenance and more generally the application design.
As the tests value decreases, they often lose interest for developers that write or maintain them.
To avoid all this duplication, instead of making the private method public, in many cases a better solution is extracting the private method as a public method in a new or an existing class.
It will not create a design defect.
It will make the code more meaningful and the class less bloated.
Besides, sometimes the private method is a routine/subset of the class whose behavior fits better in a specific structure.
Finally, it also makes the code more testable and avoids test duplication.
We can indeed prevent test duplication by unit testing the public method in its own test class; in the test classes of the client classes, we just have to mock the dependency.
Mocking private methods?
While it is possible by using reflection or with tools as PowerMock, I think that it is often a way to bypass a design issue.
A private member is not designed to be exposed to other classes.
A test class is a class like any other, so we should apply the same rule to it.
Mocking public methods of the object under test?
You may want to change the modifier from private to public to test the method.
Then, to test a method that uses this refactored public method, you may be tempted to mock the refactored public method by using tools such as Mockito (the spy concept), but similarly to mocking private methods, we should avoid mocking the object under test.
The Mockito.spy() documentation says it itself:
Creates a spy of the real object. The spy calls real methods unless they are stubbed.
Real spies should be used carefully and occasionally, for example when dealing with legacy code.
In my experience, using spy() generally decreases the test quality and its readability.
Besides, it is much more error-prone as the object under test is both a mock and a real object.
It is often the best way to write an invalid acceptance test.
Here is a guideline I use to decide whether a private method should stay private or be refactored.
Case 1) Never make a private method public if it is invoked by only one method.
It is a private method serving a single method, so you can never duplicate test logic, as it is invoked in only one place.
Case 2) You should consider whether a private method should be refactored into a public method if the private method is invoked more than once.
How to decide?
The private method doesn't produce duplication in the tests.
-> Keep the method private as it is.
The private method produces duplication in the tests. That is, you need to repeat some tests to assert the same logic in each test that unit-tests a public method using the private method.
-> If the repeated processing may be part of the API provided to clients (no security issue, no internal processing, etc.), extract the private method as a public method in a new class.
-> Otherwise, if the repeated processing should not be part of the API provided to clients (security issue, internal processing, etc.), don't widen the visibility of the private method to public.
You may leave it unchanged or move the method into a package-private class that will never be part of the API and would never be accessible by clients.
Code examples
The examples rely on Java and the following libraries : JUnit, AssertJ (assertion matcher) and Mockito.
But I think that the overall approach is also valid for C#.
1) Example where the private method doesn't create duplication in the test code
Here is a Computation class that provides methods to perform some computations.
All public methods use the mapToInts() method.
public class Computation {

    public int add(String a, String b) {
        int[] ints = mapToInts(a, b);
        return ints[0] + ints[1];
    }

    public int minus(String a, String b) {
        int[] ints = mapToInts(a, b);
        return ints[0] - ints[1];
    }

    public int multiply(String a, String b) {
        int[] ints = mapToInts(a, b);
        return ints[0] * ints[1];
    }

    private int[] mapToInts(String a, String b) {
        return new int[] { Integer.parseInt(a), Integer.parseInt(b) };
    }
}
Here is the test code :
public class ComputationTest {

    private Computation computation = new Computation();

    @Test
    public void add() throws Exception {
        Assert.assertEquals(7, computation.add("3", "4"));
    }

    @Test
    public void minus() throws Exception {
        Assert.assertEquals(2, computation.minus("5", "3"));
    }

    @Test
    public void multiply() throws Exception {
        Assert.assertEquals(100, computation.multiply("20", "5"));
    }
}
We can see that the invocation of the private method mapToInts() doesn't duplicate the test logic.
It is an intermediary operation, and it doesn't produce a specific result that we need to assert in the tests.
2) Example where the private method creates undesirable duplication in the test code
Here is a MessageService class that provides methods to create messages.
All public methods use the createHeader() method :
public class MessageService {

    public Message createMessage(String message, Credentials credentials) {
        Header header = createHeader(credentials, message, false);
        return new Message(header, message);
    }

    public Message createEncryptedMessage(String message, Credentials credentials) {
        Header header = createHeader(credentials, message, true);
        // specific processing to encrypt
        // ......
        return new Message(header, message);
    }

    public Message createAnonymousMessage(String message) {
        Header header = createHeader(Credentials.anonymous(), message, false);
        return new Message(header, message);
    }

    private Header createHeader(Credentials credentials, String message, boolean isEncrypted) {
        return new Header(credentials, message.length(), LocalDate.now(), isEncrypted);
    }
}
Here is the test code :
import java.time.LocalDate;

import org.assertj.core.api.Assertions;
import org.junit.Test;

import junit.framework.Assert;

public class MessageServiceTest {

    private MessageService messageService = new MessageService();

    @Test
    public void createMessage() throws Exception {
        final String inputMessage = "simple message";
        final Credentials inputCredentials = new Credentials("user", "pass");

        Message actualMessage = messageService.createMessage(inputMessage, inputCredentials);

        // assertion
        Assert.assertEquals(inputMessage, actualMessage.getMessage());
        Assertions.assertThat(actualMessage.getHeader())
                  .extracting(Header::getCredentials, Header::getLength, Header::getDate, Header::isEncryptedMessage)
                  .containsExactly(inputCredentials, 9, LocalDate.now(), false);
    }

    @Test
    public void createEncryptedMessage() throws Exception {
        final String inputMessage = "encryted message";
        final Credentials inputCredentials = new Credentials("user", "pass");

        Message actualMessage = messageService.createEncryptedMessage(inputMessage, inputCredentials);

        // assertion
        Assert.assertEquals("AƧ4B36ddflm1Dkok49d1d9gaz", actualMessage.getMessage());
        Assertions.assertThat(actualMessage.getHeader())
                  .extracting(Header::getCredentials, Header::getLength, Header::getDate, Header::isEncryptedMessage)
                  .containsExactly(inputCredentials, 9, LocalDate.now(), true);
    }

    @Test
    public void createAnonymousMessage() throws Exception {
        final String inputMessage = "anonymous message";

        Message actualMessage = messageService.createAnonymousMessage(inputMessage);

        // assertion
        Assert.assertEquals(inputMessage, actualMessage.getMessage());
        Assertions.assertThat(actualMessage.getHeader())
                  .extracting(Header::getCredentials, Header::getLength, Header::getDate, Header::isEncryptedMessage)
                  .containsExactly(Credentials.anonymous(), 9, LocalDate.now(), false);
    }
}
We can see that the invocation of the private method createHeader() creates some duplication in the test logic.
createHeader() indeed produces a specific result that we need to assert in the tests.
We assert the header content 3 times, while a single assertion should be required.
We can also note that the asserted duplication is similar between the methods but not necessarily the same, as the private method has its own specific logic.
Of course, we could have more differences depending on the complexity of the private method's logic.
Besides, each time we add a new public method to MessageService that calls createHeader(), we will have to add this assertion.
Note also that if createHeader() changes its behavior, all these tests may also need to be modified.
Definitely, this is not a very good design.
Refactoring step
Suppose we are in a case where it is acceptable for createHeader() to be part of the API.
We will start by refactoring the MessageService class by changing the access modifier of createHeader() to public :
public Header createHeader(Credentials credentials, String message, boolean isEncrypted) {
    return new Header(credentials, message.length(), LocalDate.now(), isEncrypted);
}
We can now unit test this method:
@Test
public void createHeader_with_encrypted_message() throws Exception {
    ...
    boolean isEncrypted = true;

    // action
    Header actualHeader = messageService.createHeader(credentials, message, isEncrypted);

    // assertion
    Assertions.assertThat(actualHeader)
              .extracting(Header::getCredentials, Header::getLength, Header::getDate, Header::isEncryptedMessage)
              .containsExactly(Credentials.anonymous(), 9, LocalDate.now(), true);
}

@Test
public void createHeader_with_not_encrypted_message() throws Exception {
    ...
    boolean isEncrypted = false;

    // action
    Header actualHeader = messageService.createHeader(credentials, message, isEncrypted);

    // assertion
    Assertions.assertThat(actualHeader)
              .extracting(Header::getCredentials, Header::getLength, Header::getDate, Header::isEncryptedMessage)
              .containsExactly(Credentials.anonymous(), 9, LocalDate.now(), false);
}
But what about the tests we wrote previously for the public methods of the class that use createHeader()?
Not many differences.
In fact, we still have a problem, as these public methods still need to be tested for the returned header value.
If we remove these assertions, we may not detect regressions in it.
We should be able to isolate this processing naturally, but we cannot, as the createHeader() method belongs to the tested component.
That's why I explained at the beginning of my answer that in most cases we should favor extracting the private method into another class over changing its access modifier to public.
So we introduce HeaderService :
public class HeaderService {

    public Header createHeader(Credentials credentials, String message, boolean isEncrypted) {
        return new Header(credentials, message.length(), LocalDate.now(), isEncrypted);
    }
}
And we migrate the createHeader() tests in HeaderServiceTest.
Now MessageService is defined with a HeaderService dependency:
public class MessageService {

    private HeaderService headerService;

    public MessageService(HeaderService headerService) {
        this.headerService = headerService;
    }

    public Message createMessage(String message, Credentials credentials) {
        Header header = headerService.createHeader(credentials, message, false);
        return new Message(header, message);
    }

    public Message createEncryptedMessage(String message, Credentials credentials) {
        Header header = headerService.createHeader(credentials, message, true);
        // specific processing to encrypt
        // ......
        return new Message(header, message);
    }

    public Message createAnonymousMessage(String message) {
        Header header = headerService.createHeader(Credentials.anonymous(), message, false);
        return new Message(header, message);
    }
}
And in the MessageService tests, we no longer need to assert each header value, as this is already tested.
We just want to ensure that Message.getHeader() returns what HeaderService.createHeader() has returned.
For example, here is the new version of the createMessage() test:
@Test
public void createMessage() throws Exception {
    final String inputMessage = "simple message";
    final Credentials inputCredentials = new Credentials("user", "pass");
    final Header fakeHeaderForMock = createFakeHeader();
    Mockito.when(headerService.createHeader(inputCredentials, inputMessage, false))
           .thenReturn(fakeHeaderForMock);

    // action
    Message actualMessage = messageService.createMessage(inputMessage, inputCredentials);

    // assertion
    Assert.assertEquals(inputMessage, actualMessage.getMessage());
    Assert.assertSame(fakeHeaderForMock, actualMessage.getHeader());
}
Note the use of assertSame() to compare the object references of the headers rather than their contents.
Now HeaderService.createHeader() may change its behavior and return different values; it doesn't matter from the point of view of the MessageService tests.
The point of the unit test is to confirm the workings of the public API for that unit. There should be no need to expose a private method only for testing; if there is, then your interface should be rethought. Private methods can be thought of as 'helper' methods for the public interface, and therefore they are tested via the public interface, which calls into them.
The only reason I can see that you would 'need' to do this is that your class was not designed with testing in mind.

Help designing an order manager class

So I have an order manager class that looks like:
public class OrderManager
{
    private IDBFactory _dbFactory;
    private Order _order;

    public OrderManager(IDBFactory dbFactory)
    {
        _dbFactory = dbFactory;
    }

    public void Calculate()
    {
        // compute, in some way:
        // _order.SubTotal
        // _order.ShippingTotal
        // _order.TaxTotal
        // _order.GrandTotal
    }
}
Now, the point here is to have a flexible/testable design.
I am very concerned about being able to write solid unit tests around this Calculate method.
Considerations:
1. Shipping has to be abstracted out and loosely coupled, since the implementation of shipping could vary depending on USPS, UPS, FedEx, etc. (they have their own APIs).
2. The same goes for calculating tax.
Should I just create Tax and Shipping Manager classes, and have a tax/shipping factory in the constructor (exactly how I have designed my OrderManager class)?
(the only thing that I can think of, in terms of what I am "missing", is IoC, but I don't mind that and don't need that extra level of abstraction in my view).
Well, you are already moving towards dependency injection in your approach, so why not go the whole hog and use some sort of IoC container to handle this for you?
Yes, if you want it abstracted out, then create a separate class for it. If you want to truly unit test what is left, abstract out an interface and use mock testing. The problem is, the more you abstract out like this, the more plumbing together there is to do, and the more you will find yourself wishing you were using an IoC framework of some kind.
You are suggesting constructor injection, which is a common approach. You also come across property injection (parameterless constructor, set properties instead). And there are also frameworks that ask you to implement an initialization interface of some kind that allows the IoC framework to do the initialization for you in a method call. Use whatever you feel most comfortable with.
I do think an IoC container would help with the plumbing of instantiating the correct concrete classes, but you still need to get your design the way you want it. I do think you need to abstract away the shipping with an interface that you can implement with a class for each of your shippers (USPS, UPS, FedEx, etc.), and you could use a factory class (ShippingManager) to hand the correct one out, or depend on the IoC container to do that for you.
public interface IShipper
{
    // whatever goes into calculating shipping.....
    decimal CalculateShippingCost(GeoData geo, decimal packageWeight);
}
You could also just inject IShipper and ITaxer implementations into your OrderManager, and your Calculate method would simply call into those classes... an IoC container can handle that wiring nicely.
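A rough sketch of that wiring (the ITaxer shape and the Order properties used here are my own assumptions for illustration; the question doesn't show the real Order type):
public interface ITaxer
{
    // whatever goes into calculating tax.....
    decimal CalculateTax(decimal subTotal, GeoData geo);
}

public class OrderManager
{
    private readonly IShipper _shipper;
    private readonly ITaxer _taxer;
    private Order _order;

    public OrderManager(IShipper shipper, ITaxer taxer)
    {
        _shipper = shipper;
        _taxer = taxer;
    }

    public void Calculate()
    {
        // Assumed Order members (Destination, Weight, totals); both collaborators
        // can be mocked in unit tests of Calculate.
        _order.ShippingTotal = _shipper.CalculateShippingCost(_order.Destination, _order.Weight);
        _order.TaxTotal = _taxer.CalculateTax(_order.SubTotal, _order.Destination);
        _order.GrandTotal = _order.SubTotal + _order.ShippingTotal + _order.TaxTotal;
    }
}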
Just a thought:
Your Calculate() method taking no parameters, returning nothing and acting on private fields is not how I would do it. I would write it as a static method that takes in some numbers, an IShippingProvider and an ITaxJurisdiction and returns a dollar total. That way you have an opportunity to cache the expensive calls to UPS and your tax tables using memoization.
Could be that I'm prejudiced against public methods that work like that. They have burned me in the past trying to bind to controls, use code generators, etc.
EDIT: as for dependency injection/IOC, I don't see the need. This is what interfaces were made for. You're not going to be loading up a whole array of wacky classes, just some implementations of the same weight/zipcode combo.
That's what I would say if I were your boss.
I would take the Calculate method out into a class. Depending on your circumstances OrderCalculator might need to be aware of VAT, Currency, Discounts, ...
Just a thought.

C# has abstract classes and interfaces, should it also have "mixins"?

Every so often, I run into a case where I want a collection of classes all to possess similar logic. For example, maybe I want both a Bird and an Airplane to be able to Fly(). If you're thinking "strategy pattern", I would agree, but even with strategy, it's sometimes impossible to avoid duplicating code.
For example, let's say the following apply (and this is very similar to a real situation I recently encountered):
Both Bird and Airplane need to hold an instance of an object that implements IFlyBehavior.
Both Bird and Airplane need to ask the IFlyBehavior instance to Fly() when OnReadyToFly() is called.
Both Bird and Airplane need to ask the IFlyBehavior instance to Land() when OnReadyToLand() is called.
OnReadyToFly() and OnReadyToLand() are private.
Bird inherits Animal and Airplane inherits PeopleMover.
Now, let's say we later add Moth, HotAirBalloon, and 16 other objects, and let's say they all follow the same pattern.
We're now going to need 20 copies of the following code:
private IFlyBehavior _flyBehavior;

private void OnReadyToFly()
{
    _flyBehavior.Fly();
}

private void OnReadyToLand()
{
    _flyBehavior.Land();
}
Two things I don't like about this:
It's not very DRY (the same nine lines of code are repeated over and over again). If we discovered a bug or added a BankRight() to IFlyBehavior, we would need to propagate the changes to all 20 classes.
There's not any way to enforce that all 20 classes implement this repetitive internal logic consistently. We can't use an interface because interfaces only permit public members. We can't use an abstract base class because the objects already inherit base classes, and C# doesn't allow multiple inheritance (and even if the classes didn't already inherit classes, we might later wish to add a new behavior that implements, say, ICrashable, so an abstract base class is not always going to be a viable solution).
What if...?
What if C# had a new construct, say pattern or template or [fill in your idea here], that worked like an interface, but allowed you to put private or protected access modifiers on the members? You would still need to provide an implementation for each class, but if your class implemented the PFlyable pattern, you would at least have a way to enforce that every class had the necessary boilerplate code to call Fly() and Land(). And, with a modern IDE like Visual Studio, you'd be able to automatically generate the code using the "Implement Pattern" command.
Personally, I think it would make more sense to just expand the meaning of interface to cover any contract, whether internal (private/protected) or external (public), but I suggested adding a whole new construct first because people seem to be very adamant about the meaning of the word "interface", and I didn't want semantics to become the focus of people's answers.
Questions:
Regardless of what you call it, I'd like to know whether the feature I'm suggesting here makes sense. Do we need some way to handle cases where we can't abstract away as much code as we'd like, due to the need for restrictive access modifiers or for reasons outside of the programmer's control?
Update
From AakashM's comment, I believe there is already a name for the feature I'm requesting: a Mixin. So, I guess my question can be shortened to: "Should C# allow Mixins?"
The problem you describe could be solved using the Visitor pattern (everything can be solved using the Visitor pattern, so beware! )
The visitor pattern lets you move the implementation logic towards a new class. That way you do not need a base class, and a visitor works extremely well over different inheritance trees.
To sum up:
New functionality does not need to be added to all different types
The call to the visitor can be pulled up to the root of each class hierarchy
For a reference, see the Visitor pattern
Can't we use extension methods for this?
public static void OnReadyToFly(this IFlyBehavior flyBehavior)
{
    flyBehavior.Fly();
}
This mimics the functionality you wanted (or Mixins)
Visual Studio already offers this in 'poor mans form' with code snippets. Also, with the refactoring tools a la ReSharper (and maybe even the native refactoring support in Visual Studio), you get a long way in ensuring consistency.
[EDIT: I didn't think of Extension methods, this approach brings you even further (you only need to keep the _flyBehaviour as a private variable). This makes the rest of my answer probably obsolete...]
However; just for the sake of the discussion: how could this be improved? Here's my suggestion.
One could imagine something like the following to be supported by a future version of the C# compiler:
// keyword 'pattern' marks the code as eligible for inclusion in other classes
pattern WithFlyBehaviour
{
    private IFlyBehavior _flyBehavior;

    private void OnReadyToFly()
    {
        _flyBehavior.Fly();
    }

    [patternmethod]
    private void OnReadyToLand()
    {
        _flyBehavior.Land();
    }
}
Which you could use then something like:
// probably the attribute syntax can not be reused here, but you get the point
[UsePattern(FlyBehaviour)]
class FlyingAnimal
{
    public void SetReadyToFly(bool ready)
    {
        _readyToFly = ready;
        if (ready) OnReadyToFly(); // OnReadyToFly() callable, although not explicitly present in FlyingAnimal
    }
}
Would this be an improvement? Probably. Is it really worth it? Maybe...
You just described aspect oriented programming.
One popular AOP implementation for C# seems to be PostSharp (Main site seems to be down/not working for me though, this is the direct "About" page).
To follow up on the comment: I'm not sure if PostSharp supports it, but I think you are talking about this part of AOP:
Inter-type declarations provide a way to express crosscutting concerns affecting the structure of modules. Also known as open classes, this enables programmers to declare in one place members or parents of another class, typically in order to combine all the code related to a concern in one aspect.
Could you get this sort of behavior by using the new ExpandoObject in .NET 4.0?
Scala traits were developed to address this kind of scenario. There's also some research to include traits in C#.
UPDATE: I created my own experiment to have roles in C#. Take a look.
I would use extension methods to implement the behaviour, as the code below shows.
Let the Bird and Airplane objects expose an IFlyBehavior property through an IFlyer interface:
public interface IFlyer
{
    IFlyBehavior FlyBehavior { get; set; }
}

public class Bird : IFlyer
{
    public IFlyBehavior FlyBehavior { get; set; }
}

public class Airplane : IFlyer
{
    public IFlyBehavior FlyBehavior { get; set; }
}
Create an extension class for IFlyer
public static class IFlyerExtensions
{
    public static void OnReadyToFly(this IFlyer flyer)
    {
        flyer.FlyBehavior.Fly();
    }

    public static void OnReadyToLand(this IFlyer flyer)
    {
        flyer.FlyBehavior.Land();
    }
}

Good Case For Interfaces

I work at a company where some require justification for the use of an Interface in our code (Visual Studio C# 3.5).
I would like to ask for iron-clad reasoning that interfaces are required. (My goal is to PROVE that interfaces are a normal part of programming.)
I don't need convincing, I just need a good argument to use in the convincing of others.
The kind of argument I am looking for is fact based, not comparison based (ie "because the .NET library uses them" is comparison based.)
The argument against them is thus: If a class is properly setup (with its public and private members) then an interface is just extra overhead because those that use the class are restricted to public members. If you need to have an interface that is implemented by more than 1 class then just setup inheritance/polymorphism.
Code decoupling. By programming to interfaces you decouple the code using the interface from the code implementing the interface. This allows you to change the implementation without having to refactor all of the code using it. This works in conjunction with inheritance/polymorphism, allowing you to use any of a number of possible implementations interchangeably.
Mocking and unit testing. Mocking frameworks are most easily used when the methods are virtual, which you get by default with interfaces. This is actually the biggest reason why I create interfaces.
Defining behavior that may apply to many different classes that allows them to be used interchangeably, even when there isn't a relationship (other than the defined behavior) between the classes. For example, a Horse and a Bicycle class may both have a Ride method. You can define an interface IRideable that defines the Ride behavior and any class that uses this behavior can use either a Horse or Bicycle object without forcing an unnatural inheritance between them.
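A short sketch of that third point:
public interface IRideable
{
    void Ride();
}

public class Horse : IRideable
{
    public void Ride() { /* gallop */ }
}

public class Bicycle : IRideable
{
    public void Ride() { /* pedal */ }
}

public static class Tour
{
    // Works with a Horse, a Bicycle, or anything else that can be ridden,
    // without forcing an unnatural inheritance relationship between them.
    public static void Start(IRideable transport)
    {
        transport.Ride();
    }
}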
The argument against them is thus: If a class is properly setup (with its public and private members) then an interface is just extra overhead because those that use the class are restricted to public members. If you need to have an interface that is implemented by more than 1 class then just setup inheritance/polymorphism.
Consider the following code:
interface ICrushable
{
    void Crush();
}

public class Vehicle
{
}

public class Animal
{
}

public class Car : Vehicle, ICrushable
{
    public void Crush()
    {
        Console.WriteLine( "Crrrrrassssh" );
    }
}

public class Gorilla : Animal, ICrushable
{
    public void Crush()
    {
        Console.WriteLine( "Sqqqquuuuish" );
    }
}
Does it make any sense at all to establish a class hierarchy that relates Animals to Vehicles even though both can be crushed by my giant crushing machine? No.
In addition to the things explained in other answers, interfaces allow you to simulate multiple inheritance in .NET, which is otherwise not allowed.
Alas as someone said
Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.
To enable unit testing of the class.
To track dependencies efficiently (if the interface isn't checked out and touched, only the semantics of the class can possibly have changed).
Because there is no runtime overhead.
To enable dependency injection.
...and perhaps because it's friggin' 2009, not the 70's, and modern language designers actually have a clue about what they are doing?
Not that interfaces should be thrown at every class interface: just those which are central to the system, and which are likely to experience significant change and/or extension.
Interfaces and abstract classes model different things. You derive from a class when you have an isA relationship so the base class models something concrete. You implement an interface when your class can perform a specific set of tasks.
Think of something that's Serializable: it doesn't really make sense (from a design/modelling point of view) to have a base class called Serializable, because it doesn't make sense to say something isA Serializable. Having something implement a Serializable interface makes more sense, as it says 'this is something the class can do', not 'this is what the class is'.
Interfaces are not 'required' at all; it's a design decision. I think you need to convince yourself, on a case-by-case basis, why it is beneficial to use an interface, because there IS an overhead in adding one. On the other hand, to counter the argument against interfaces because you can 'simply' use inheritance: inheritance has its drawbacks. One of them is that, at least in C# and Java, you can only use inheritance once (single inheritance). The second, and maybe more important, is that inheritance requires you to understand the workings of not only the parent class but all of the ancestor classes, which makes extension harder and also more brittle, because a change in the parent class's implementation could easily break the subclasses. This is the crux of the "composition over inheritance" argument that the GoF book taught us.
You've been given a set of guidelines that your bosses have thought appropriate for your workplace and problem domain. So to be persuasive about changing those guidelines, it's not about proving that interfaces are a good thing in general, it's about proving that you need them in your workplace.
How do you prove that you need interfaces in the code you write in your workplace? By finding a place in your actual codebase (not in some code from somebody else's product, and certainly not in some toy example about Duck implementing the makeNoise method in IAnimal) where an interface-based solution is better than an inheritance-based solution. Show your bosses the problem you're facing, and ask whether it makes sense to modify the guidelines to accommodate situations like that. It's a teachable moment where everyone is looking at the same facts instead of hitting each other over the head with generalities and speculations.
The guideline seems to be driven by a concern about avoiding over-engineering and premature generalisation. So if you make an argument along the lines of "we should have an interface here just in case in future we have to...", it's well-intentioned, but for your bosses it sets off the same over-engineering alarm bells that motivated the guideline in the first place.
Wait until there's a good, objective case for it; that goes both for the programming techniques you use in production code and for the things you start arguments with your managers about.
Test Driven Development
Unit Testing
Without interfaces, producing decoupled code would be a pain. Best practice is to code against an interface rather than a concrete implementation. Interfaces seem rubbish at first, but once you discover the benefits you'll always use them.
You can implement multiple interfaces. You cannot inherit from multiple classes.
...that's it. The points others are making about code decoupling and test-driven development don't get to the crux of the matter, because you can do those things with abstract classes too.
Interfaces allow you to declare a concept that can be shared amongst many types (IEnumerable) while allowing each of those types to have its own inheritance hierarchy.
In this case, what we're saying is "this thing can be enumerated, but that is not its single defining characteristic".
Interfaces allow you to make the minimum number of decisions necessary when defining the capabilities of the implementer. When you create a class instead of an interface, you have already declared that your concept is class-only and not usable by structs. You also make other decisions when declaring members in a class, such as visibility and virtuality.
For example, you can make an abstract class with all public abstract members, and that is pretty close to an interface, but you have declared that concept as overridable in all child classes, whereas you wouldn't have to have made that decision if you used an interface.
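As a rough sketch of both points (the type names here are invented for illustration): Playlist lives in its own inheritance hierarchy and still advertises that it can be enumerated, while Money, being a struct, can't take part in class inheritance at all but can still implement an interface.
using System;
using System.Collections;
using System.Collections.Generic;

public class MediaItem { }

// Playlist has its own base class, yet still says "I can be enumerated".
public class Playlist : MediaItem, IEnumerable<string>
{
    private readonly List<string> _tracks = new List<string>();
    public void Add(string track) { _tracks.Add(track); }
    public IEnumerator<string> GetEnumerator() { return _tracks.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

// A struct cannot have a base class, but it can still implement an interface.
public struct Money : IComparable<Money>
{
    public decimal Amount;
    public int CompareTo(Money other) { return Amount.CompareTo(other.Amount); }
}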
They also make unit testing easier, but I don't believe that is a strong argument, since you can build a system without unit tests (not recommended).
If your shop is performing automated testing, interfaces are a great boon to dependency injection and being able to test a unit of software in isolation.
The problem with the inheritance argument is that you'll either have a gigantic god class or a hierarchy so deep it'll make your head spin. On top of that, you'll end up with methods on a class that you don't need or that don't make any sense.
I see a lot of "no multiple inheritance" and, while that's true, it probably won't faze your team, because you can have multiple levels of inheritance to get what they'd want.
An IDisposable implementation comes to mind. Your team would put a Dispose method on the Object class and let it propagate through the system whether or not it made sense for a given object.
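As a minimal sketch of the alternative (LogFile and TaxCalculator are invented names), IDisposable is applied only where it means something:
using System;
using System.IO;

// Only the type that actually owns a resource opts in to IDisposable.
public class LogFile : IDisposable
{
    private readonly StreamWriter _writer;
    public LogFile(string path) { _writer = new StreamWriter(path); }
    public void Write(string message) { _writer.WriteLine(message); }
    public void Dispose() { _writer.Dispose(); }
}

// A plain calculation class never has to carry a Dispose method it doesn't need.
public class TaxCalculator
{
    public decimal Calculate(decimal amount) { return amount * 0.2m; }
}
Callers can then write using (var log = new LogFile("app.log")) { ... } for LogFile alone, instead of every object in the system pretending to be disposable.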
An interface declares a contract that any object implementing it will adhere to. This makes ensuring quality in code much easier than trying to enforce structure through written documents or verbal agreement: the moment a class declares that it implements the interface, the requirements/contract are clear, and the code won't compile until you've implemented that interface completely and type-safely.
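As a small sketch of that compile-time enforcement (IExporter and CsvExporter are invented for illustration):
public interface IExporter
{
    void Export(string path);
}

// If the Export method below is removed, the build fails with
// "'CsvExporter' does not implement interface member 'IExporter.Export(string)'".
public class CsvExporter : IExporter
{
    public void Export(string path)
    {
        // write the CSV file to the given path here
    }
}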
There are many other great reasons for using interfaces (listed here), but they probably don't resonate with management quite as well as a good, old-fashioned 'quality' statement ;)
Well, my first reaction is that if you have to explain why you need interfaces, it's an uphill battle anyway :)
That being said, beyond all the reasons mentioned above, interfaces are the only way to achieve loosely coupled programming and n-tier architectures where you need to update or replace components on the fly. In my personal experience, however, that was too esoteric a concept for the head of the architecture team, with the result that we lived in DLL hell - in the .NET world, no less!
Please forgive me for the pseudo code in advance!
Read up on the SOLID principles; several of them are reasons for using interfaces. Interfaces allow you to decouple your dependencies from their implementations. You can take this a step further by using a tool like StructureMap to really make the coupling melt away.
Where you might be used to:
Widget widget1 = new Widget();
This specifically says that you want to create a new instance of Widget. However, if you do this inside a method of another object, you are saying that the other object is directly dependent on Widget. So we could instead write something like:
public class AnotherObject
{
public void SomeMethod(Widget widget1)
{
//..do something with widget1
}
}
We are still tied to the use of Widget here. But at least this is more testable in that we can inject the implementation of Widget into SomeMethod. Now if we were to use an Interface instead we could further decouple things.
public class AnotherObject
{
public void SomeMethod(IWidget widget1)
{
//..do something with widget1
}
}
Notice that we are now not requiring a specific implementation of Widget; instead we are asking for anything that conforms to the IWidget interface. This means that anything can be injected: in day-to-day use of the code we can inject an actual Widget, and when we want to test this code we can inject a fake/mock/stub (depending on your understanding of these terms) and test our code.
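For instance, here is a rough sketch of such a test, assuming (purely for illustration) that IWidget exposes a single Spin() method which SomeMethod calls:
// Hypothetical shape of IWidget, for illustration only.
public interface IWidget
{
    void Spin();
}

// A hand-rolled fake that just records whether it was used.
public class FakeWidget : IWidget
{
    public bool WasSpun;
    public void Spin() { WasSpun = true; }
}

public class AnotherObjectTests
{
    public void SomeMethod_UsesTheWidget()
    {
        var fake = new FakeWidget();
        new AnotherObject().SomeMethod(fake);
        // assert on fake.WasSpun with your test framework of choice
        // (assuming SomeMethod calls Spin on the widget it receives)
    }
}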
But how can we take this further? With StructureMap we can decouple this code even more. With the last code example, our calling code may look something like this:
public class AnotherObject
{
public void SomeMethod(IWidget widget1)
{
//..do something with widget1
}
}
public class CallingObject
{
public void AnotherMethod()
{
IWidget widget1 = new Widget();
new AnotherObject().SomeMethod(widget1);
}
}
As you can see in the above code, we removed the dependency in SomeMethod by passing in an object that conforms to IWidget. But in CallingObject.AnotherMethod we still have a dependency on the concrete Widget. We can use StructureMap to remove this dependency too!
[PluginFamily("Default")]
public interface IAnotherObject
{
...
}
[PluginFamily("Default")]
public interface ICallingObject
{
...
}
[Pluggable("Default")]
public class AnotherObject : IAnotherObject
{
private IWidget _widget;
public AnotherObject(IWidget widget)
{
_widget = widget;
}
public void SomeMethod()
{
//..do something with _widget
}
}
[Pluggable("Default")]
public class CallingObject : ICallingObject
{
public void AnotherMethod()
{
ObjectFactory.GetInstance<IAnotherObject>().SomeMethod();
}
}
Notice that nowhere in the above code are we instantiating an actual implementation of AnotherObject. Because everything is wired up for StructureMap, we can let StructureMap pass in the appropriate implementations depending on when and where the code is run. Now the code is truly flexible, in that we can specify, via configuration or programmatically in a test, which implementation we want to use. This configuration can be done on the fly, as part of a build process, etc. But it doesn't have to be hard-wired anywhere.
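The attribute-based wiring above comes from an older StructureMap release. As a rough sketch, and assuming a newer StructureMap version with the Container API, the same idea looks roughly like this (a test can also skip the container entirely and construct AnotherObject with a fake directly):
using StructureMap;

public static class CompositionRoot
{
    public static ICallingObject Build()
    {
        // Map each abstraction to the implementation we want in production.
        var container = new Container(x =>
        {
            x.For<IWidget>().Use<Widget>();
            x.For<IAnotherObject>().Use<AnotherObject>();
            x.For<ICallingObject>().Use<CallingObject>();
        });
        return container.GetInstance<ICallingObject>();
    }
}

// In a test, no container is needed at all:
// var sut = new AnotherObject(new FakeWidget());
// sut.SomeMethod();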
Apologies, as this doesn't directly answer your question regarding a case for interfaces.
However, I suggest getting the person in question to read:
Head First Design Patterns
-- Lee
I don't understand how it's extra overhead.
Interfaces provide flexibility, manageable code, and reusability. Coding to an interface, you don't need to worry about the concrete implementation code or logic of the particular class you are using. You just expect a result. Many classes have different implementations of the same capability (StreamWriter, StringWriter, XmlWriter); you do not need to worry about how they implement the writing, you just need to call it.
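As a small illustration (StreamWriter and StringWriter actually share the abstract TextWriter base class rather than an interface, but the benefit of coding to an abstraction is the same): a method that accepts the abstraction doesn't care where the text ends up.
using System.IO;

public static class ReportPrinter
{
    // Works with any TextWriter: a file, an in-memory string, the console, ...
    public static void Print(TextWriter writer)
    {
        writer.WriteLine("Report header");
        writer.WriteLine("Report body");
    }
}

// Production: write to a file.
// using (var file = new StreamWriter("report.txt")) { ReportPrinter.Print(file); }
//
// Test: capture the output in memory and inspect it.
// var buffer = new StringWriter();
// ReportPrinter.Print(buffer);
// string output = buffer.ToString();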
