Adding methods to existing class - inheritance/injection - C#

I have a dozen service classes that were built for WCF, e.g.:
public class BookingService : IBookingService
{
public void BookTheThing(int ThingID) { .. }
}
We are aiming to reuse these classes as direct libraries (not WCF) and create a separate service library which would allow us to preserve and expose those existing methods and add new ones. Here are 2 possibilities I've come up with based on my limited experience:
*Option#1 - Inject original class and create identical methods to expose its functionality:
public class BookingServiceNew : IBookingServiceNew
{
private readonly IBookingService _baseService;
public BookingServiceNew(IBookingService service) { _baseService = service; }
public void BookTheThing(int ThingId) { _baseService.BookTheThing(ThingId); }
public bool OurNewMethod1(int ThingId) { return true; }
public int OurNewMethod2(int ThingId) { return 1; }
}
*Option#2 - Inherit original service class, which would automatically expose its methods as part of the class, and then add our own stuff
public class BookingServiceNew : BookingService, IBookingServiceNew
{
public bool OurNewMethod1(int ThingId) { return true; }
public int OurNewMethod2(int ThingId) { return 1; }
}
Option#1 seems like it will involve more code and duplication, having to create a stub for every method in the implementation and interface. Option#2 seems like it could have some issues with dependency injection on the client, where working against IBookingServiceNew would only provide access to OurNewMethod1 & OurNewMethod2.
Again, these options I've come up with are based on my very limited experience and I would appreciate your thoughts and suggestions on a better approach/practice/pattern to follow.
Thanks

I'd recommend going with option 1 (composition) over inheritance. Yes, this requires additional boilerplate code to expose each method of the inner service. However, this code has no logic, so we're not really "repeating" anything. Furthermore, by using composition you gain a ton of flexibility down the line; if you decide you don't want to expose the same interface in IBookingServiceNew as in IBookingService, you can simply remove/change those methods without modifying the original BookingService implementation. You can also easily swap in a new implementation of IBookingService (e.g. a mock in a unit test).
In contrast, using class inheritance to avoid the boilerplate saves some code in the short run, at a big cost to flexibility and maintenance. For one, you give up the ability to extend a different base class in the future. Now, your BookingService class must be designed for inheritance; you'll have to be careful about which internal methods and state are exposed to the subclass, and you need to worry about introducing conflicts with methods in the derived class. In general, the API exposed by a class which you expect to be extended is much more complex and harder for the consumer to understand than the one exposed by an interface. As a general rule, I try to avoid using class inheritance unless I will actually be making use of polymorphism (as opposed to just including methods from the base class). In this case, you're already using interfaces, so you have no need for the class polymorphism.
Finally, note that your concern about IBookingServiceNew not exposing the methods on IBookingService is easily addressed by either (1) putting those methods on IBookingServiceNew as well or (2) having IBookingServiceNew extend IBookingService.
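To make option (2) from that last paragraph concrete, here is a minimal sketch of the composition approach with IBookingServiceNew extending IBookingService (the names follow the question; the method bodies are stand-ins, since the originals are elided):

```csharp
public interface IBookingService
{
    void BookTheThing(int thingId);
}

// The new interface extends the old one, so a client working against
// IBookingServiceNew automatically sees BookTheThing as well.
public interface IBookingServiceNew : IBookingService
{
    bool OurNewMethod1(int thingId);
}

public class BookingServiceNew : IBookingServiceNew
{
    private readonly IBookingService _baseService;

    public BookingServiceNew(IBookingService service) => _baseService = service;

    // Boilerplate delegation to the wrapped service; no logic lives here.
    public void BookTheThing(int thingId) => _baseService.BookTheThing(thingId);

    public bool OurNewMethod1(int thingId) => true;
}
```

Because the wrapper takes the old service through its interface, a test can hand it a fake IBookingService and verify the delegation without touching the real WCF-era class.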

What about extension methods?
public static class BookingServiceExtensions
{
public static bool OurNewMethod1(this IBookingService service, int ThingId)
{
...
}
}
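Since the body is elided above, here is a filled-in sketch (the body is an assumption for illustration only) together with the trade-off it implies:

```csharp
public interface IBookingService
{
    void BookTheThing(int thingId);
}

public static class BookingServiceExtensions
{
    // Callable as service.OurNewMethod1(5) on any IBookingService.
    // The body is a stand-in; the original answer leaves it open.
    public static bool OurNewMethod1(this IBookingService service, int thingId)
    {
        service.BookTheThing(thingId);
        return thingId > 0;
    }
}
```

Note that extension methods can only use the public surface of IBookingService, bind statically, and cannot be overridden per implementation or mocked independently; that is the main trade-off versus the wrapper class in option 1.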

Related

How to avoid propagation of type constraints?

In the underlying use case I'm dealing with different serialization formats, and depending on the configured format (via DI) a certain factory is used. However, I think I've run into a more general problem here, therefore I simplified the following use case.
Suppose I have a simple interface with a generic method:
public interface IProducerFactory
{
public IProducer<T> CreateProducer<T>();
}
Now I have a concrete implementation ProducerFactoryA, which works just fine. Even though BuilderA comes from an external library, I can easily create an instance implementing IProducer<T>:
public class ProducerFactoryA : IProducerFactory
{
public IProducer<T> CreateProducer<T>()
{
// some configuration ...
return new BuilderA<T>().Build();
}
}
For ProducerFactoryB, I need to use the external library BuilderB. However, BuilderB has type constraints, which would require me to add them to the interface IProducerFactory and basically everywhere where I want to use it:
public class ProducerFactoryB : IProducerFactory
{
public IProducer<T> CreateProducer<T>()
where T : ISomeConstraint, new()
{
// some configuration ...
return new BuilderB<T>().Build();
}
}
My question is, how can I avoid such a situation? I clearly don't want ISomeConstraint which comes from an external library to be part of my interface and propagate this type throughout my codebase. Furthermore, it would also break ProducerFactoryA.
Is there a way to check these constraints to satisfy the compiler without adding them on the method/interface level?
The IProducerFactory makes a promise that any implementation can produce any kind of object, without restrictions. So if you want restrictions you would need to propagate them or get rid of them.
An alternative would be to declare the generic type as part of the interface:
public interface IProducerFactory<T>
{
public IProducer<T> CreateProducer();
}
public class ProducerFactoryB : IProducerFactory<ISomeConstraint>
{
public IProducer<ISomeConstraint> CreateProducer()
{
...
}
}
This makes a much weaker promise, i.e. if you somehow get a producer factory for a specific type, it can create producers of that type. You could also mix the patterns, and have one interface that can create producers for any type, and one that can only create specific types.
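A sketch of that mixed approach follows (IProducer and the producer implementation are invented stand-ins for the question's builders; here the factory class is itself generic, so the constraint lives only on the one implementation that needs it):

```csharp
public interface IProducer<T>
{
    T Produce();
}

// The open promise from the question: a producer for any T.
public interface IProducerFactory
{
    IProducer<T> CreateProducer<T>();
}

// The closed promise from the answer: a producer only for the given T.
public interface IProducerFactory<T>
{
    IProducer<T> CreateProducer();
}

public interface ISomeConstraint { }

// The constraint stays on this class and does not leak into
// IProducerFactory<T> or any other implementation.
public class ProducerFactoryB<T> : IProducerFactory<T>
    where T : ISomeConstraint, new()
{
    // Stand-in for the question's new BuilderB<T>().Build().
    public IProducer<T> CreateProducer() => new NewingProducer<T>();
}

public class NewingProducer<T> : IProducer<T> where T : new()
{
    public T Produce() => new T();
}
```

Callers that genuinely need "any T" keep depending on IProducerFactory; callers that only ever produce one type take an IProducerFactory<T> and never see ISomeConstraint.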

How can I avoid constantly having to change an interface when adding new features to a system?

At my work, I'm trying to create more modular systems, as we tend to use similar mechanics in our games that have minor variances. To do this, I have been making use of interfaces, but have been getting stumped on certain problems, particularly ones relating to the addition of small features.
EXAMPLE:
Take for instance our evolution system. I have created the IEvolvable interface, which has a property for the evolution level and an IncreaseEvolution() method.
public interface IEvolvable
{
int evolution { get; }
bool IncreaseEvolution(int numEvolutions);
}
I then have an implementation of this interface on a Character class, and based on some conditions via my Evolution handling class, I want to evolve my character.
public class EvolutionHandler
{
public IEvolvable evolvable;
public void TryEvolveCharacter()
{
if(someCondition)
{
evolvable.IncreaseEvolution(1);
}
}
}
Then, later down the line, we say we want the character to evolve based on level! Fantastic. We have an ILevellable interface which contains Level, XP, etc.
public interface ILevellable
{
int Level{ get; }
int MaxLevel{get;}
int XP {get;}
bool LevelUp(int numLevels);
}
We can use this data to control when evolution takes place based on the change in level. But here's my problem:
My evolve handler class interfaces with IEvolvable... not ILevellable... So what do I do?
I can have IEvolvable extend ILevellable or vice-versa... or I can create a new interface which extends IEvolvable and ILevellable. Now I also have to modify my evolve handler to accommodate these changes.
But what happens if we don't want the evolve handler to take the Level into consideration anymore in our new game? Do we use the old code? Was I supposed to extend my old code to include the ILevellable interfacing?
public interface ILevelEvolver : ILevellable, IEvolvable
{
}
public class EvolutionHandler2
{
public ILevelEvolver levelEvolvable;
public void TryEvolveCharacter()
{
if(levelEvolvable.Level > 10)
{
levelEvolvable.IncreaseEvolution(1);
}
}
}
The key words are:
separate what varies from what stays the same
one of the SOLID principles: open for extension, closed for modification
Finally, in your case I would use the Strategy pattern:
public interface IEvolutionChecker{
bool AllowEvolution();
}
public class EvolutionCheckerA : IEvolutionChecker{
private ILevellable levelEvolvable;
public EvolutionCheckerA(ILevellable levelEvolvable){
this.levelEvolvable = levelEvolvable;
}
public bool AllowEvolution(){
return levelEvolvable.Level > 10;
}
}
public class EvolutionCheckerB : IEvolutionChecker{
private IEvolvable evolvable;
public EvolutionCheckerB(IEvolvable evolvable){
this.evolvable = evolvable;
}
public bool AllowEvolution(){
return someCondition;
}
}
public class EvolutionHandler2
{
public IEvolvable evolvable;
public IEvolutionChecker EvolutionChecker {get;set;}
public void TryEvolveCharacter()
{
if(EvolutionChecker.AllowEvolution())
{
evolvable.IncreaseEvolution(1);
}
}
}
The interfaces should not extend each other. Keep them separated. Also, you should keep concepts separated. By that reasoning, EvolutionHandler should only accept IEvolvable.
In the TryEvolveCharacter method, you can check whether the property is an ILevellable.
Take a look at the code:
class EvolutionHandler
{
public IEvolvable Evolvable { get; set; }
public void TryEvolveCharacter()
{
if (Evolvable is ILevellable levellable && levellable.Level > 10)
{
Evolvable.IncreaseEvolution(1);
}
else if (someCondition)
{
Evolvable.IncreaseEvolution(1);
}
}
}
So in the future, if a character implements ILevellable, that level will be considered; if not, someCondition takes effect.
Once you run into these types of issues, it becomes evident, I think, that OOP has limitations, or rather that it makes some things too easy. That doesn't mean it should be scrapped entirely and something else adopted; there's a lot we can still use it for. What if, rather than making meaningful changes directly to the interface, you pass around a service interface that acts as an adapter to the internal interface?
public interface IEvolutionService {
void TryEvolveCharacter(IEvolvable evolvable);
}
The concrete implementation can have something like
public void TryEvolveCharacter(IEvolvable evolvable){
// assumes IEvolvable has since been extended to expose Level
if (evolvable.Level > 10) {
evolvable.IncreaseEvolution(1);
// ...maybe do something new that the IEvolvable just exposed,
// but without changing our consumed interface!
}
}
It does add code and things to maintain, but you have options there too: a single service can stand in for multiple interfaces, but then you are violating the Single Responsibility Principle in SOLID and basically making things more complex than they should be in an effort to make them less complex.
You could make this a method on a static class, although that interferes with testability, so I'd suggest refactoring and adding in a new service to handle things like service.TryEvolveCharacter(someIEvolvable). You'd still have to maintain the interface on your public-facing service, but that could be more manageable than the raw interface with nothing abstracted in front of it.
I gave my answer to be as close to your question as possible, but to me it is still less than ideal. I would consider having immutable structs (which can implement interfaces, and also stick to the L2 CPU cache) for the data, and passing those to services (which would be pure functions, that is to say stateless; they only deal with what is passed in). If you are writing game code and performance is an issue, that's going to be very useful.
If you were only using games as a metaphor maybe less so :)
A helpful article on structs, L2, and performance
In many cases, having an interface that includes members which would be meaningful for some implementations but not others can be a better pattern than trying to use different interfaces for different combinations of functionality. As a simple example, suppose Java or .NET had included in their basic enumerable interface a function to report a count if available, along with one to indicate if and how the count would be performed. Then a wrapper class that concatenates two enumerations could efficiently report how many elements were in the combined enumeration if the constituent enumerations supported a count function, and could also let clients know whether its count function would be efficient and/or cacheable.
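A rough sketch of that idea in C# (all names here are invented for illustration; this is not an existing BCL interface):

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// An enumerable that can *optionally* report its count cheaply.
public interface ICountableEnumerable<T> : IEnumerable<T>
{
    // Returns false when counting would require a full enumeration.
    bool TryGetCount(out int count);
}

public class CountableList<T> : ICountableEnumerable<T>
{
    private readonly List<T> _items;
    public CountableList(IEnumerable<T> items) => _items = items.ToList();
    public bool TryGetCount(out int count) { count = _items.Count; return true; }
    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

// The concatenating wrapper from the example: it can report a count
// efficiently exactly when both constituents can.
public class ConcatEnumerable<T> : ICountableEnumerable<T>
{
    private readonly ICountableEnumerable<T> _a, _b;
    public ConcatEnumerable(ICountableEnumerable<T> a, ICountableEnumerable<T> b)
    {
        _a = a;
        _b = b;
    }

    public bool TryGetCount(out int count)
    {
        count = 0;
        if (_a.TryGetCount(out var ca) && _b.TryGetCount(out var cb))
        {
            count = ca + cb;
            return true;
        }
        return false;
    }

    public IEnumerator<T> GetEnumerator() => _a.Concat(_b).GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

Implementations that cannot count cheaply simply return false from TryGetCount, so one interface serves both kinds of collection without a combinatorial explosion of interface types.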
Another pattern that can be useful is for an interface to include an asXX function, which a class may implement either by returning a reference to itself (if it supports XX functionality) or by constructing a wrapper object of suitable type. If XX is a wrapper-class type, functionality may be added to the wrapper class without having to change the interface that includes the asXX member, or any implementations thereof.
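A minimal sketch of that asXX pattern (the names Stats, IShape, and Square are invented for illustration):

```csharp
// Wrapper-class type: new members can be added here later without
// touching IShape or any class that implements it.
public class Stats
{
    private readonly IShape _shape;
    public Stats(IShape shape) => _shape = shape;
    public double Area() => _shape.ComputeArea();
}

public interface IShape
{
    double ComputeArea();
    // Implementations return themselves if they already provide the
    // Stats functionality, or construct a wrapper as below.
    Stats AsStats();
}

public class Square : IShape
{
    public double Side = 2;
    public double ComputeArea() => Side * Side;
    public Stats AsStats() => new Stats(this);
}
```

Consumers only ever ask for AsStats(), so the interface stays stable while the wrapper grows.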

Using a public method of derived class that is not in interface definition

New to OOP here. I have defined an interface with one method, and in my derived class I defined another public method. My client code is conditionally instantiating a class of the interface type, and of course the compiler doesn't know about the method in one of the derived classes as it is not part of the underlying interface definition. Here is what I am talking about:
public interface IFileLoader
{
public bool Load();
}
public class FileLoaderA : IFileLoader
{
public bool Load() { /* implementation */ }
public void SetStatus(FileLoadStatus status)
{
//implementation
}
}
public class FileLoaderB : IFileLoader
{
public bool Load() { /* implementation */ }
//note B does not have a SetStatus method
}
public enum FileLoadStatus
{
Started,
Done,
Error
}
// client code
IFileLoader loader;
if (Config.UseMethodA)
{
loader = new FileLoaderA();
}
else
{
loader = new FileLoaderB();
}
//does not know about this method
loader.SetStatus(FileLoadStatus.Done);
I guess I have two questions:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
I know people talk of IoC/DI all the time. Being new to OOP, what is the advantage of using an IoC container in order to say, "when my app asks for an IFileLoader type, use concrete class x", as opposed to simply using an App.Config file to get the setting?
Referring to your two questions and your other post I'd recommend the following:
What should I be doing to find out if the object created at run-time has the method I am trying to use? Or is my approach wrong?
You don't necessarily need to find out the concrete implementation at runtime in your client code. Following that approach, you kind of defeat the crucial purpose of an interface. Hence it's rather useful to just naïvely use the interface and let the concrete logic behind it decide what to do.
So in your case, if one implementation's just able to load a file - fine. If your other implementation is able to do the same and a bit more, that's fine, too. But the client code (in your case your console application) shouldn't care about it and should just use Load().
Maybe some code says more than thousand words:
public class ThirdPartyLoader : IFileLoader
{
public bool Load(string fileName)
{
// simply acts as a wrapper around your 3rd party tool
}
}
public class SmartLoader : IFileLoader
{
private readonly ICanSetStatus _statusSetter;
public SmartLoader(ICanSetStatus statusSetter)
{
_statusSetter = statusSetter;
}
public bool Load(string fileName)
{
_statusSetter.SetStatus(FileStatus.Started);
// do whatever's necessary to load the file ;)
_statusSetter.SetStatus(FileStatus.Done);
return true;
}
}
Note that the SmartLoader does a bit more. But as a matter of separation of concerns its purpose is the loading part. The setting of a status is another class' task:
public interface ICanSetStatus
{
void SetStatus(FileStatus fileStatus);
// maybe add a second parameter with information about the file, so that an
// implementation of this interface knows everything that's needed
}
public class StatusSetter : ICanSetStatus
{
public void SetStatus(FileStatus fileStatus)
{
// do whatever's necessary...
}
}
Finally, your client code could look something like the following:
static void Main(string[] args)
{
bool useThirdPartyLoader = GetInfoFromConfig();
IFileLoader loader = FileLoaderFactory.Create(useThirdPartyLoader);
var files = GetFilesFromSomewhere();
ProcessFiles(loader, files);
}
public static class FileLoaderFactory
{
public static IFileLoader Create(bool useThirdPartyLoader)
{
if (useThirdPartyLoader)
{
return new ThirdPartyLoader();
}
return new SmartLoader(new StatusSetter());
}
}
Note that this is just one possible way to do what you're looking for without having to determine IFileLoader's concrete implementation at runtime. There may be other, more elegant ways, which furthermore leads me to your next question.
I know people talk of IoC/DI all the time. Being new to OOP, what is the advantage of using an IoC [...], as opposed to simply using an App.Config file to get the setting?
First of all separating of classes' responsibility is always a good idea especially if you want to painlessly unittest your classes. Interfaces are your friends in these moments as you can easily substitute or "mock" instances by e.g. utilizing NSubstitute. Moreover, small classes are generally more easily maintainable.
The attempt above already relies on some sort of inversion of control. The Main method knows barely anything about how to instantiate a loader (although the factory could do the config lookup as well; then Main wouldn't know anything, it would just use the instance).
Broadly speaking: Instead of writing the boilerplate factory instantiation code, you could use a DI-Framework like Ninject or maybe Castle Windsor which enables you to put the binding logic into configuration files which might best fit your needs.
To make a long story short: You could simply use a boolean appSetting in your app.config that tells your code which implementation to use. But you could use a DI-Framework instead and make use of its features to easily instantiate other classes as well. It may be a bit oversized for this case, but it's definitely worth a look!
Use something like:
if((loader as FileLoaderA) != null)
{
((FileLoaderA)loader).SetStatus(FileLoadStatus.Done);
}
else
{
// Do something with it as FileLoaderB type
}
IoC is normally used in situations where your class depends on another class that needs to be set up first; the IoC container can instantiate/set up an instance of that class for your class to use and inject it into your class, usually via the constructor. It then hands you an instance of your class that is set up and ready to go.
EDIT:
I was just trying to keep the code concise and easy to follow. I agree that this is not the most efficient form for this code (it actually performs the cast twice).
For the purpose of determining if a particular cast is valid Microsoft suggests using the following form:
var loaderA = loader as FileLoaderA;
if(loaderA != null)
{
loaderA.SetStatus(FileLoadStatus.Done);
// Do any remaining FileLoaderA stuff
return;
}
var loaderB = loader as FileLoaderB;
if(loaderB != null)
{
// Do FileLoaderB stuff
return;
}
I do not agree with using is in the if. The is keyword was designed to determine if an object was instantiated from a class that implements a particular interface, rather than if a cast is viable. I have found it does not always return the expected result (especially if a class implements multiple interfaces through direct implementation or inheritance of a base class).

c# dependency injection with interfaces and hiding internal

I am trying to refactor some classes in a project to make them testable using interfaces and dependency injection. But I struggle with the following:
public interface IInterfaceA
{
void SomePublicMethod();
}
public class ConcreteObject : IInterfaceA
{
public void SomePublicMethod() { ... }
public void SomeOhterMethod() { ... }
public void YetAnotherMethod() { ... }
}
public class AnotherConcreteObject
{
private IInterfaceA _myDependency;
public AnotherConcreteObject( IInterfaceA myDependency )
{
_myDependency=myDependency;
}
}
So far everything is fine, pretty standard code. AnotherConcreteObject needs to call SomeOtherMethod, but I don't want other classes (e.g. in a different assembly) to be able to call SomeOtherMethod. So externally SomePublicMethod should be visible, but SomeOtherMethod should not be. Only instances of AnotherConcreteObject should be able to call SomeOtherMethod. SomeOtherMethod will e.g. set an internal property which is used later by YetAnotherMethod to determine what should happen. The internal property is set to a default value in all other cases, e.g. when YetAnotherMethod is called from any other class than AnotherConcreteObject.
When not using interfaces, this is possible because AnotherConcreteObject is in the same assembly as ConcreteObject so it has access to internal properties and methods. Classes in a different assembly can not set this property or call the method because they don't have access to internal properties and methods.
There are a couple of possible solutions, depending on what exactly you are doing:
1. If SomePublicMethod is public but SomeOtherMethod is internal, then don't put them in the same class; they likely do very different things, and so the separation of concerns principle comes into play.
2. If ConcreteObject doesn't have state and doesn't cause side effects, or if you aren't going to run tests against it in parallel (i.e. it has unit behaviour), then it may not need mocking, so access it directly.
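One way to keep SomeOtherMethod off the public interface while still letting AnotherConcreteObject call it through an abstraction is to declare a second, internal interface in the same assembly. A sketch, assuming both classes live in one assembly (names follow the question; the method bodies are stand-ins):

```csharp
public interface IInterfaceA
{
    void SomePublicMethod();
}

// internal: invisible outside this assembly, so external consumers that
// receive an IInterfaceA never see SomeOtherMethod at all.
internal interface IInterfaceAInternal : IInterfaceA
{
    void SomeOtherMethod();
}

public class ConcreteObject : IInterfaceAInternal
{
    public int PublicCalls, InternalCalls;
    public void SomePublicMethod() => PublicCalls++;
    public void SomeOtherMethod() => InternalCalls++;
}

public class AnotherConcreteObject
{
    private readonly IInterfaceAInternal _myDependency;

    // The constructor must be internal because its parameter type is
    // internal; other assemblies obtain instances via a factory or DI.
    internal AnotherConcreteObject(IInterfaceAInternal myDependency)
        => _myDependency = myDependency;

    public void DoWork() => _myDependency.SomeOtherMethod();
}
```

AnotherConcreteObject still depends on an interface (so it stays mockable within the assembly, or from test assemblies via InternalsVisibleTo), while SomeOtherMethod remains hidden from everything else.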

OOD, inheritance, and Layer Supertype

I have a question concerning holding common code in a base class and having the derived class call it, even though the derived class's trigger method has been dispatched from the base. So, base->derived->base type call stack.
Does the following look OK, or does it smell? I have numbered the flow steps...
public abstract class LayerSuperType
{
public void DoSomething() // 1) Initial call from client
{
ImplementThis(); // 2) Polymorphic dispatch
}
protected abstract void ImplementThis();
protected void SomeCommonMethodToSaveOnDuplication(string key) // 4)
{
Configuration config = GetConfiguration(key);
}
}
public class DerivedOne : LayerSuperType
{
protected override void ImplementThis() // 2)
{
SomeCommonMethodToSaveOnDuplication("whatever"); // 3) Call method in base
}
}
public class DerivedTwo : LayerSuperType
{
protected override void ImplementThis() // 2)
{
SomeCommonMethodToSaveOnDuplication("something else"); // 3) Call method in base
}
}
That looks absolutely fine. Perfect example of why you'd use an abstract class over an interface. It's a bit like a strategy pattern and I have used this fairly regularly and successfully.
Make sure that what the class is doing still deals with one 'concern', though, doing only one task. If your base class does repository access but the objects represent documents, don't put that functionality in the base class; use a separate repository pattern/object.
Looks like a very simplified Template Method Pattern where your sub-classes do some specific kinds of things at the right points in the implementation of your algorithm, but the overall flow is directed by a method on the base class. You've also provided some services to your sub-classes in the form of base class methods; that's ok too as long as you're good as far as SOLID goes.
Why not public abstract void DoSomething() and forget about ImplementThis() altogether?
The only reason I can see to leave ImplementThis() is if you want to maintain a consistent interface with DoSomething() which later on down the road will allow the signature of ImplementThis() to change without a breaking change to callers.
I agree that you should maintain a single concern with the class's responsibility but from an overall OOP perspective this looks fine to me. I've done similar on many occasions.
It does smell a little that SomeCommonMethodToSaveOnDuplication is being called in two different ways. It seems to be doing two unrelated things. Why not have two methods?
