I have a solution with 2 projects.
One, Raven, is a simple base that provides data for the second project, PPather, to do stuff with. The second project depends on the first to compile so to build it, I add a reference to Raven. All works well so far.
Now I want Raven to launch PPather. But it can't see the PPather namespace, so I can't. All efforts to resolve this lead to circular reference errors.
Anyone know how I can get Raven to see the namespace of the PPather project that depends on it?
You can't - there is no way to reference assemblies in a circular manner like you want to do. Most likely you have not properly designed these assemblies if you need to create a circular reference.
Your first assembly is a dependency so there should not be any code in there that knows about anything other than its dependencies. Once your assemblies become "smart" and begin to have knowledge of anything outside their own dependencies you will begin to have serious maintenance and scalability headaches. I would look into reorganizing your code in such a manner that you do not need to create the circular reference.
As Andrew says, you can't and it doesn't make much sense that you'd want to.
Basically, do one of the following:
Merge the assemblies; if they really inter-depend tightly, then they really should not be separate in the first place.
Re-design the assemblies so that they do not directly depend on each other in both directions; for instance, make assembly A depend on an interface defined in assembly C, and have assembly B implement this interface (both depend on C).
There is a ton of stuff you can do to achieve this if you are not willing to combine them into one component. All basically strive to either invert one of the dependencies or to create a third component on which both depend.
It seems that Raven is the starting point, so one possible solution is to create a base class or interface in the PPather component which reflects the feature set that PPather seeks in Raven. Raven can then implement this base class and include a "this" pointer when instantiating/invoking PPather. PPather will expect a pointer to the base class (or interface) in its own assembly, and therefore will never "know of" Raven except through its own abstraction. Thus the circular dependency is broken (by means of dependency injection).
It is fortunate that you can not add circular references - because they cause maintenance nightmares.
You want Raven to launch PPather? Is PPather a console/Windows application? Use Process.Start to do that (and store the location of PPather in the registry somewhere).
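For example, a minimal sketch (the path is hypothetical; in practice you'd read it from the registry as suggested):

using System.Diagnostics;

// Launch PPather as a separate process rather than referencing its assembly.
string ppatherPath = @"C:\Program Files\PPather\PPather.exe"; // hypothetical location
Process.Start(ppatherPath);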
Alternatively create interfaces for the classes that you need out of PPather - and make the classes in PPather implement those interfaces.
interface IPPatherInterface // Inside of Raven.
{
    void Foo();
}

class PPatherClass : IPPatherInterface // Inside of PPather
{
    public void Foo()
    {
        // ...
    }
}

static class SomeRavenClass // Inside of Raven (static, per the "Static maybe?" idea)
{
    private static IPPatherInterface _ppather;

    public static void SupplyPPatherClass(IPPatherInterface item)
    {
        _ppather = item; // Raven now talks to PPather only through the interface
    }
}
You now have a way for PPather to supply that interface's implementation to Raven.
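For example, PPather's startup code could hand its implementation to Raven like this (a sketch against the code above):

// Inside of PPather, at startup:
SomeRavenClass.SupplyPPatherClass(new PPatherClass());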
Branch out the classes in Raven that PPather needs to use into a separate assembly, and have both PPather and Raven reference it.
Although, to be honest, if Raven needs to run PPather then I think your design is a bit off; you should break your code up into something more manageable.
To preface this, it is ultimately likely that we will end up injecting the majority of classes implemented in the assembly but for the purpose of this question I am only interested in a single class.
A colleague is in the process of implementing a new service following a clean architecture pattern and we were discussing the initial solution/project layout. I noted that there were a number of projects that were not sat under the infrastructure project although they were responsible for calling external APIs. In my mind, unless we reach a level of complexity that dictates splitting these into separate projects, infrastructure code should remain in a single project. There were 2 additional projects that each contained a single class implementing the interface defined in core.
The only argument that he provided that I felt might have some influence was on the subject of DI. The comment he made was that if the class was contained within a large assembly then there would be an increased overhead injecting that class into whatever process needed it when compared to loading the same class from an assembly that only contained that class.
Essentially, what I am looking to understand is this: if Assembly A contained classes A, B, C and D, and I only injected class A, would the whole assembly be loaded before instantiating class A?
This got me thinking about whether that was a valid argument, or whether, while potentially a valid point, the overhead is so negligible that you might as well ignore it.
How could I demonstrate what the difference, if any, is between the 2 scenarios?
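The naive comparison I can think of would be something like this (a sketch; the assembly and class names are made up):

using System;
using System.Diagnostics;

// The first use of a type is what triggers loading its containing assembly,
// so time the first instantiation from each assembly.
var stopwatch = Stopwatch.StartNew();
var fromLarge = new LargeAssembly.ClassA(); // hypothetical class in the multi-class assembly
stopwatch.Stop();
Console.WriteLine("Large assembly: {0} ms", stopwatch.ElapsedMilliseconds);

stopwatch.Restart();
var fromSmall = new SmallAssembly.ClassA(); // hypothetical class in the single-class assembly
stopwatch.Stop();
Console.WriteLine("Small assembly: {0} ms", stopwatch.ElapsedMilliseconds);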
You only want to include what should be included and nothing more; you don't ever want to include an entire assembly just because you wanted to use a single class or a couple of functions.
This will not only violate things like single responsibility and the concept of modularity, but it will cause you many more issues down the line if you start doing things like this. Right now it might seem like a small overhead if it's "just one class in just two projects", but that quickly becomes 10 classes in 4 projects with 100 dependencies.
I'm not exactly clear on what you're specifically trying to do but any project/service/module/component/class should import and include only what's needed for its own functionality and nothing more. If you're using DI it should inject the exact resource that's required, nothing more. There are very few edge case scenarios where that rule can be broken, i.e. if it causes significant company costs or if there's some other valid reason for violating it.
(I don't know how your assembly libraries work so my answer is purely based on general design.)
I would appreciate it if someone would explain to me how .NET references work when a .dll is compiled.
I have two .dlls: my primary application.dll references my services.dll. The purpose of the services.dll is to provide a decoupled layer for communication with third-party services so that changes to the integrations do not affect the application directly.
To achieve this decoupling I have inherited from the service's primary object, exposing and using the new object in the main application:
// Binding and EndpointAddress are assumed here to be the WCF types from
// System.ServiceModel; the original snippet omitted the parameter types.
public class CustomClient : ServiceClient_v1
{
    public CustomClient(Binding binding, EndpointAddress address) : base(binding, address) { }
}
However, I am finding that when ServiceClient_v1 gets updated to ServiceClient_v2 and I try to just update the services.dll, my application.dll blows up saying:
Could not load type "ServiceClient_v1" from assembly services.dll
So it is still hanging onto a direct reference to that other object that I am trying to hide. I assume this is by design and simply something to do with compilation that I do not understand.
Is there a way to achieve what I want? And why is my method not working?
Since you're deriving CustomClient from ServiceClient_v1 in your application.dll, it will only work with the older version of your services.dll that contains the definition of ServiceClient_v1. As Lasse Vågsæther Karlsen pointed out, the ServiceClient_v1 class becomes part of the public declaration of CustomClient.
I believe you would benefit from applying Dependency Injection and the Liskov substitution principle in your application.
In order to achieve your goal of a truly interchangeable services.dll you need to refactor your architecture, removing application.dll's dependency on services.dll; it should be the other way around.
Define an Interface for your ServiceClient type. Both CustomClient and ServiceClient_v1 must implement this interface.
When you later update the code to use ServiceClient_v2, it should also implement the interface, which will be unchanged. Now everything continues to work without needing to re-compile the application.dll project.
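In outline, the shape would be something like this (a sketch; the Send member is illustrative):

// Defined on the application side (or in a small shared abstractions assembly),
// so application.dll no longer depends on any concrete service type.
public interface IServiceClient
{
    void Send(string message); // illustrative member
}

// In services.dll; ServiceClient_v2 later implements the same unchanged interface.
public class ServiceClient_v1 : IServiceClient
{
    public void Send(string message)
    {
        // talk to the third-party service
    }
}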
Alternatively, don't rename the ServiceClient type in services.dll when moving from v1 to v2. This is what version control systems like git or SVN are for.
How to version abstractions in .Net when applying Dependency Inversion in a high code-reuse environment
I am interested in shifting toward using Dependency Inversion in .Net, but have come across something that puzzles me.
I don’t believe it is tied to a particular method or provider of DIP, but is more a fundamental issue that perhaps others have solved. The issue I'm solving for is best laid out step-by-step in the scenario below.
Assumption / Restriction
A considerable assumption or restriction to put out there up front, is that my development team has stuck with a rule of keeping our deployed assemblies to one and only one Assembly Version, specifically version “1.0.0.0”.
Thus far, we have not supported having more than this one Assembly Version of any given assembly we’ve developed deployed on a server, for the sake of simplicity. This may be limiting, and there may be many good reasons to move away from this, but nevertheless it is currently a rule we work with. So with this practice in mind, continue below.
Scenario
You have an IDoStuff interface contained in an abstraction assembly Stuff.Abstractions.dll with 2 methods.
You compile component A.dll with a class explicitly implementing IDoStuff with 2 methods.
You move A.dll to production use, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.0.
You move Stuff.Abstractions.dll to prod, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.0.
Everything works fine. Time passes by.
You add another method (“DoMoreStuff” for example) to the IDoStuff interface so that a different Component B can call it.
(Keeping Interface Segregation OO principle in mind, let’s say the DoMoreStuff method makes sense to be in this relatively small IDoStuff interface.)
You now have IDoStuff with 3 methods in Stuff.Abstractions.dll, and you’ve built Component B to use the new 3rd method.
You move Stuff.Abstractions.dll to production use (upgrade it), Assembly Version 1.0.0.0, Assembly File Version 1.0.0.1.
(note that the file version is incremented, but the assembly version and therefore the strong name stays the same)
You move B.dll to production use, Assembly Version 1.0.0.0, Assembly File version 1.0.0.17.
You don’t do a thing to A.dll. You figure there are no changes needed at this time.
Now you call code that attempts to execute A.dll on the same production server where it had been working before. At runtime the Dependency Inversion framework resolves the IDoStuff interface to a class inside A.dll and tries to create it.
The problem is that the class in A.dll implemented the now-extinct 2-method IDoStuff interface. As one might expect, you will get an exception like this one:
Method ‘DoMoreStuff’ in type ‘the IDoStuff Class inside A.dll’ from assembly ‘strong name of assembly A.dll’ does not have an implementation.
There are two ways I can think of to deal with this scenario when I have to add a method to an existing interface:
1) Update every functionality-providing assembly that uses Stuff.Abstractions.dll to have an implementation of the new ‘DoMoreStuff’ method.
This seems like doing things the hard way, but in a brute-force way would painfully work.
2) Bend the Assumption / Restriction stated above and start allowing more than one Assembly Version to exist (at least for abstraction definition assemblies).
This would be a bit different, and make for a few more assemblies on our servers, but it should allow for the following end state:
A.dll depends on stuff.abstractions.dll, Assembly Version 1.0.0.0, Assembly File Version 1.0.0.22 (AFV doesn’t matter other than identifying the build)
B.dll depends on stuff.abstractions.dll, Assembly Version 1.0.0.1, Assembly File Version 1.0.0.23 (AFV doesn’t matter other than identifying the build)
Both happily able to execute on the same server.
If both versions of stuff.abstractions.dll are installed on the server, then everything should get along fine. A.dll should not need to be altered either. Whenever it needs mods next, you’d have the option to implement a stub and upgrade the interface, or do nothing. Perhaps it would be better to keep it down to the 2 methods it had access to in the first place if it only ever needed them.
As a side benefit, we’d know that anything referencing stuff.abstractions.dll, version 1.0.0.0 only has access to the 2 interface methods, whereas users of 1.0.0.1 have access to 3 methods.
Is there a better way or an accepted deployment pattern for versioning abstractions?
Are there better ways to deal with versioning abstractions if you’re trying to implement a Dependency Inversion scheme in .Net?
Where you have one monolithic application, it seems simple since it’s all contained – just update the interface users and implementers.
The particular scenario I’m trying to solve for is a high code-reuse environment where you have lots of components that depend on lots of components. Dependency Inversion will really help break things up and make Unit Testing feel a lot less like System Testing (due to layers of tight coupling).
Part of the problem may be that you're depending directly on interfaces which were designed with a broader purpose in mind. You can mitigate the problem by having your classes depend on abstractions which were created for them.
If you define interfaces as needed to represent the dependencies of your classes rather than depending on external interfaces, you'll never have to worry about implementing interface members that you don't need.
Suppose I'm writing a class that involves an order shipment, and I realize that I'm going to need to validate the address. I might have a library or a service that performs such validations. But I wouldn't necessarily want to just inject that interface right into my class, because now my class has an outward-facing dependency. If that interface grows, I'm potentially violating the Interface Segregation Principle by depending on an interface I don't use.
Instead, I might stop and write an interface:
public interface IAddressValidator
{
    ValidationResult ValidateAddress(Address address);
}
I inject that interface into my class and keep writing my class, deferring writing an implementation until later.
Then it comes time to implement that class, and that's when I can bring in my other service which was designed with a broader intent than just to service this one class, and adapt it to my interface.
public class MyOtherServiceAddressValidator : IAddressValidator
{
    private readonly IOtherServiceInterface _otherService;

    public MyOtherServiceAddressValidator(IOtherServiceInterface otherService)
    {
        _otherService = otherService;
    }

    public ValidationResult ValidateAddress(Address address)
    {
        // Adapt my address to whatever input the other service
        // requires, and adapt the response to whatever I want
        // to return.
        throw new NotImplementedException(); // adaptation elided in the original
    }
}
IAddressValidator exists because I defined it to do what I need for my class, so I never have to worry about having to implement interface members that I don't need. There won't ever be any.
There's always the option to version the interfaces; e.g., if there is
public interface IDoStuff
{
    void GoFirst();
    void GoSecond();
}
There could then be
public interface IDoStuffV2 : IDoStuff
{
    void GoThird();
}
Then ComponentA can reference IDoStuff and ComponentB can be written against IDoStuffV2. Some people frown on interface inheritance, but I don't see any other way to easily version interfaces.
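For example, a single implementation can then serve consumers of both versions (sketch):

// Registered once; ComponentA resolves it as IDoStuff,
// ComponentB resolves it as IDoStuffV2.
public class StuffDoer : IDoStuffV2
{
    public void GoFirst() { }
    public void GoSecond() { }
    public void GoThird() { }
}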
Specific Question:
How can I unit test my DI configuration against my codebase to ensure that all the wiring-up still works after I make a change to the automated binding detection?
I've been contributing to a small-ish codebase (maybe ~10 pages? and 20-30 services/controllers) which uses Ninject for Ioc/DI.
I've discovered that in the Ninject Kernel it is configured to BindDefaultInterface. That means that if you ask it for an IFoo, it will go looking for a Foo class.
But it does that based on the string pattern, not the C# inheritance. That means that MyFoo : IFoo won't bind, and you could also get other weird "coincidental" bindings, maybe?
It all works so far, because everyone happens to have called their WhateverService interface IWhateverService.
But this seems enormously brittle and unintuitive to me. And it specifically broke when I wanted to rename my live FilePathProvider : IFilePathProvider to AppSettingsBasedFilePathProvider (as opposed to the RootFolderFilePathProvider or the NCrunchFilePathProvider, which get used in Test), on the basis that the name should tell you what it does :)
There are a couple of alternative configurations:
BindDefaultInterfaces (note the plural), which will bind MyOtherBar to IMyOtherBar, IOtherBar & IBar (I think)
BindSingleInterface, which works if every class implements exactly 1 interface.
BindAllInterfaces, which does exactly what it sounds like.
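For reference, these selectors come from Ninject.Extensions.Conventions and get applied roughly like this (a sketch):

using Ninject;
using Ninject.Extensions.Conventions;

var kernel = new StandardKernel();
kernel.Bind(x => x.FromThisAssembly()
                  .SelectAllClasses()
                  .BindDefaultInterface()); // or .BindDefaultInterfaces(), .BindSingleInterface(), .BindAllInterfaces()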
I'd like to change to those, but I'm concerned about introducing obscure bugs whereby some class somewhere stops binding in the way that it should, but I don't notice.
Is there any way to test this / make this change with a reasonable amount of safety (i.e. more than "do it and hope", anyway!) without just trying to work out how to exercise EVERY possible component?
So, I managed to solve this...
My solution is not without its drawbacks, but it does fundamentally achieve the safety I wanted.
Summary
Roughly speaking there are 2 aspects:
Programmatically Test that every binding that the DI Kernel knows about can be resolved cleanly.
Programmatically Test that every relevant Interface used in your codebase can be resolved cleanly.
Both take roughly the same path:
Refactor your DI configuration code so that the core portion that defines bindings for the meat of your app can be run in isolation from the rest of the Startup Code.
At the start of your Test, invoke the above DI config code, so that you have a replica of the kernel object that your site uses, whose bindings you can test.
Perform some amount of Reflection to generate a list of the relevant Type objects which the kernel should be able to provide.
(Optional) filter that list to ignore classes and interfaces that you know your tests needn't concern themselves with (e.g. your code doesn't need to worry about whether the Kernel knows how to bootstrap itself, so it can ignore any Bindings in the namespace belonging to your DI framework).
Then loop over the Interface type objects you have left and ensure that kernel.Get(interfaceType) runs without an Exception for each one.
Read on for more of the Gory details...
Validating all defined Kernel Bindings
This is going to be specific to the DI framework in question, but for Ninject it's pretty hairy...
It would be much nicer if a Ninject kernel had a built-in way to expose its collection of Bindings, but alas it doesn't. But the bindings collection is available privately, so if you perform the correct Reflection incantations you can get hold of them. You then have to do some more Reflection to convert its Binding objects into {InterfaceType : ConcreteType} pairs.
I'll post the minutiae of how to extract these objects from Ninject separately, since that is orthogonal to the question of how to set up tests for DI config in general. {#Placeholder for a link to that#}
Other DI Frameworks may make this easier by providing these collections more publicly (or even by providing some sort of Validate() method directly.)
Once you have a list of the interfaces that the kernel thinks it can bind, just loop over them and test out resolving each one.
Details of this will vary by Language and Testing Framework, but I use C# and FluentAssertions, so I assigned Action resolutionAction = (() => testKernel.Get(interfaceType)) and asserted resolutionAction.ShouldNotThrow() or something very similar.
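Put together, the loop looks something like this (a sketch; GetBoundInterfaceTypes stands in for the Reflection incantations described above, and CreateApplicationKernel for the refactored DI config entry point):

// NUnit-style test; the helper names are hypothetical.
[Test]
public void EveryKernelBinding_ResolvesCleanly()
{
    var testKernel = CreateApplicationKernel();

    foreach (var interfaceType in GetBoundInterfaceTypes(testKernel))
    {
        Action resolutionAction = () => testKernel.Get(interfaceType);
        resolutionAction.ShouldNotThrow("because {0} should be resolvable", interfaceType.Name);
    }
}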
Validating all relevant interfaces in your codebase
The first half is all very well, but all it tells you is that the Bindings that your DI has picked up are well-defined. It doesn't tell you whether any Bindings are entirely missing.
You can cover that case by collecting all of the interesting Assemblies in your codebase:
Assembly.GetAssembly(typeof(Main.SampleClassFromMainAssembly))
Assembly.GetAssembly(typeof(Repos.SampleRepoClass))
Assembly.GetAssembly(typeof(Web.SampleController))
Assembly.GetAssembly(typeof(Other.SampleClassFromAnotherSeparateAssemblyInUse))
Then for each Assembly reflect over its classes to find the public Interfaces that it exposes, and ensure that each of those can be resolved by the kernel.
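That step might look like this (a sketch, assuming the assemblies above have been collected into a list called assembliesUnderTest):

// Collect every public interface exposed by the assemblies under test...
var interfaceTypes = assembliesUnderTest
    .SelectMany(assembly => assembly.GetTypes())
    .Where(type => type.IsInterface && type.IsPublic);

// ...and check that the kernel can resolve each one.
foreach (var interfaceType in interfaceTypes)
{
    Action resolutionAction = () => testKernel.Get(interfaceType);
    resolutionAction.ShouldNotThrow();
}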
You've got a couple of issues with this approach:
What if you miss an Assembly, or someone adds a new Assembly, but doesn't add it to the tests?
This isn't directly a problem, but it would mean your tests don't protect you as well as you think. I put in a safety-net test to assert that every Assembly that the Ninject Kernel knows about should be in this list of Assemblies to be tested. If someone adds a new Assembly, it will likely contain something that is provided by the kernel, so this safety-net test will fail, bringing the developers' attention to this test class.
What about classes that AREN'T provided by the kernel?
I found that mainly these classes were not provided for a clear reason - maybe they're actually provided by Factory classes, or maybe the class is badly used and is manually constructed. Either way these classes were a minority and could be listed as explicit exceptions ("loop over all classes; if classname = foo then ignore it.") relatively painlessly.
Overall, this is moderately hairy, and more fragile than I'd generally like tests to be.
But it works.
It might be something that you write before making the change, solely so that you can run it once before your change, once after the change to check that nothing's broken and then delete it?
I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose, but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class.
However, some books I've been reading while building this solution have spoken against this. The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
Martin Style
What I've normally seen
I immediately saw the advantage in Martin's diagram, that it allows the lower assemblies to be swapped out for another, given that it has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: If you want to swap out the assembly from an upper layer, you essentially "steal" the interface away that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice; once with a public class and once with an internal class. This way, the public class could merely wrap/decorate the internal class, like this:
Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that, in case you move the abstractions to a different library, and let both the consuming and the implementing assembly depend on that assembly, those assemblies don't have to depend on each other. This means that it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of its own) however is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
This is a somewhat opinion based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical and you have to weigh the practical value vs costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Will sealed classes help enforce your architecture instead?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way of how to provide/use interfaces. If your answers to the questions indicate additional value by further splitting up/protecting the code that may be fine, too. But you'd have to tell us more about your application domain and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures, but it is very hard to design them as simply as possible. Simpler is better most of the time.
When coming up with an architecture you want to consider those factors upfront otherwise they'll come and haunt you later in the form of technical debt.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes, that are bound to abstractions in your Composition Root, could probably be used in an explicit way somewhere else for some other reasons. I don't see any benefit from hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the fact that you think of DI and the Composition Root as if there must be a container behind them.
In fact, however, the infrastructure can be completely "container-agnostic" in the sense that you still have your dependencies injected but you don't think about "how". A Composition Root that uses a container is one choice; a Composition Root where you manually compose dependencies is just as good a choice. In other words, the Composition Root could be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not the idea of a Dependency Inversion container.
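A sketch of such a container-agnostic Composition Root (the Application root type is illustrative):

// The only place in the code base that knows concrete types; no container involved.
public static class CompositionRoot
{
    public static Application Compose()
    {
        IMechanism mechanism = new Mechanism(); // concrete type from the question
        return new Application(mechanism);      // hypothetical application root
    }
}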
A short tutorial of mine can possibly shed some light here
http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html