MissingMethodException after extracting a base interface - C#

I split an interface inside a NuGet package library into a simpler base interface (without one property from the original) and made the original derive from the new base interface.
Instantiation in consuming applications happens through the Managed Extensibility Framework (MEF), using property injection with [Import] attributes and implementations marked with [Export(typeof(IFooConfigurations))].
This shouldn't be a breaking change for applications using the old interface and implementation. But in some cases different libraries are loaded which use old interface versions and implementations. This results in MissingMethodExceptions at runtime, saying a method or property (get method) does not exist - such as the Configurations list property in the example.
Old:
public interface IFooConfigurations
{
    int ConfigurationsIdentifier { get; }
    IReadOnlyList<Configuration> Configurations { get; }
}
New:
public interface IBaseFooConfigurations
{
    // without the ConfigurationsIdentifier
    IReadOnlyList<Configuration> Configurations { get; }
}

public interface IFooConfigurations : IBaseFooConfigurations
{
    int ConfigurationsIdentifier { get; }

    // Configurations inherited from IBaseFooConfigurations
}
Implementation (not changed):
[Export(typeof(IFooConfigurations))]
public class FooConfigurations : IFooConfigurations
{
    // implementations of ConfigurationsIdentifier and Configurations
}
Usage (not changed), resolved through MEF:
public class FooApplicationClass
{
    [Import]
    private IFooConfigurations ConfigurationsOwner { get; set; }
}
It is quite hard to track this error and find possible causes, because it doesn't occur in the usual development environment.
Would it be a solution to replicate, in the new version of the IFooConfigurations interface, all the old properties and methods that are now in the base interface, marking them with the new keyword, while still deriving from the new IBaseFooConfigurations?
Possible solution?
public interface IFooConfigurations : IBaseFooConfigurations
{
    int ConfigurationsIdentifier { get; }
    new IReadOnlyList<Configuration> Configurations { get; }
}
EDIT: Keeping the members in the original interface and hiding the inherited ones with the "new" keyword seems to have solved the problem. Probably, older applications and libraries working with the original interface couldn't resolve the inherited members as part of the original interface. However, explicit implementations and mocks can potentially be troublesome with this. There is still testing to be done.
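For illustration, a minimal sketch (ExplicitFooConfigurations and the sample values are made up) of how implementations interact with the hidden member: an implicitly implemented property satisfies both IFooConfigurations.Configurations and the inherited IBaseFooConfigurations.Configurations, so existing classes keep compiling unchanged, whereas an explicit implementation now needs two members - which is the caveat for mocks mentioned above.
using System.Collections.Generic;
using System.ComponentModel.Composition;

[Export(typeof(IFooConfigurations))]
public class FooConfigurations : IFooConfigurations
{
    public int ConfigurationsIdentifier => 42;

    // One implicit member implements both the hiding ("new") property and the base property.
    public IReadOnlyList<Configuration> Configurations { get; } = new List<Configuration>();
}

public class ExplicitFooConfigurations : IFooConfigurations
{
    private readonly List<Configuration> _configurations = new List<Configuration>();

    int IFooConfigurations.ConfigurationsIdentifier => 42;

    // The hiding member on IFooConfigurations ...
    IReadOnlyList<Configuration> IFooConfigurations.Configurations => _configurations;

    // ... and the original member on IBaseFooConfigurations must be implemented separately.
    IReadOnlyList<Configuration> IBaseFooConfigurations.Configurations => _configurations;
}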

Interface members inherited from another interface are not equivalent, at the binary level, to members defined in the interface itself. Therefore, moving members to a base interface and inheriting from it is a breaking change. To stay backward compatible, the members must also be redeclared in the original interface, marked as "new" (in C#).
I confirmed this with a simple test program referencing different builds of the DLL: one with the original single interface, one with the split-up interfaces, and one with the split-up interfaces plus the duplicated "new" members. So it is not an issue with MEF.
Unfortunately, this problem only occurs at runtime, after a release of the NuGet package has already been built.
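A hedged sketch of the kind of test program described above: the snippet is compiled against the old assembly (where Configurations is declared directly on IFooConfigurations) and then run against the new assembly. The compiled call references IFooConfigurations.get_Configurations, which no longer exists on that interface after the split, so it fails at runtime unless the "new" member is present.
using System;

public static class CompatibilityTest
{
    public static void Main()
    {
        IFooConfigurations config = new FooConfigurations();

        // Compiled as a call to IFooConfigurations.get_Configurations; after the split
        // that getter lives only on IBaseFooConfigurations, so resolving it at runtime
        // throws MissingMethodException - unless the interface redeclares it with "new".
        Console.WriteLine(config.Configurations.Count);
    }
}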

Related

Allow subclass instantiation only on the assembly of the superclass in C#

Imagine the following scenario in a Xamarin solution:
Assembly A (PCL):
public abstract class MyBaseClass
{
    public MyBaseClass()
    {
        [...]
    }
    [...]
}
Assembly B (3rd Party Library):
public class SomeLibClass
{
    [...]
    public void MethodThatCreatesClass(Type classType)
    {
        [...]
        //I want to allow this to work
        var obj = Activator.CreateInstance(classType);
        [...]
    }
    [...]
}
Assembly C (Main project):
public class ClassImplA : MyBaseClass
{
    [...]
}
public class ClassImplB : MyBaseClass
{
    [...]
}
public class TheProblem
{
    public void AnExample()
    {
        [...]
        //I want to block these instantiations for this Assembly and any other with subclasses of MyBaseClass
        var obj1 = new ClassImplA();
        var obj2 = new ClassImplB();
        [...]
    }
}
How can I prevent the subclasses from being instantiated on their own assembly and allow them only on the super class and the 3rd Party Library (using Activator.CreateInstance)?
Attempt 1
I thought I could give the base class an internal constructor, but then I saw how silly that was, because the subclasses wouldn't be able to call that constructor and so they wouldn't be able to inherit from the superclass.
Attempt 2
I tried using Assembly.GetCallingAssembly in the base class, but that is not available in PCL projects. The workaround I found was to call it through reflection, but that also didn't work, since the result in the base class would be Assembly C in both cases (I think that's because whoever calls the constructor of MyBaseClass is in fact the default constructor of ClassImplA or ClassImplB in both cases).
Any other idea of how to do this? Or am I missing something here?
Update
The idea is to have the PCL assembly abstract the main project (and some other projects) from offline synchronization.
Given that, my PCL uses its own DB for caching, and what I want is to provide only a single instance for each record of the DB (so that when a property changes, all assigned variables will have that value; I can ensure that because no one in the main project will be able to create those classes, and they will be provided to the variables by a manager class which handles the single instantiations).
Since I'm using SQLite-net for that, and since it requires each instance to have an empty constructor, I need a way to only allow the SQLite and PCL assemblies to create those subclasses declared in the main project(s) assembly(ies).
Update 2
I have no problem if the solution to this can be bypassed with Reflection, because my main focus is to prevent people from doing new ClassImplA in the main project by simple mistake. However, if possible, I would like stuff like JsonConvert.DeserializeObject<ClassImplA> to in fact fail with an exception.
I may be wrong but none of the access modifiers will allow you to express such constraints - they restrict what other entities can see, but once they see it, they can use it.
You may try to use the StackTrace class inside the base class's constructor to check who is calling it:
public class Base
{
    public Base()
    {
        // Prints the assembly of the immediate caller; for a derived class this will
        // be the assembly of the derived constructor.
        Console.WriteLine(
            new StackTrace()
                .GetFrame(1)
                .GetMethod()
                .DeclaringType
                .Assembly
                .FullName);
    }
}

public class Derived : Base
{
    public Derived() { }
}
With a bit of special-case handling it will probably work with the Activator class, but it isn't the best solution for obvious reasons (reflection, error-prone string/assembly handling).
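A hedged sketch of what that special-case handling might look like (the allowed assembly names are placeholders, and stack-based checks remain fragile: inlining, async code and PCL profile availability can all break them):
using System;
using System.Diagnostics;

public abstract class MyBaseClass
{
    private static readonly string[] AllowedAssemblies =
    {
        "AssemblyA",         // the PCL that owns the base class (placeholder name)
        "SomeThirdPartyLib"  // the library calling Activator.CreateInstance (placeholder name)
    };

    protected MyBaseClass()
    {
        foreach (var frame in new StackTrace().GetFrames())
        {
            var method = frame.GetMethod();
            var declaringType = method == null ? null : method.DeclaringType;

            // Skip our own constructor chain and the runtime/Activator frames.
            if (declaringType == null
                || typeof(MyBaseClass).IsAssignableFrom(declaringType)
                || declaringType.Assembly == typeof(Activator).Assembly)
                continue;

            var caller = declaringType.Assembly.GetName().Name;
            if (Array.IndexOf(AllowedAssemblies, caller) < 0)
                throw new InvalidOperationException(
                    GetType().Name + " must be created through the library or the manager class.");
            return;
        }
    }
}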
Or you may use some dependency that is required to do anything of substance, and that dependency can only be provided by your main assembly:
public interface ICritical
{
    // Required to do any real job
    IntPtr CriticalHandle { get; }
}

public class Base
{
    public Base(ICritical critical)
    {
        if (!(critical is MyOnlyTrueImplementation))
            throw new ArgumentException("Unsupported ICritical implementation.", nameof(critical));
    }
}

public class Derived : Base
{
    // Derived classes can't have a constructor without ICritical, and you can check
    // that you are getting your own ICritical implementation.
    public Derived(ICritical critical) : base(critical)
    { }
}
Well, other assemblies may provide their implementations of ICritical, but yours is the only one that will do any good.
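For instance (a sketch only - MyOnlyTrueImplementation's body and the EntityManager name are illustrative), the one accepted implementation can stay internal to your PCL, so only your own factory can hand out instances that pass the check:
internal sealed class MyOnlyTrueImplementation : ICritical
{
    public IntPtr CriticalHandle { get; } = new IntPtr(1);
}

public static class EntityManager
{
    private static readonly ICritical Critical = new MyOnlyTrueImplementation();

    // The only supported way to obtain instances outside this assembly.
    public static Derived CreateDerived()
    {
        return new Derived(Critical);
    }
}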
Don't try to prevent entity creation - make it impossible to use entities created in an improper way.
Assuming that you can control all classes that produce and consume such entities, you can make sure that only properly created entities can be used.
It can be a primitive entity-tracking mechanism, or even some dynamic proxy wrapping:
public class Context : IDisposable
{
    private readonly HashSet<Object> _entities = new HashSet<Object>();

    public TEntity Create<TEntity>()
    {
        // Only entities created through the third-party library (via this context) are tracked.
        var entity = (TEntity)ThirdPartyLib.Create(typeof(TEntity));
        _entities.Add(entity);
        return entity;
    }

    public void Save<TEntity>(TEntity entity)
    {
        if (!_entities.Contains(entity))
            throw new InvalidOperationException();
        // ... persist the entity
    }

    public void Dispose() { /* release whatever the context holds */ }
}
It won't help to prevent all errors, but any attempt to persist "illegal" entities will blow up in your face, clearly indicating that one is doing something wrong.
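A usage sketch under those assumptions (CustomerRecord is a made-up entity type):
using (var context = new Context())
{
    var record = context.Create<CustomerRecord>(); // tracked, safe to save
    context.Save(record);                          // fine

    var rogue = new CustomerRecord();              // created directly in the main project
    context.Save(rogue);                           // throws InvalidOperationException
}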
Just document it as a system particularity and leave it as it is.
One can't always create a non-leaky abstraction (actually one basically never can). And in this case it seems that solving this problem is either nontrivial, or bad for performance, or both at the same time.
So instead of brooding on those issues, we can just document that all entities should be created through the special classes. Directly instantiated objects are not guaranteed to work correctly with the rest of the system.
It may look bad, but take, for example, Entity Framework with its gotchas in Lazy-Loading, proxy objects, detached entities and so on. And that is a well-known mature library.
I don't argue that you shouldn't try something better, but that is still an option you can always resort to.

C# Domain Specific Implementations: Unity and Generics

I have created components containing domain-specific information in my application, e.g. ImportManager, ExportManager, etc.
I'd like each component to operate as an isolated unit, but I'm coming a little undone by my use of generics when combined with dependency injection (Unity).
I have the following base object defined in a library.
public class ImportManager : IImportManager
{
    [Dependency]
    public IImportSettings Settings { get; set; }
}
The idea here being that I define a base class that implements standard functionality.
I then create a client-specific implementation which changes the standard behaviour slightly. This class has its own implementation and settings implemented in a different assembly as follows:
public class CustomImportManager : ImportManager, ICustomImportManager
{
}
The difference with this implementation is that I'd like to load ICustomSettings into the CustomImportManager - not ISettings.
I could just register the dependency in my bootstrapper and it would load fine but then I would have to cast the settings object every time I use it in CustomImportManager.
Alternatively, I could define a generic parameter on IImportManager:
public interface IImportManager<TSettings> where TSettings : ISettings
{
    [Dependency]
    TSettings Settings { get; set; }
}
Unfortunately, this would require me to add the generic parameter to every class that exposes this interface as a property, leading to classes with masses of generic parameters.
In the example below, the facade could potentially consume 10+ components depending on its requirements, meaning I'd have to define a type parameter for every component - also making the facade itself very difficult to use.
public class Facade
{
    [Dependency]
    public IImportManager ImportManager { get; set; }
}
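To make the problem concrete, here is a sketch of what the facade would look like with the generic interface (IExportManager<T> and the type parameter names are illustrative):
public class Facade<TImportSettings, TExportSettings /*, ... one parameter per component */>
    where TImportSettings : ISettings
    where TExportSettings : ISettings
{
    [Dependency]
    public IImportManager<TImportSettings> ImportManager { get; set; }

    [Dependency]
    public IExportManager<TExportSettings> ExportManager { get; set; }
}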
Does anybody have any ideas on how I may get around this?
Thanks in advance

How to design a method doomed to change

I have the following abstract class for some plugin:
public abstract class BasePlugin
{
    public void SomeMethod() { /* default behaviour */ }
}
This base class is going to be inherited by several (maybe hundreds of) implementations.
I know for sure that later I am going to change the behaviour of SomeMethod "in a small way" (but still change it).
I would like the existing implementations of BasePlugin to continue behaving the same and the new ones to use the new feature.
Is there some pattern that allows me to do that?
NB: I have the lead on all the implementations, but I cannot check, for each of the hundreds of implementations, whether the new behaviour will be fine.
Many patterns would fit, but I'd say the template method pattern is a reasonable option, assuming you want to keep the main part of the default behaviour intact:
public abstract class BasePlugin
{
    public void SomeMethod()
    {
        // default code before/after one or many variations
        // to be provided by derived classes
        // ...
        Variation();
        // ...
    }

    // Nothing by default; parameters can be added as needed.
    public virtual void Variation() { }
}
You don't want to do that. If you are writing a plugin system you have another option. And that's to expose a number of contracts to the plugins. Each contract represents a type of feature in your application that the plugin can extend.
In your plugin base class you'll define a register method:
public abstract class PluginBase
{
    public abstract void Register(IFeatureRepository repos);
}
... which the plugins use to register their extensions:
public class TextProcessingFilter : PluginBase, ITextProcessor
{
    public override void Register(IFeatureRepository repos)
    {
        repos.Get<ITextEditor>().Subscribe(this);
    }

    void ITextProcessor.Process(TextEditorContext ctx)
    {
    }
}
The upside with that is that new features do not break backwards compatibility, which is really important if you are going to have a lot of plugins. Simply introduce new interfaces in new versions of the base plugin DLL.
I would go with the Strategy pattern (due to, among other things, "Prefer composition over inheritance" - especially for clients who might already be inheriting from another class):
public interface IPluginStrategy
{
    void SomeMethod();
}

public class OldPluginStrategy : IPluginStrategy
{
    public void SomeMethod()
    {
        // old plugin code
    }
}

public class NewPluginStrategy : IPluginStrategy
{
    public void SomeMethod()
    {
        // new plugin code, which might be using OldPluginStrategy
        // through inheritance or composition
    }
}
and for the clients:
public class Client
{
    public Client(IPluginStrategy pluginStrategy)
    {
        ...
    }

    // use pluginStrategy's SomeMethod
}
Here it is in action:
var oldClient = new Client(new OldPluginStrategy());
...
var newClient = new Client(new NewPluginStrategy());
You can control which clients use which strategy, so old clients can keep using the old plugin code while other (new?) clients get the new plugin.
You do need access to the clients' code, or at least a way to make sure they choose the strategy you want them to use.
A word of caution - if your different plugins are essentially doing the same thing, and you only want to keep different versions in order to avoid the risk of breaking existing clients, please consider the following:
This is symptomatic of bad design of your plugin - your clients should not be aware of your plugin internals.
This will most likely be extremely hard to maintain in the long run - What happens after 30 changes you've made to the plugin? Will you have 30 different versions of the plugin running in 30 different clients?
Program to an interface, not an implementation.
When using a common interface (Plugin) for all abstract skeletal implementation classes, you can add a new abstract class (BasePluginNew) for implementing new (default) behavior.
New implementations inherit from this new abstract class BasePluginNew.
Old implementations continue behaving the same.
Clients refer to the common Plugin interface and are independent of how it is implemented.
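A minimal sketch of that layout (the interface and class names beyond the answer are illustrative):
public interface IPlugin
{
    void SomeMethod();
}

// Existing skeleton - untouched, so current implementations keep the old behaviour.
public abstract class BasePlugin : IPlugin
{
    public virtual void SomeMethod() { /* old default behaviour */ }
}

// New skeleton - new implementations inherit from this one to get the new behaviour.
public abstract class BasePluginNew : IPlugin
{
    public virtual void SomeMethod() { /* new default behaviour */ }
}

public class ExistingImplementation : BasePlugin { }
public class NewImplementation : BasePluginNew { }

// Clients depend only on the common interface:
public class PluginClient
{
    private readonly IPlugin _plugin;
    public PluginClient(IPlugin plugin) { _plugin = plugin; }
}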
For further discussion see the GoF Design Patterns Memory (Design Principles / Interface Design) at http://w3sdesign.com.

Make sure that target inherit some interface for custom attribute

I need to create some custom attributes, to be used for my reflection functions.
Here is the use case, as I see it:
the user creates some class and marks it with my special attribute ([ImportantAttribute] for example)
then the user does something with functions from my library. Those functions find classes with [ImportantAttribute] and do something with them
The main problem is that the functions in my library expect that classes marked with [ImportantAttribute] implement my interface (IMyInterface for example).
Is there any way to let the user know, at compile time rather than at run time, if he marks his class with [ImportantAttribute] but forgets to implement IMyInterface? Some way to specify that this attribute is only for classes that implement IMyInterface?
Same with attributes for properties and fields.
Is there any way to let the user know, at compile time rather than at run time, if he marks his class with [ImportantAttribute] but forgets to implement IMyInterface?
Simple answer: no, this is not possible. Not at compile time. You can check this at runtime though, using reflection.
The best you can do with attributes at compile time (apart from some special system attributes such as Obsolete, which are handled directly by the compiler) is to specify their usage with the [AttributeUsage] attribute.
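A hedged sketch of such a runtime check (the validator class and method names are made up): scan an assembly for types marked with [ImportantAttribute] and fail fast if one of them does not implement IMyInterface.
using System;
using System.Reflection;

public static class AttributeValidator
{
    public static void EnsureMarkedTypesImplementInterface(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            if (type.GetCustomAttributes(typeof(ImportantAttribute), inherit: false).Length == 0)
                continue; // not marked, nothing to check

            if (!typeof(IMyInterface).IsAssignableFrom(type))
                throw new InvalidOperationException(
                    type.FullName + " is marked with [ImportantAttribute] but does not implement IMyInterface.");
        }
    }
}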
I've used the strategy you mention in a couple of the frameworks I've built with good success. One such example is for providing metadata to a plug-in infrastructure:
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public class PluginAttribute : Attribute
{
    public string DisplayName { get; set; }
    public string Description { get; set; }
    public string Version { get; set; }
}

public interface IPlug
{
    void Run(IWork work);
}

[Plugin(DisplayName = "Sample Plugin", Description = "Some Sample Plugin")]
public class SamplePlug : IPlug
{
    public void Run(IWork work) { ... }
}
Doing so allows me to figure out information about plug-ins without having to instantiate them and read metadata properties.
In my experience in doing so, the only way I've found to enforce that both requirements are met is to perform runtime checks and make sure it is bold and <blink>blinking</blink> in the documentation. It is far from optimal but it is the best that can be done (that I've found). Then again I'm sure there is a better way to go about handling this but so far this has been pretty solid for me.

Why should I not use AutoDual?

Up to now, I've always decorated my .NET classes that I want to use from VB6 with the [AutoDual] attribute. The point was to gain IntelliSense on .NET objects in the VB6 environment. However, the other day I googled AutoDual and the first answer was 'Do Not Use AutoDual'.
I've looked for a coherent explanation of why I shouldn't use it, but could not find one.
Can someone here explain it?
I found a reliable way to provide IntelliSense for .NET objects in VB6 while at the same time not breaking the interface. The key is to mark each public method/property in the interface with a dispatch ID (DispId). Then the class must implement this interface - in the manner below.
[Guid("BE5E0B60-F855-478E-9BE2-AA9FD945F177")]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface ICriteria
{
[DispId(1)]
int ID { get; set; }
[DispId(2)]
string RateCardName { get; set; }
[DispId(3)]
string ElectionType { get; set; }
}
[Guid("3023F3F0-204C-411F-86CB-E6730B5F186B")]
[ClassInterface(ClassInterfaceType.None)]
[ProgId("MyNameSpace.Criteria")]
public class Criteria : ICriteria
{
public int ID { get; set; }
public string RateCardName { get; set; }
public string ElectionType { get; set; }
}
What the dispatch ID gives you is the ability to move items around in the class, plus you can now add new things to the class without breaking binary compatibility.
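For illustration (the added Region property is made up), a later version of the interface can reorder members and append new ones without breaking existing DispId-bound VB6 clients, because each member keeps its explicit dispatch ID:
[Guid("BE5E0B60-F855-478E-9BE2-AA9FD945F177")]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface ICriteria
{
    [DispId(3)]
    string ElectionType { get; set; }   // moved up, but keeps DispId 3
    [DispId(1)]
    int ID { get; set; }
    [DispId(2)]
    string RateCardName { get; set; }
    [DispId(4)]
    string Region { get; set; }         // new member with a new DispId
}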
I think this sums it up:
Types that use a dual interface allow clients to bind to a specific interface layout. Any changes in a future version to the layout of the type or any base types will break COM clients that bind to the interface. By default, if the ClassInterfaceAttribute attribute is not specified, a dispatch-only interface is used.
http://msdn.microsoft.com/en-us/library/ms182205.aspx
It increases the possibility that changing something in the class with the AutoDual attribute will break someone else's code when the class is changed. It gives the consumer the ability to do something that will quite possibly cause them issues in the future.
The next option is ClassInterfaceType.AutoDual. This is the quick and dirty way to get early binding support as well (and make the methods show up in VB6 IntelliSense). But it's also easy to break compatibility, by changing the order of methods or adding new overloads. Avoid using AutoDual.
http://www.dotnetinterop.com/faq/?q=ClassInterface
I finally found the link that talks about what is going on with AutoDual and how it works:
http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/7fa723e4-f884-41dd-9405-1f68afc72597
The warning against AutoDual isn't the fact that dual interfaces are bad but the fact that it auto-generates the COM interface for you. That is bad. Each time the COM interface has to be regenerated you'll get a new GUID and potentially new members. If the GUID changes then you get a brand new interface/class as far as COM is concerned. For early binding you'd have to rebuild the clients each time the interface was regenerated. The preferred approach is to define the COM class interface explicitly with a GUID. Then all the early-binding clients can use the defined interface and not worry about it changing on them during development. That is why the recommended option is None, to tell the CLR not to auto-generate it for you. You can still implement the dual interface though if you need it.
