Multistep composition using MEF in C#

I have a rather complex application which is initialized in multiple steps or phases. Some components are created during construction, some when the user context becomes available, and some when the front end becomes available. I want to use MEF to create an easily extensible initialization process.
My question now: is it possible to have MEF compose in multiple steps? Some imports can be satisfied right away, but others only after, e.g., the user context becomes available in a second composition.

If I understand you correctly, you want to compose in an initial step and then use the results from that step in the following ones. If this is what you mean by
...is it possible to have a MEF compose in multiple steps? ...
you can look into this thread on stackoverflow.com and continue with the MSDN documentation for CompositionBatch.
Otherwise, if your object tree can be initialized at any time, you can just call Container.GetExportedValue<T>() with the type you need in your process.
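A sketch of what two-phase composition with CompositionBatch and recomposition might look like; UserContext, Shell, and Bootstrapper are invented names for illustration, not part of the question:

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

[Export]
public class UserContext
{
    public string Name { get; set; } = "alice";
}

public class Shell
{
    // AllowRecomposition lets this import be satisfied by a later batch;
    // AllowDefault lets the first composition succeed while it is missing.
    [Import(AllowRecomposition = true, AllowDefault = true)]
    public UserContext Context { get; set; }
}

public static class Bootstrapper
{
    public static Shell Run()
    {
        var container = new CompositionContainer();
        var shell = new Shell();

        // Phase 1: compose what is available at construction time.
        container.ComposeParts(shell);          // Context is still null here

        // Phase 2: once the user context is available, add it in a new
        // batch; recomposition fills the pending import.
        var batch = new CompositionBatch();
        batch.AddPart(new UserContext());
        container.Compose(batch);

        return shell;                           // shell.Context is now set
    }
}
```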

Related

Design Pattern for C# and invoking different methods based on a string parameter

So I am currently about to develop a rather cool library in which I will pull data from an Excel worksheet or SharePoint list and then use WatiN and .NET with C# to execute various browser commands for automated UI testing. However, I am running into a big design issue in trying to encapsulate the changing requirements, since future commands or tests may need to be generated. Currently, I have about 5 unique actions that I need to perform based on a command parameter (stored as a string in the Excel or SharePoint list), but I would like to make the number of commands easily extensible as well as perform validation to ensure no bad commands slip through. Can anyone point me in the right direction for design patterns that might help me implement this efficiently and robustly, rather than just writing one giant switch statement in a HandleCommand() function? Thanks for helping a new programmer out! =D
Look at the command pattern to encapsulate the commands and use the factory pattern to create the instance of the command object based on its name. The factory can use reflection to determine which command to create based on the text.
I agree that Builder and Factory Method make sense here. You probably don't want to use the inheritance-based version of Factory Method as described in the "Design Patterns" book by Gamma and co. Just use a static factory method that takes the name of the Command class to instantiate.
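A minimal sketch combining the two answers above: the factory uses reflection to find ICommand implementations and validates the command string, so there is no giant switch. The command names (ClickCommand, NavigateCommand) are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ICommand
{
    void Execute(string argument);
}

public class ClickCommand : ICommand
{
    public void Execute(string argument) =>
        Console.WriteLine($"Clicking element '{argument}'");
}

public class NavigateCommand : ICommand
{
    public void Execute(string argument) =>
        Console.WriteLine($"Navigating to '{argument}'");
}

// Maps "Click" -> ClickCommand, "Navigate" -> NavigateCommand, etc.,
// by scanning the assembly for ICommand implementations once.
public static class CommandFactory
{
    private static readonly Dictionary<string, Type> Commands =
        typeof(ICommand).Assembly
            .GetTypes()
            .Where(t => typeof(ICommand).IsAssignableFrom(t)
                        && !t.IsInterface && !t.IsAbstract)
            .ToDictionary(
                t => t.Name.Replace("Command", ""),
                t => t,
                StringComparer.OrdinalIgnoreCase);

    public static ICommand Create(string name)
    {
        if (!Commands.TryGetValue(name, out var type))
            throw new ArgumentException($"Unknown command: {name}");
        return (ICommand)Activator.CreateInstance(type);
    }
}
```

The lookup is case-insensitive, so a sloppily typed cell value like "click" still resolves, while anything unknown fails fast with an exception instead of silently doing nothing.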

Command pattern and complex operations in C#

I am writing a program in C# that needs to support undo/redo. For this purpose, I settled on the Command pattern; tldr, every operation that manipulates the document state must be performed by a Command object that knows about the previous state of the document as well as the changes that need to be made, and is capable of doing/undoing itself.
It works fine for simple operations, but I now have an operation that affects several parts of the document at once. Likewise, the Command object must be smart enough to know all the old state it needs to preserve in case it needs to be undone.
The problem is that exposing all that state through public interfaces invites misuse if someone calls the interface directly, which can lead to state corruption. My instincts tell me the most OO way of doing this is to expose specialized Command classes: rather than allowing you to directly manipulate the state of the document, all you can do is ask the document to create a Command object which has access to its internal state and is guaranteed to know enough to properly support undo/redo.
Unfortunately, C# doesn't support the concept of friends, so I can't create a Command class that has access to document internals. Is there a way to expose the private members of the document class to another class, or is there some other way to do what I need without having to expose a lot of document internals?
It depends. If you are deploying a library, your Document could declare 'internal' methods to interact with its internal state; these methods would be used by your Command class. Internal members are accessible only within the assembly in which they are compiled.
Or you could nest a private class inside your Document, allowing it to access the Document's internal state, and expose a public interface for it; your Document would then create a command object hidden behind that interface.
First, C# has the internal keyword that declares "friend" accessibility, which allows public access from within the entire assembly.
Second, the "friend" accessibility can be extended to a second assembly with an assembly attribute, InternalsVisibleTo, so that you could create a second project for your commands, and yet the internals of the document will stay internal.
Alternatively, if your command objects are nested inside the document class, then they will have access to all its private members.
Finally, complex commands could also simply clone the document before making changes. That is an easy solution, albeit not very optimized.
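A small sketch of the nested-class option, assuming a hypothetical Document with a single private string field; the nested command reads and restores that field directly, and callers only ever see the ICommand interface:

```csharp
public class Document
{
    private string _text = "";          // private state, never exposed

    public string Text => _text;

    // Callers cannot touch _text; they can only ask for a command.
    public ICommand CreateAppendCommand(string suffix) =>
        new AppendCommand(this, suffix);

    public interface ICommand
    {
        void Do();
        void Undo();
    }

    // Nested private class: has full access to Document's private members.
    private class AppendCommand : ICommand
    {
        private readonly Document _doc;
        private readonly string _suffix;
        private string _before;         // snapshot for undo

        public AppendCommand(Document doc, string suffix)
        {
            _doc = doc;
            _suffix = suffix;
        }

        public void Do()
        {
            _before = _doc._text;       // legal: nested classes see privates
            _doc._text += _suffix;
        }

        public void Undo() => _doc._text = _before;
    }
}
```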
You could always access fields and properties, private or not, through reflection (Type.GetField(string, BindingFlags.NonPublic | BindingFlags.Instance) and friends).
Maybe with a custom attribute on the class (or the field/property) to automate the process of grabbing enough state for each Command?
Instead of having one command make changes at different places in the document, you could have two dummy commands that mark the start and end of a multi-step operation. Let us call them BeginCommand and EndCommand. First, you push the BeginCommand on the undo stack, and then you perform the different steps as single commands, each of them changing a single place in the document only. Of course, you push them on the undo stack as well. Finally, you push the EndCommand on the undo stack.
When undoing, you check whether the command popped from the undo stack is the EndCommand. If it is, you continue undoing until the BeginCommand is reached.
This turns the multi-step command into a macro-command delegating the work to other commands. This macro-command itself is not pushed on the undo stack.
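A minimal sketch of the marker approach, with invented names; a dictionary stands in for the document, and each single command changes exactly one entry:

```csharp
using System.Collections.Generic;

public interface IUndoableCommand
{
    void Undo();
}

// Dummy markers: they do nothing themselves.
public class BeginCommand : IUndoableCommand { public void Undo() { } }
public class EndCommand : IUndoableCommand { public void Undo() { } }

// A single-place change; the change is applied on construction to keep
// the sketch short.
public class SetValueCommand : IUndoableCommand
{
    private readonly IDictionary<string, int> _doc;
    private readonly string _key;
    private readonly int _oldValue;
    private readonly bool _existed;

    public SetValueCommand(IDictionary<string, int> doc, string key, int value)
    {
        _doc = doc;
        _key = key;
        _existed = doc.TryGetValue(key, out _oldValue);
        doc[key] = value;
    }

    public void Undo()
    {
        if (_existed) _doc[_key] = _oldValue;
        else _doc.Remove(_key);
    }
}

public class UndoStack
{
    private readonly Stack<IUndoableCommand> _stack = new Stack<IUndoableCommand>();

    public void Push(IUndoableCommand command) => _stack.Push(command);

    // Undo one step; if it is an EndCommand, keep undoing until the
    // matching BeginCommand is reached and discarded.
    public void Undo()
    {
        if (_stack.Count == 0) return;
        var command = _stack.Pop();
        if (command is EndCommand)
        {
            while (_stack.Count > 0 && !(_stack.Peek() is BeginCommand))
                _stack.Pop().Undo();
            if (_stack.Count > 0) _stack.Pop();   // drop the BeginCommand
        }
        else
        {
            command.Undo();
        }
    }
}
```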

Creating a Silverlight library with dependencies composed via MEF

I have a Silverlight 4 library L which has a dependency that is to be provided at run-time via a plugin P.
I am using a DeploymentCatalog along the lines of the example provided by MEF documentation and all is well: the XAP of the plugin P is correctly downloaded asynchronously and the import is satisfied.
However, I cannot control the details of the Silverlight application A that will be using library L, and I cannot exclude that A itself might want to use MEF: therefore it's possible that at some point A might issue a CompositionHost.Initialize(catalog) call for its own purposes, which I understand can only be invoked once.
Am I missing something here, or can partitioning the application across multiple XAPs only be achieved if one has complete control of the Silverlight application and libraries?
Stefano
CompositionHost.SatisfyImports can be called many times. CompositionHost.Initialize can only be called once. As a library, it is not a good idea to call that method because the application may do so. Since you need to create and use a DeploymentCatalog, it's probably better if you don't use CompositionHost at all in your library, since you want to avoid calling the Initialize method, which would be the way to hook the CompositionHost to the DeploymentCatalog.
You can create your own CompositionContainer hooked up to the DeploymentCatalog and call GetExports or SatisfyImports on the container you created. CompositionHost is pretty much just a wrapper around a static CompositionContainer.
It's not usually a good idea to tie yourself to a single dependency injection container in a library, instead you'd usually want to abstract that away using something like the CommonServiceLocator, which leaves the choice of IoC container a preference of whoever is consuming your library.
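A sketch of the per-library container idea suggested above; DeploymentCatalog exists only in Silverlight, so a TypeCatalog stands in for it here, and IPlugin, SamplePlugin, and LibraryEntryPoint are invented names:

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin
{
    string Name { get; }
}

[Export(typeof(IPlugin))]
public class SamplePlugin : IPlugin
{
    public string Name => "sample";
}

public class LibraryEntryPoint
{
    [ImportMany]
    public IPlugin[] Plugins { get; set; }

    public void Initialize()
    {
        // In Silverlight this would be a DeploymentCatalog pointed at the
        // plugin XAP; a TypeCatalog stands in for it in this sketch.
        var catalog = new TypeCatalog(typeof(SamplePlugin));
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // no CompositionHost involved
    }
}
```

Because the library owns this container, the hosting application remains free to call CompositionHost.Initialize for its own composition without any conflict.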
I only started with MEF in Silverlight a month ago, so I'm definitely not an authority.
The first thing I noticed is that CompositionHost.SatisfyImports has been replaced with CompositionInitializer.SatisfyImports.
Second, I could not find any reference to "SatisfyImports can only be invoked once".
My scenario is the following:
I have a BL xap which I use/link to from my application
The BL has some Imports that will be satisfied by calling SatisfyImports from the Application
The BL also has some imports that cannot/will not be resolved until a certain custom (third-party) module/xap is loaded (loaded on demand, that is). When the custom module becomes available (is loaded), I resolve the missing imports with an extra call to CompositionInitializer.SatisfyImports:
E.g:
If DomainSpecificModuleLogic Is Nothing Then
    'this is required to trigger recomposition and resolve imports to the ThirdPartyModule
    System.ComponentModel.Composition.CompositionInitializer.SatisfyImports(Me)
End If
So I have multiple calls to SatisfyImports (at different moments in time) and no problems due to this -> you do not need control over the whole application; just make sure that when someone accesses an object from your library that uses MEF, you have a call to SatisfyImports.
Note: my BL is a singleton, so for sure I am calling SatisfyImports on the same object multiple times.

MEF - use the same plugins several times

I've read MEF documentation on Codeplex and I'm trying to figure out how to accomplish my task:
I would like to build an application framework that has standard components that can be used to do some common work (like displaying a list of records from a database). Plugins should be reused many times, with a different configuration each time. (E.g. I have 5 windows in an application where I display record lists, each with a different type of entity and different columns; each one should have its own extension points, like for displaying record details, that should be satisfied with a different copy of another common plugin.)
Is MEF suitable for such a scenario? How should I define contracts? Should I use metadata? Can I define relationships using configuration files?
Yes, you can use MEF. MEF supports NonShared instantiation of objects using the PartCreationPolicy attribute:
[PartCreationPolicy(CreationPolicy.NonShared)]
More information on this here.
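To illustrate, the hypothetical sketch below composes two windows against the same catalog; because the part is NonShared, each import receives its own instance that can then be configured independently (RecordListPlugin and the window classes are invented names):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IRecordList { }

// NonShared: every import of IRecordList gets a fresh instance.
[Export(typeof(IRecordList))]
[PartCreationPolicy(CreationPolicy.NonShared)]
public class RecordListPlugin : IRecordList
{
    public string EntityType { get; set; }   // per-instance configuration
}

public class CustomersWindow
{
    [Import] public IRecordList List { get; set; }
}

public class OrdersWindow
{
    [Import] public IRecordList List { get; set; }
}

public static class Demo
{
    public static (CustomersWindow, OrdersWindow) Compose()
    {
        var container = new CompositionContainer(
            new TypeCatalog(typeof(RecordListPlugin)));
        var customers = new CustomersWindow();
        var orders = new OrdersWindow();
        container.ComposeParts(customers, orders);
        return (customers, orders);
    }
}
```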
Personally I'd do the wiring and configuration after importing the component on the target. However, I am not sure how generic you want your application to be; if you are making a 'framework' to build certain solutions in, I can imagine you want the configuration to be separate. You can go overboard and make an ISuperDuperGridConfiguration and import it on the constructor [ImportingConstructor] of your grid plugin. From within your target (where the grids get imported), set the location of the grid on the grid plugin (like main grid, side grid) and use the data stored in ISuperDuperGridConfiguration to further configure the grid plugin itself.
However, you can easily go 'too far' with MEF, depending on your goals. We have a completely MEF componentized UI for an application with customized needs for every single customer. Sometimes I have the urge to put single buttons from the ribbon in a MEF extension.
As you can see, depending on your needs, you can and sometimes will go too far.
I don't think you'd need metadata especially in your case, but maybe someone else can share a different opinion on this ;-).
I hope this answers your question; if not, please comment so I can highlight more aspects. All in all, using MEF has been very positive for us, and we are using it far beyond a 'hello world', so to say. So at least you have that!

How can I implement DI/IoC for a repeated and variable process without creating kernels on demand?

I know, this probably wins the award for longest and most confusing question title. Allow me to explain...
I am trying to refactor a long-running batch process which, to be blunt, is extremely ugly right now. Here are a few details on the hard specifications:
The process must run several times (i.e. in the tens of thousands);
Each instance of the process runs on a different "asset", each with its own unique settings;
A single process consists of several sub-processes, and each sub-process requires a different slice of the asset's settings in order to do its job. The groups are not mutually exclusive (i.e. some settings are required by multiple sub-processes).
The entire batch takes a very long time to complete; thus, a single process is quite time-sensitive and performance is actually of far greater concern than design purity.
What happens now, essentially, is that for a given asset/process instance, a "Controller" object reads all of the settings for that asset from the database, stuffs them all into a strongly-typed settings class, and starts each sub-process individually, feeding it whichever settings it needs.
The problems with this are manifold:
There are over 100 separate settings, which makes the settings class a ball of mud;
The controller has way too much responsibility and the potential for bugs is significant;
Some of the sub-processes are taking upwards of 10 constructor arguments.
So I want to move to a design based on dependency injection, by loosely grouping settings into different services and allowing sub-processes to subscribe to whichever services they need via constructor injection. This way I should be able to virtually eliminate the bloated controller and settings classes. I want to be able to write individual components like so:
public class SubProcess : IProcess
{
    public SubProcess(IFooSettings fooSettings, IBarSettings barSettings, ...)
    {
        // ...
    }
}
The problem, of course, is that the "settings" are specific to a given asset, so it's not as simple as just registering IFooSettings in the IoC. The injector somehow has to be aware of which IFooSettings it's supposed to use/create.
This seems to leave me with two equally unattractive options:
Write every single method of IFooSettings to take an asset ID, and pass around the asset ID to every single sub-process. This actually increases coupling, because right now the sub-processes don't need to know anything about the asset itself.
Create a new IoC container for each full process instance, passing the asset ID into the constructor of the container itself so it knows which asset to grab settings for. This feels like a major abuse of IoC containers, though, and I'm very worried about performance - I don't want to go and implement this and find out that it turned a 2-hour process into a 10-hour process.
Are there any other ways to achieve the design I'm hoping for? Some design pattern I'm not aware of? Some clever trick I can use to make the container inject the specific settings I need into each component, based on some kind of contextual information, without having to instantiate 50,000 containers?
Or, alternatively, is it actually OK to be instantiating this many containers over an extended period of time? Has anybody done it with positive results?
MAJOR EDIT
SettingsFactory: generates various Settings objects from the database on request.
SubProcessFactory: generates subprocesses on request from the controller.
Controller: iterates over assets, using the SettingsFactory and SubProcessFactory to create and launch the needed subprocesses.
Is this different than what you're doing? Not really from a certain angle, but very much from another. Separating these responsibilities into separate classes is important, as you've acknowledged. A DI container could be used to improve the flexibility of both Factory pieces. The implementation details are, in some ways, less critical than improving the design, because once the design is improved, implementation can vary more readily.
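A minimal sketch of that factory arrangement, with every type name invented for illustration and a dictionary standing in for the settings database:

```csharp
using System.Collections.Generic;

public interface IFooSettings { int Threshold { get; } }

public class FooSettings : IFooSettings
{
    public FooSettings(int threshold) => Threshold = threshold;
    public int Threshold { get; }
}

// Builds per-asset settings slices on request (a dictionary stands in
// for the database here).
public class SettingsFactory
{
    private readonly IReadOnlyDictionary<int, int> _thresholdsByAsset;

    public SettingsFactory(IReadOnlyDictionary<int, int> thresholdsByAsset) =>
        _thresholdsByAsset = thresholdsByAsset;

    public IFooSettings CreateFooSettings(int assetId) =>
        new FooSettings(_thresholdsByAsset[assetId]);
}

public interface IProcess { int Run(); }

public class SubProcess : IProcess
{
    private readonly IFooSettings _fooSettings;

    public SubProcess(IFooSettings fooSettings) => _fooSettings = fooSettings;

    public int Run() => _fooSettings.Threshold * 2;   // placeholder work
}

// Hands each sub-process the settings slice for the current asset, so
// neither the sub-process nor any container needs to know the asset ID,
// and no per-asset container is ever created.
public class SubProcessFactory
{
    private readonly SettingsFactory _settings;

    public SubProcessFactory(SettingsFactory settings) => _settings = settings;

    public IProcess Create(int assetId) =>
        new SubProcess(_settings.CreateFooSettings(assetId));
}
```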
