How to execute an action whenever MEF instantiates (exports) a new object? - C#

I'm building a modular app with MEF and have been trying to come up with a smart way to handle saving and loading state.
In certain situations (e.g. when the user clicks "save"), my shell will have to trigger some sort of save/load action, which plugins may want to be aware of so they can save and load their own state.
There are a lot of possible approaches, of course, including events and a global message bus; however, my preferred idea at the moment is based on two interfaces:
public interface ISaveAndLoadState
{
void SaveState(XmlWriter writer);
void LoadState(XmlReader reader);
}
public interface IStateManager
{
void Register(ISaveAndLoadState item);
void Save(Stream stream);
void Load(Stream stream);
}
Then plugins - their modules, view models or anything - could do the following:
[Export]
public class iAmAPluginViewModelOrModule : ISaveAndLoadState
{
[ImportingConstructor]
public iAmAPluginViewModelOrModule(IStateManager m)
{
m.Register(this);
}
public void SaveState(XmlWriter writer) { ..... }
public void LoadState(XmlReader reader) { ..... }
}
This should work reasonably well. However, I think it'd be even nicer if classes implementing ISaveAndLoadState didn't have to call IStateManager.Register() explicitly - but rather, when MEF instantiates a class implementing ISaveAndLoadState, it automatically registers it with the IStateManager.
So basically, I'd need an "event" that triggers whenever MEF instantiates any new object, so that I can do something like
public void OnMefHasCreatedInstance(object instance)
{
var _inst = instance as ISaveAndLoadState;
if(_inst != null)
Container.GetExportedValue<IStateManager>().Register(_inst);
}
Is that possible at all? Is there any way to listen to / be informed of when MEF has created a new instance?

This isn't quite answering your question but it is possibly another solution for you.
If you export each class with the ISaveAndLoadState interface, then you can use an ImportMany within your StateManager
[Export(typeof(ISaveAndLoadState))]
public class iAmAPluginViewModelOrModule : ISaveAndLoadState
public class StateManager : IStateManager
{
[ImportMany(typeof(ISaveAndLoadState))]
private List<ISaveAndLoadState> _saveAndLoadStates;
}
Then _saveAndLoadStates should be populated with all of your objects and you can just loop through them in the StateManager Load and Save methods.
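For example, Save and Load could then simply iterate over the imported parts (a rough sketch; how you frame each part's XML within the stream is up to you):
[Export(typeof(IStateManager))]
public class StateManager : IStateManager
{
    [ImportMany(typeof(ISaveAndLoadState))]
    private List<ISaveAndLoadState> _saveAndLoadStates;

    public void Register(ISaveAndLoadState item) { /* no longer needed with ImportMany */ }

    public void Save(Stream stream)
    {
        using (var writer = XmlWriter.Create(stream))
            foreach (var item in _saveAndLoadStates)
                item.SaveState(writer);   // each part writes its own element(s)
    }

    public void Load(Stream stream)
    {
        using (var reader = XmlReader.Create(stream))
            foreach (var item in _saveAndLoadStates)
                item.LoadState(reader);   // each part reads its own element(s) back
    }
}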


Akka.NET and MVVM

I am playing around with using Akka.NET in a new WPF .NET Framework application I am currently working on.
Mostly the process of using actors in your application seems pretty self-explanatory; however, when it comes to actually utilising the actor output at the application view level I have gotten a bit stuck.
Specifically, there appear to be two options for how you might handle receiving and processing events in your actor.
Create an actor with publicly exposed event handlers. So maybe something like this:
public class DoActionActor : ReceiveActor
{
public event EventHandler<EventArgs> MessageReceived;
private readonly ActorSelection _doActionRemoteActor;
public DoActionActor(ActorSelection doActionRemoteActor)
{
this._doActionRemoteActor = doActionRemoteActor ?? throw new ArgumentNullException("doActionRemoteActor must be provided.");
this.Receive<GetAllStuffRequest>(this.HandleGetAllStuffRequestReceived);
this.Receive<GetAllStuffResponse>(this.HandleGetAllStuffResponseReceived);
}
public static Props Props(ActorSystem actorSystem, string doActionRemoteActorPath)
{
ActorSelection doActionRemoteActor = actorSystem.ActorSelection(doActionRemoteActorPath);
return Akka.Actor.Props.Create(() => new DoActionActor(doActionRemoteActor));
}
private void HandleGetAllStuffResponseReceived(GetAllStuffResponse obj)
{
this.MessageReceived?.Invoke(this, new EventArgs());
}
private void HandleGetAllStuffRequestReceived(GetAllStuffRequest obj)
{
this._doActionRemoteActor.Tell(obj, this.Sender);
}
}
So basically you can then create your view and invoke calls by doing something like _doActionActor.Tell(new GetStuffRequest()); and then handle the output through the event handler. This works well but seems to break the 'actors everywhere' model that Akka.NET encourages, and I am not sure about the concurrency implications of such an approach.
The alternative appears to be to actually make it such that my ViewModels are actors themselves. So basically I have something that looks like this.
public abstract class BaseViewModel : ReceiveActor, IViewModel
{
public event PropertyChangedEventHandler PropertyChanged;
public abstract Props GetProps();
protected void RaisePropertyChanged(PropertyChangedEventArgs eventArgs)
{
this.PropertyChanged?.Invoke(this, eventArgs);
}
}
public class MainWindowViewModel : BaseViewModel
{
public MainWindowViewModel()
{
this.Receive<GetAllTablesResponse>(this.HandleGetAllTablesResponseReceived);
ActorManager.Instance.Table.Tell(new GetAllTablesRequest(1), this.Self);
}
public override Props GetProps()
{
return Akka.Actor.Props.Create(() => new MainWindowViewModel());
}
private void HandleGetAllTablesResponseReceived(GetAllTablesResponse obj)
{
}
}
This way I can handle actor events directly in actors themselves (which are actually my view models).
The problem I run into when trying to do this is configuring my IoC container (Castle Windsor) to correctly build the Akka.NET instances.
So I have some code to create the Akka.NET objects that looks like this:
container.Register(Classes.FromThisAssembly()
    .BasedOn<BaseViewModel>()
    .Configure(config => config.UsingFactoryMethod((kernel, componentModel, context) =>
    {
        var props = Props.Create(context.RequestedType);
        var result = ActorManager.Instance.System.ActorOf(props, context.RequestedType.Name);
        return result;
    })));
This works great at actually creating an instance of IActorRef BUT unfortunately I cannot cast the actor reference back to the actual object I need (in this case BaseViewModel).
So if I try to do this return (BaseViewModel)result; I get an invalid cast exception. Which obviously makes sense because I am getting an IActorRef object not a BaseViewModel.
So in conclusion I am hoping to get two questions answered.
What is the best way to deal with Akka.NET actors in MVVM applications, specifically when it comes to handling received messages and displaying the output?
Is there a way to configure my IoC system to both create an IActorRef instance and add it to the actor system, BUT return an instance of the actual concrete BaseViewModel implementation?
Below is the current solution that I am using in the hope someone might propose something a bit better.
Basically I have abandoned my attempt at making my view models actors and have settled for now on using an interface to communicate between the ViewModel and the Actor.
The current solution looks like this:
public class MainWindowViewModel : BaseViewModel, ITableResponseHandler
{
public void HandleResponse(IEnumerable<Entity> allEntities) { }
}
public interface ITableResponseHandler
{
void HandleResponse(IEnumerable<Entity> allEntities);
}
public class MyActor : ReceiveActor
{
    private readonly ITableResponseHandler _viewModel;
    public MyActor(ITableResponseHandler viewModel)
    {
        this._viewModel = viewModel;
        this.Receive<GetAllEntitiesResponse>(this.HandleGetAllEntitiesResponseReceived);
    }
    private void HandleGetAllEntitiesResponseReceived(GetAllEntitiesResponse obj)
    {
        this._viewModel.HandleResponse(obj.Result);
    }
}
While I don't feel this is ideal, it basically lets me avoid all the extra complexity of trying to make my view models themselves actors, while sufficiently decoupling the actor from the view.
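For reference, the wiring between the view model and the actor currently looks roughly like this (GetAllEntitiesRequest is assumed here as the request counterpart of the response above):
// during view/application start-up
var viewModel = new MainWindowViewModel();

// the actor receives the view model through its constructor and calls back
// via ITableResponseHandler when a response arrives
var actorRef = ActorManager.Instance.System.ActorOf(
    Props.Create(() => new MyActor(viewModel)), "myActor");

// the view model (e.g. a command handler) kicks off the request
actorRef.Tell(new GetAllEntitiesRequest());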
I hope someone else has faced this problem and might be able to provide some insight at a better solution for handling Akka.NET output in a MVVM application.

First experience of using interfaces in C#?

I have an interface:
interface ISqlite
{
void insert();
void update();
void delete();
void select();
}
And custom service class:
class SqliteService
{
public SQLiteDatabase driver;
public SqliteService() {
SqliteConnection(new SQLiteDatabase());
}
public void SqliteConnection(SQLiteDatabase driver)
{
this.driver = driver;
}
public void select(ISqlite select) {
select.select();
}
public void insert(ISqlite insert) {
insert.insert();
}
public void delete(ISqlite delete)
{
delete.delete();
}
}
And finally the Pacients class, which implements the ISqlite interface:
class Pacients: ISqlite
{
public List<ClientJson> pacients;
public Pacients() {
this.pacients = new List<ClientJson>();
}
public void add(ClientJson data) {
this.pacients.Add(data);
}
public void insert()
{
throw new NotImplementedException();
}
/* Other methods from the interface */
}
I try to use my code like this:
/* Create instance of service class */
SqliteService serviceSqlite = new SqliteService();
/* Create instance of class */
Pacients pacient = new Pacients();
pacient.add(client);
serviceSqlite.insert(pacient);
As you can see above, I pass the pacient object, which implements the ISqlite interface, to the service. This means the insert method will be called on the pacient object.
The problem is that I don't understand how to add data in this method using the external class SQLiteDatabase. How do I get access to this.driver of the service class from the pacient object?
Edit 1
I think I must move the instance of the connection, new SQLiteDatabase(), into the Pacients class, shouldn't I?
Generally speaking, I would favor a solution where the data objects themselves don't know anything about how they're stored, i.e. they have no knowledge of the class that communicates with the database. Many ORMs do just that.
Of course it might not be easy depending on the specifics of your situation... Try to examine what your methods on each object actually need; generally speaking they need the values of properties, and what column each property corresponds to, right? So any external class can do this if it knows these bits of information. You can specify the name of the column with a custom attribute on each property (and if the attribute isn't there, the column must have the same name as the property).
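For example, a minimal column attribute could look like this (the names here are made up for illustration; ClientJson is your data class):
[AttributeUsage(AttributeTargets.Property)]
public class ColumnAttribute : Attribute
{
    public string Name { get; private set; }
    public ColumnAttribute(string name) { Name = name; }
}

public class ClientJson
{
    [Column("first_name")]
    public string FirstName { get; set; }

    public int Age { get; set; }   // no attribute: column name equals property name
}

// the persistence code can then read the mapping via reflection:
foreach (var prop in typeof(ClientJson).GetProperties())
{
    var attr = (ColumnAttribute)Attribute.GetCustomAttribute(prop, typeof(ColumnAttribute));
    var columnName = attr != null ? attr.Name : prop.Name;
    // build the INSERT/SELECT for this column here
}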
And again, this is the most basic thing that ORMs (Object Relational Mappers) do, and in addition they also manage more complicated things like relationships between objects/tables. I'm sure there are many ORMs that work with SQLite. If you're OK with taking the time to learn the specifics of an ORM, that's what I would recommend using - although they're not silver bullets and will never satisfy all possible requirements, they are in my opinion perfect for automating the most common day-to-day things.
More to the point of the question, you can of course make it work like that if you pass the SQLiteDatabase object to the methods, or keep it in a private field and require it in the constructor or otherwise make sure that it's available when you need it; there's no other simple solution I can think of. And like you pointed out, it implies a certain degree of coupling.
You can change the signature of the interface's methods to pass an SQLiteDatabase object.
interface ISqlite
{
void insert(SQLiteDatabase driver);
void update(SQLiteDatabase driver);
void delete(SQLiteDatabase driver);
void select(SQLiteDatabase driver);
}
Example call from the service:
public void insert(ISqlite insert)
{
insert.insert(driver);
}
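Inside Pacients, the implementation then has the driver available (what you actually call on SQLiteDatabase depends on its own API, so the call below is only a placeholder):
public void insert(SQLiteDatabase driver)
{
    foreach (var pacient in this.pacients)
    {
        // use whatever insert/execute method SQLiteDatabase exposes, e.g.:
        // driver.Insert("pacients", pacient);   // hypothetical call
    }
}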
I think you can figure out the rest by yourself.

Some design-pattern suggestions needed

C#. I have a base class called FileProcessor:
class FileProcessor {
    public string Path { get { return m_sPath; } }
    public FileProcessor(string path)
    {
        m_sPath = path;
    }
    public virtual void Process() {}
    protected string m_sPath;
}
Now I'd like to create two other classes, ExcelProcessor & PDFProcessor:
class ExcelProcessor : FileProcessor
{
    public ExcelProcessor(string path) : base(path) { }
    public override void Process()
    {
        //do different stuff from PDFProcessor
    }
}
Same for PDFProcessor. A file is Excel if its Path ends with ".xlsx" and PDF if it ends with ".pdf". I could have a ProcessingManager class:
class ProcessingManager
{
    public void AddProcessJob(string path)
    {
        m_list.Add(path);
    }
    public ProcessingManager()
    {
        m_list = new BlockingQueue();
        m_thread = new Thread(ThreadFunc);
        m_thread.Start(this);
    }
    public static void ThreadFunc(object param) //this is a thread func
    {
        ProcessingManager _this = (ProcessingManager)param;
        while(some_condition) {
            string fPath = _this.m_list.Dequeue();
            if(fPath.EndsWith(".pdf")) {
                new PDFProcessor(fPath).Process();
            }
            if(fPath.EndsWith(".xlsx")) {
                new ExcelProcessor(fPath).Process();
            }
        }
    }
    protected BlockingQueue m_list;
    protected Thread m_thread;
}
I am trying to make this as modular as possible. Suppose, for example, that I'd like to add ".doc" processing: I'd have to add a check inside the manager and implement another DOCProcessor.
How could I do this without modifying ProcessingManager? I'm also not sure whether my manager design is good enough, so please share any suggestions you have.
I'm not sure I fully understand your problem, but I'll give it a shot.
You could be using the Factory pattern.
class FileProcessorFactory {
    public IFileProcessor getFileProcessor(string extension){
        switch (extension){
            case ".pdf":
                return new PdfFileProcessor();
            case ".xls":
                return new ExcelFileProcessor();
            default:
                throw new NotSupportedException("No processor registered for " + extension);
        }
    }
}
interface IFileProcessor{
    Object processFile(Stream inputFile);
}
class PdfFileProcessor : IFileProcessor {
    public Object processFile(Stream inputFile){
        // do things with your inputFile
        return null;
    }
}
class ExcelFileProcessor : IFileProcessor {
    public Object processFile(Stream inputFile){
        // do things with your inputFile
        return null;
    }
}
This should make sure you are using the FileProcessorFactory to get the correct processor, and the IFileProcessor will make sure you're not implementing different things for each processor.
and implement another DOCProcessor
Just add a new case to the FileProcessorFactory, and a new class which implements the interface IFileProcessor called DocFileProcessor.
You could decorate your processors with custom attributes like this:
[FileProcessorExtension(".doc")]
public class DocProcessor
{
}
Then your processing manager could find the processor whose FileProcessorExtension attribute matches your extension, and instantiate it via reflection.
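A rough sketch of that approach (the attribute and the reflection scan are mine; it assumes the processors take the file path in their constructor like FileProcessor does):
[AttributeUsage(AttributeTargets.Class)]
public class FileProcessorExtensionAttribute : Attribute
{
    public string Extension { get; private set; }
    public FileProcessorExtensionAttribute(string extension) { Extension = extension; }
}

// inside the processing manager, instead of the hard-coded extension checks:
var processorType = Assembly.GetExecutingAssembly().GetTypes()
    .FirstOrDefault(t =>
    {
        var attr = (FileProcessorExtensionAttribute)Attribute.GetCustomAttribute(
            t, typeof(FileProcessorExtensionAttribute));
        return attr != null && attr.Extension == Path.GetExtension(fPath);
    });

if (processorType != null)
    ((FileProcessor)Activator.CreateInstance(processorType, fPath)).Process();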
I agree with Highmastdon, his factory is a good solution. The core idea is not to have any FileProcessor implementation reference in your ProcessingManager anymore, only a reference to IFileProcessor interface, thus ProcessingManager does not know which type of file it deals with, it just knows it is an IFileProcessor which implements processFile(Stream inputFile).
In the long run, you'll just have to write new FileProcessor implementations, and voila. ProcessingManager does not change over time.
Use one more method called CanHandle for example:
abstract class FileProcessor
{
    public FileProcessor()
    {
    }
    public abstract void Process(string path);
    public abstract bool CanHandle(string path);
}
With excel file, you can implement CanHandle as below:
class ExcelProcessor : FileProcessor
{
public override void Process(string path)
{
}
public override bool CanHandle(string path)
{
return path.EndsWith(".xlsx");
}
}
In ProcessingManager, you need a list of processor which you can add in runtime by method RegisterProcessor:
class ProcessingManager
{
private List<FileProcessor> _processors;
public void RegisterProcessor(FileProcessor processor)
{
_processors.Add(processor);
}
....
So LINQ can be used here to find the appropriate processor:
while(some_condition)
{
    string fPath = _this.m_list.Dequeue();
    var processor = _processors.SingleOrDefault(p => p.CanHandle(fPath));
    if (processor != null)
        processor.Process(fPath);
}
If you want to add more processors, just define them and add them to ProcessingManager using the RegisterProcessor method. You don't have to change any code in other classes, not even a FileProcessorFactory like the one in #Highmastdon's answer.
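For example, at application start-up (or from a plugin) registration would just be the following (PDFProcessor and DocProcessor are assumed to be written the same way as ExcelProcessor above):
var manager = new ProcessingManager();
manager.RegisterProcessor(new ExcelProcessor());
manager.RegisterProcessor(new PDFProcessor());
// later, when ".doc" support is needed, no existing class changes:
manager.RegisterProcessor(new DocProcessor());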
You could use the Factory pattern (a good choice).
With the Factory pattern it is possible not to change the existing code (following the SOLID principles).
In the future, if support for a new Doc file type is to be added, you could use a dictionary instead of modifying the switch statement (see the sketch after the steps below).
//Some abstract code to get you started (it's 2 am... not a good time to give working code)
1. Define a new dictionary of {FileType, IFileProcessor}.
2. Add the available classes to the dictionary.
3. Tomorrow, if you come across a new requirement, simply do this:
Dictionary.Add(FileType.Docx, new DocFileProcessor());
4. TryParse an enum from the user-input value.
5. Get the enum instance and then get the object that does your work!
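A rough sketch of that dictionary-based idea (FileType, DocFileProcessor and the inputFile stream are assumptions based on the discussion above):
public enum FileType { Pdf, Xls, Docx }

static readonly Dictionary<FileType, IFileProcessor> Processors =
    new Dictionary<FileType, IFileProcessor>
    {
        { FileType.Pdf, new PdfFileProcessor() },
        { FileType.Xls, new ExcelFileProcessor() },
        // tomorrow's requirement is a single registration:
        { FileType.Docx, new DocFileProcessor() }
    };

public static void Process(string userInput, Stream inputFile)
{
    FileType type;
    if (Enum.TryParse(userInput, true, out type) && Processors.ContainsKey(type))
        Processors[type].processFile(inputFile);
}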
Otherwise, another option: it is better to go with MEF (the Managed Extensibility Framework)!
That way, you discover the classes dynamically.
For example if the support for .doc needs to be implemented you could use something like below:
[Export(typeof(IFileProcessor))]
class DocFileProcessor : IFileProcessor
{
    DocFileProcessor(FileType type) { }
    /// Implement the functionality for document type .docx in processFile() here
}
Advantages of this method:
Your DocFileProcessor class is identified automatically, since it implements (and exports) IFileProcessor.
The application is always extensible. (You import all the parts once, get the matching parts and execute. It's that simple!)
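Discovery with MEF could then look roughly like this (a minimal sketch using an AssemblyCatalog; adjust the catalog to however your plugins are deployed):
// compose once at start-up
var catalog = new AssemblyCatalog(typeof(ProcessingManager).Assembly);
var container = new CompositionContainer(catalog);

// every class exported as IFileProcessor (DocFileProcessor included)
// is discovered automatically
foreach (var processor in container.GetExportedValues<IFileProcessor>())
{
    // pick the matching processor and call processFile(...) here
}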

Callback interface contract

I have two .NET parties that need to be bound by a contract. Party1 and party2 need to be able to call some methods on each other (mostly making calls and reporting results back). I have a duplex contract in mind, but the parties are not using WCF.
Is there a design pattern for this?
Edit
The parties are part of the same application. I create the application (party1) and someone else creates a dll (party2) that I load dynamically. Now, both of us should be able to call methods on the other, so I want to create an interface contract between us. The intent is to find out whether there is a known pattern for doing that.
A common solution is to use some kind of pub/sub pattern. By doing so you can avoid circular dependencies.
Basically you create some kind of class which is used to subscribe to events (and publish them).
So both your classes do something like this (but with different events):
public class ClassA : IEventHandler<UserCreated>
{
IEventManager _eventManager;
public ClassA(IEventManager manager)
{
// I subscribe on this event (which is published by the other class)
manager.Subscribe<UserCreated>(this);
_eventManager = manager;
}
public void Handle(UserCreated theEvent)
{
//gets invoked when the event is published by the other class
}
private void SomeInternalMethod()
{
//some business logic
//and I publish this event
_eventManager.Publish(new EmailSent(someFields));
}
}
The event manager (simplified and not thread safe):
public class EventManager : IEventManager
{
    List<Subscriber> _subscribers = new List<Subscriber>();
    public void Subscribe<T>(IEventHandler<T> subscriber)
    {
        _subscribers.Add(new Subscriber { EventType = typeof(T), Handler = subscriber });
    }
    public void Publish<T>(T theEvent)
    {
        foreach (var wrapper in _subscribers.Where(x => x.EventType == typeof(T)))
        {
            ((IEventHandler<T>)wrapper.Handler).Handle(theEvent);
        }
    }
}
The small wrapper:
public class Subscriber
{
    public Type EventType;
    public object Handler;
}
Voila: the two classes are now loosely coupled from each other, while still being able to communicate with each other.
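Wiring it up is then just a matter of handing both parties the same event manager (ClassB here stands in for the assumed second class, the one that publishes UserCreated internally):
var events = new EventManager();

// each class subscribes to the events it cares about in its constructor
var a = new ClassA(events);   // subscribes to UserCreated
var b = new ClassB(events);   // assumed second class; publishes UserCreated at some point

// neither class holds a reference to the other, yet when ClassB publishes
// UserCreated, ClassA.Handle(UserCreated) gets invoked through the manager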
If you use an inversion of control container it gets easier, since you can simplify the event manager and just use the container (service location) to resolve all subscribers:
public class EventManager
{
IYourContainer _container;
public EventManager(IYourContainer container)
{
_container = container;
}
public void Publish<T>(T theEvent)
{
foreach (var subscriber in _container.ResolveAll<IEventHandler<T>>())
{
subscriber.Handle(theEvent);
}
}
}
I think you can use the following approach:
class Class1 : Interface1 { ... }
class Class2 : Interface2 { ... }
class Manager {
    public Manager(Interface1 managedPart1, Interface2 managedPart2) {
        // ... some logic to connect the two interfaces
    }
}
This approach reminds me of the Bridge pattern, but that is very subjective.

should new behavior be introduced via composition or some other means?

I chose to expose some new behavior using composition vs. injecting a new object into my consumer's code OR making the consumer provide their own implementation of this new behavior. Did I make a bad design decision?
I had new requirements that said that I needed to implement some special behavior in only certain circumstances. I chose to define a new interface, implement the new interface in a concrete class that was solely responsible for carrying out the behavior. Finally, in the concrete class that the consumer has a reference to, I implemented the new interface and delegate down to the class that does the work.
Here are the assumptions that I was working with...
I have an interface, named IFileManager, that allows implementors to manage various types of files
I have a factory that returns a concrete implementation of IFileManager
I have 3 implementations of IFileManager, these are (LocalFileManager, DfsFileManager, CloudFileManager)
I have a new requirements that says that I need to manage permissions for only the files being managed by the CloudFileManager, so the behavior for managing permissions is unique to the CloudFileManager
Here is the test that led me to the code that I wrote...
[TestFixture]
public class UserFilesRepositoryTest
{
public interface ITestDouble : IFileManager, IAclManager { }
[Test]
public void CreateResume_AddsPermission()
{
factory.Stub(it => it.GetManager("cloudManager")).Return(testDouble);
repository.CreateResume();
testDouble.AssertWasCalled(it => it.AddPermission());
}
[SetUp]
public void Setup()
{
testDouble = MockRepository.GenerateStub<ITestDouble>();
factory = MockRepository.GenerateStub<IFileManagerFactory>();
repository = new UserFileRepository(factory);
}
private IFileManagerFactory factory;
private UserFileRepository repository;
private ITestDouble testDouble;
}
Here is the shell of my design (this is just the basic outline, not the whole shebang)...
public class UserFileRepository
{
// this is the consumer of my code...
public void CreateResume()
{
var fileManager = factory.GetManager("cloudManager");
fileManager.AddFile();
// some would argue that I should inject a concrete implementation
// of IAclManager into the repository, I am not sure that I agree...
var permissionManager = fileManager as IAclManager;
if (permissionManager != null)
permissionManager.AddPermission();
else
throw new InvalidOperationException();
}
public UserFileRepository(IFileManagerFactory factory)
{
this.factory = factory;
}
private IFileManagerFactory factory;
}
public interface IFileManagerFactory
{
IFileManager GetManager(string managerName);
}
public class FileManagerFactory : IFileManagerFactory
{
public IFileManager GetManager(string managerName)
{
IFileManager fileManager = null;
switch (managerName) {
case "cloudManager":
fileManager = new CloudFileManager();
break;
// other managers would be created here...
}
return fileManager;
}
}
public interface IFileManager
{
void AddFile();
void DeleteFile();
}
public interface IAclManager
{
void AddPermission();
void RemovePermission();
}
/// <summary>
/// this class has "special" behavior
/// </summary>
public class CloudFileManager : IFileManager, IAclManager
{
public void AddFile() {
// implementation elided...
}
public void DeleteFile(){
// implementation elided...
}
public void AddPermission(){
// delegates to the real implementation
aclManager.AddPermission();
}
public void RemovePermission() {
// delegates to the real implementation
aclManager.RemovePermission();
}
public CloudFileManager(){
aclManager = new CloudAclManager();
}
private IAclManager aclManager;
}
public class LocalFileManager : IFileManager
{
public void AddFile() { }
public void DeleteFile() { }
}
public class DfsFileManager : IFileManager
{
public void AddFile() { }
public void DeleteFile() { }
}
/// <summary>
/// this class exists to manage permissions
/// for files in the cloud...
/// </summary>
public class CloudAclManager : IAclManager
{
public void AddPermission() {
// real implementation elided...
}
public void RemovePermission() {
// real implementation elided...
}
}
Your approach to adding your new behavior only saved you an initialization in the grand scheme of things, because you implemented CloudAclManager as separate from CloudFileManager anyway. I disagree with some things about how this integrates with your existing design (which isn't bad)...
What's Wrong With This?
You separated your file managers and made use of IFileManager, but you didn't do the same with IAclManager. While you have a factory to create various file managers, you automatically made CloudAclManager the IAclManager of CloudFileManager. So then, what's the point of having IAclManager?
To make matters worse, you initialize a new CloudAclManager inside of CloudFileManager every time you try to get its ACL manager - you just gave factory responsibilities to your CloudFileManager.
You have CloudFileManager implement IAclManager on top of having it as a property. You just moved the rule that permissions are unique to CloudFileManager into your model layer rather than your business rule layer. This also results in supporting the unnecessary potential of circular referencing between self and property.
Even if you wanted CloudFileManager to delegate the permission functionality to CloudAclManager, why mislead other classes into thinking that CloudFileManager handles its own permission sets? You just made your model class look like a facade.
Ok, So What Should I Do Instead?
First, you named your class CloudFileManager, and rightly so because its only responsibility is to manage files for a cloud. Now that permission sets must also be managed for a cloud, is it really right for a CloudFileManager to take on these new responsibilities? The answer is no.
This is not to say that you can't have code to manage files and code to manage permissions in the same class. However, it would then make more sense for the class to be named something more general like CloudFileSystemManager as its responsibilities would not be limited to just files or permissions.
Unfortunately, if you rename your class it would have a negative effect on those currently using your class. So how about still using composition, but not changing CloudFileManager?
My suggestion would be to do the following:
1. Keep your IAclManager and create IFileSystemManager
public interface IFileSystemManager {
    IAclManager AclManager { get; }
    IFileManager FileManager { get; }
}
or
public interface IFileSystemManager : IAclManager, IFileManager {
}
2. Create CloudFileSystemManager
public class CloudFileSystemManager : IFileSystemManager {
// implement IFileSystemManager
//
// How each manager is set is up to you (i.e IoC, DI, simple setters,
// constructor parameter, etc.).
//
// Either way you can just delegate to the actual IAclManager/IFileManager
// implementations.
}
Why?
This will allow you to use your new behavior with minimal impact to your current code base / functionality without affecting those who are using your original code. File management and permission management can also coincide (i.e. check permissions before attempting an actual file action). It's also extensible if you need any other permission set manager or any other type of managers for that matter.
EDIT - Including asker's clarification questions
If I create IFileSystemManager : IFileManager, IAclManager, would the repository still use the FileManagerFactory and return an instance of CloudFileSystemManager?
No, a FileManagerFactory should not return a FileSystemManager. Your shell would have to update to use the new interfaces/classes. Perhaps something like the following:
private IAclManagerFactory m_aclMgrFactory;
private IFileManagerFactory m_fileMgrFactory;
public UserFileRepository(IAclManagerFactory aclMgrFactory, IFileManagerFactory fileMgrFactory) {
this.m_aclMgrFactory = aclMgrFactory;
this.m_fileMgrFactory = fileMgrFactory;
}
public void CreateResume() {
// I understand that the determination of "cloudManager"
// is non-trivial, but that part doesn't change. For
// your example, say environment = "cloudManager"
var environment = GetEnvMgr( ... );
var fileManager = m_fileMgrFactory.GetManager(environment);
fileManager.AddFile();
// do permission stuff - see below
}
As for invoking permission stuff to be done, you have a couple options:
// can use another way of determining that a "cloud" environment
// requires permission stuff to be done
if(environment == "cloudManager") {
var permissionManager = m_aclMgrFactory.GetManager(environment);
permissionManager.AddPermission();
}
or
// assumes that if no factory exists for the environment that
// no permission stuff needs to be done
var permissionManager = m_aclMgrFactory.GetManager(environment);
if (permissionManager != null) {
permissionManager.AddPermission();
}
I think that composition is exactly the right means to do this kind of trick. But I think you should keep it simpler (KISS) and just add an IAclManager property to IFileManager, set it to null by default, and set the ACL-manager implementation for the cloud service there.
This has different upsides:
You can check whether permissions need to be handled by null-checking the permissionsManager property. This way, if no permission managing needs to be done (as with the local file system), you don't get exceptions popping up. Like this:
if (fileManager.permissionsManager != null)
fileManager.permissionsManager.addPermission();
When you then carry out the task (adding or deleting a file), you can check again whether there's a permissionsManager and whether the permission is given, and if not, throw an exception (you'll want to throw the exception when a permission to do an action is missing, not when a permission is missing in general and you're not going to add or delete files anyway).
You can later implement more IAclManagers for the other IFileManagers, the same way as you would now, when your customer changes the requirements next time.
Oh, and then you won't have such a confusing hierarchy when somebody else looks at the code ;-)
In general it looks good, but I do have a few suggestions. It seems that your CreateResume() method implementation demands an IFileManager that is also an IAclManager (or else it throws an exception).
If that is the case, you may want to consider adding an overload to your GetManager() method in which you can specify the interface that you require, and the factory can contain the code that throws an exception if it doesn't find the right file manager. To accomplish this you can add another interface that is empty but implements both IAclManager and IFileManager:
public interface IAclFileManager : IFileManager, IAclManager {}
And then add the following method to the factory:
public T GetManager<T>(string name){ /* implementation */}
GetManager will throw an exception if the manager with the given name doesn't implement T (you can also check whether it derives from or is of type T).
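A possible shape for that overload, sketched against the existing FileManagerFactory (the message text is just an example):
public T GetManager<T>(string managerName) where T : class
{
    var manager = GetManager(managerName) as T;   // reuse the existing lookup by name
    if (manager == null)
        throw new InvalidOperationException(
            string.Format("Manager '{0}' does not implement {1}.", managerName, typeof(T).Name));
    return manager;
}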
All that being said, if AddPermission doesn't take any parameters (not sure if you just did this for the post), why not just call AddPermission() from the CloudFileManager.AddFile() method and have it completely encapsulated from the user (removing the need for the new IAclManager interface)?
In any event, it doesn't seem like a good idea to call AddFile in the CreateResume() method and only then throw the exception (since you have now created a file without the correct permissions, which could be a security issue, and the consumer also got an exception, so he may assume that AddFile didn't succeed, as opposed to AddPermission).
Good luck!
