My question concerns .NET 4.6.2, WinForms and C#. Assume we have a complex system that uses several MDI WinForms instances with very complex UI structures (user controls, dock panels, tab controls, etc.). The request is to prepare a minimally invasive mechanism that extends this application with a layout customization feature: the user should be able to move/resize window content and save the result. I have already implemented the customization itself, but I need the layout to be loaded and saved automatically, without modifying hundreds of OnLoad() and Dispose() methods. I imagine it as follows:
All underlying customizable components must implement ILayoutSupport interface.
public interface ILayoutSupport
{
    void RestoreLayout();
    void SaveLayout();
}
public class MyUserControl : UserControl, ILayoutSupport { ... }
Additionally, there is a daemon service, LayoutController, that listens for the creation of every ILayoutSupport instance and:
calls RestoreLayout each time a new instance is created;
calls SaveLayout when the instance is about to be disposed.
My question is: is this approach architecturally valid?
I tried hooking the HandleCreated event in my LayoutController service, but maybe there is a better way to hook the creation/disposal of all instances of specific types? I've considered using C# attributes instead of interfaces and injecting some routines into my windows, but I don't know exactly how that should work.
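To make the idea more concrete, here is a rough sketch of the kind of hook I have in mind; the recursive Attach walk is hypothetical, not code from my application:

```csharp
using System;
using System.Windows.Forms;

// Hypothetical sketch: walk a form's control tree and wire up every
// ILayoutSupport control, including children added later.
public class LayoutController
{
    public void Attach(Control root)
    {
        var layout = root as ILayoutSupport;
        if (layout != null)
        {
            // Note: HandleDestroyed also fires when a handle is merely
            // recreated, so SaveLayout would have to be safe to call twice.
            root.HandleCreated += (s, e) => layout.RestoreLayout();
            root.HandleDestroyed += (s, e) => layout.SaveLayout();
        }

        // Also hook controls that get added after this initial walk.
        root.ControlAdded += (s, e) => Attach(e.Control);
        foreach (Control child in root.Controls)
            Attach(child);
    }
}
```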
Or maybe there is better solution for my problem?
Thanks for all comments.
I am developing my first bot using the MS Bot Framework and, although I understand the basics, I am a bit clueless as to how to organize my code. For example, I am planning to have:
notifier
welcome prompt
very basic help response
I am using the Core template in Visual Studio and it comes with a Bots folder containing classes ending with Bot. Looking at some samples, it seemed to me that the bot-handling logic needs to sit here. So, I decided to have 3 classes, all extending ActivityHandler, each doing one of the above tasks:
public class MyNotifierBot : ActivityHandler
{
    // Constructor and overrides
}

public class WelcomeBot : ActivityHandler
{
    // Constructor and overrides
}

public class ResponseBot : ActivityHandler
{
    // Constructor and overrides
}
The first problem is that if I register all 3 classes with services.AddTransient&lt;IBot, MyNotifierBot&gt;() etc., I can only get the last registered bot in my controllers. Sure, I could inject a collection of all the implementations into the controller and figure out the right one to use via reflection, but that just feels wrong.
My question is whether this pattern is wrong and I should instead have a single class that extends ActivityHandler and put my logic in separate services. Or is there a better approach?
Edit: After thinking about this, I am now wondering about the existence of the Bots folder in the first place. If I am not meant to create multiple ActivityHandler subclasses for different tasks, then what exactly is this structure for?
ActivityHandler implements IBot, so it can be thought of as a bot. Having multiple activity handlers would be like having multiple bots. Activity handlers are already designed to route different activity types to different code, so if routing is your concern then you only need one activity handler.
I presume your notifier is for proactive messaging. Rather than having a separate activity handler for it, what normally works is to have a separate endpoint, which is usually api/notify (as opposed to api/messages). You can still have a separate activity handler for that if you want, or not even use an activity handler for that case (like in the sample). Note that different channels may have special considerations for proactive messages, but that's outside the scope of your question.
Welcome messages are very easy with activity handlers. You can just use OnMembersAddedAsync in your one activity handler, and there's no need for a whole separate activity handler. Welcome messages are also channel-specific because they rely on conversation update activities, and not every channel has a well-defined way to know when a conversation starts before the user says anything. Here's a sample for if you're using Web Chat.
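For example, a minimal welcome in your single activity handler could look like this sketch (the greeting text and class name are placeholders):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class MyBot : ActivityHandler
{
    protected override async Task OnMembersAddedAsync(
        IList<ChannelAccount> membersAdded,
        ITurnContext<IConversationUpdateActivity> turnContext,
        CancellationToken cancellationToken)
    {
        foreach (var member in membersAdded)
        {
            // The bot itself is also "added" to the conversation; skip it.
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(
                    MessageFactory.Text("Welcome!"), cancellationToken);
            }
        }
    }
}
```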
If you want multiple implementations of the same interface in your dependency injection then you'll need to identify them by the implementation rather than the interface, but keep in mind that you don't need to put them in dependency injection at all.
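As a sketch (a fragment rather than a complete file, reusing the class names from the question), identifying them by implementation looks roughly like this:

```csharp
// In Startup.ConfigureServices: register each bot by its concrete type
// instead of by the shared IBot interface.
services.AddTransient<MyNotifierBot>();
services.AddTransient<WelcomeBot>();
services.AddTransient<ResponseBot>();

// In a controller, request the concrete class you need:
public class NotifyController : ControllerBase
{
    private readonly MyNotifierBot _bot;

    public NotifyController(MyNotifierBot bot)
    {
        _bot = bot;
    }
}
```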
Okay, so I want to build a program that acts as a core for plugin modules.
Another developer could create a plugin1.dll and add it to the "modules" folder to enhance the functionality of my core-application.
So let's say my core has, for example, these functionalities:
Logging
User Authentication
User Interface
As an example, we have our core application as mentioned above, and someone wants to add a plugin that lets a user see the current time and log it to a standard log.txt.
So he would create a class-library that has the functionality:
get the current time (functionality included in the .dll)
display the current time (functionality included in the .dll)
log the current time (functionality included in the core)
Now my problem is: I can easily invoke the functionality of the plugin from my core application using reflection, but how would I do it the other way around?
How can my plugin1.dll access and invoke the fully-set up logging-functionality of the core-program?
I hope you get my question. I want my plugin1.dll to be able to call, for example, the logging methods of my core class.
Thanks!
I suggest separating your problem into two aspects:
How can the plugins know which functions are available in the core application (at compile time)?
How can they actually invoke those functions (at run time)?
An answer to the first aspect is to create a separate interface DLL that defines one (or more) interfaces that the core application will provide. Note that this should contain ONLY the interface definitions, no implementation. A plugin developer can then reference that DLL and program against the interface (without needing a dependency on your complete core implementation).
An answer to the second aspect: you could require your plugins to expose a well-known entry point for initialization. In that method you provide them a reference to your core implementation as an argument, so that they can store that reference and invoke methods on the interface as needed.
A simple example could look similar to the following:
Interface dll:
public interface ICoreApplication
{
    // These are the methods that you want to provide to your plugins:
    void LogMessage(string msg);
    //void SomeOtherMethod(...)
    //...
}

public interface IPlugin
{
    // These are the methods that you expect from your plugins:
    void Init(ICoreApplication coreReference);
}
(BTW: the IPlugin interface could also contain additional methods if you already know the functionality you expect your plugins to provide. In that case, you would not have to invoke your plugins via reflection, but via that interface.)
Core application:
public class Core : ICoreApplication
{
    public void InitPlugins()
    {
        IPlugin somePlugin = ...; // retrieve via reflection
        somePlugin.Init(this);
    }
}
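The "retrieve via reflection" step could be sketched like this (the modules folder name comes from the question; error handling is omitted, and each plugin type is assumed to have a public parameterless constructor):

```csharp
using System;
using System.IO;
using System.Reflection;

public void InitPlugins()
{
    // Scan every assembly in the modules folder for IPlugin implementations.
    foreach (string file in Directory.GetFiles("modules", "*.dll"))
    {
        Assembly assembly = Assembly.LoadFrom(file);
        foreach (Type type in assembly.GetTypes())
        {
            if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
            {
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Init(this); // hand the plugin a reference to the core
            }
        }
    }
}
```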
Note that this is just a simple example, to illustrate the basic concept. There is much more to providing a robust plugin-architecture. You need to think about things like
Security (Can you trust your plugins? Can you trust the file system from where you load them?)
Error-handling (What happens if a plugin throws an exception? What if it wants to notify you about an "expected failure"?)
Threading (If you invoke your plugins on your main thread, they can block your whole application. If you invoke them on some other thread, you need to think about synchronization. What if a plugin creates a new thread and invokes your core application on that thread?)
etc...
I've run into an issue that must be quite common, but Google offers little insight.
You see, my project has 3 parts that I use:
CommunicationClass.cs (Asynchronous Socket Class)
Form1.Designer.cs (Containing the objects of Form1)
Form1.cs (Main constructor and contains event handlers for objects)
Pretty basic setup.
However, I don't know where to put my communication class instance. The communication class sends and receives messages. So, my instance of ComClass in Form1 would use its Send() method in the event handler for the Enter key being pressed (while in a textBox).
That works fine. What doesn't work fine is when the ComClass RECEIVES a message. It can't call the non-static method PrintMessage() in Form1.cs, and PrintMessage can't be made static because richTextBox1, where the messages are shown, is non-static.
I'm wondering if another component of C# will help me access these and overcome my problem, but I'm too new to C# to know. I want to keep using the layout I have rather than switch to one like an example TCP chat client, where the form is created outside of Program.cs.
In C#, the standard paradigm for stuff like this is to use events. This ties in with the idea of the Observer Pattern in software design.
You are already using that for handling the key-press. The "trick" is to implement an event on your CommClass that the Form instance can subscribe to, in order to receive notification of incoming data.
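A minimal sketch, assuming your class is called CommClass and the form shows messages in richTextBox1:

```csharp
using System;

// The communication class raises an event instead of knowing about the form.
public class CommClass
{
    public event EventHandler<string> MessageReceived;

    // Call this from your socket receive callback (likely on a worker thread).
    private void OnMessageReceived(string message)
    {
        var handler = MessageReceived;
        if (handler != null)
            handler(this, message);
    }
}

// In Form1's constructor, subscribe and marshal back to the UI thread
// before touching the control:
//
// comClass.MessageReceived += (s, msg) =>
//     this.BeginInvoke((Action)(() => richTextBox1.AppendText(msg)));
```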
The usual .NET Forms implementation is a kind of "poor man's MVC", in which the Form class winds up acting as controller and view at the same time. Of course, doing so negates the main benefit of an MVC design, which is that the view is completely independent of the controller.
But you could (after learning more about the MVC design pattern) create a third "controller" class that ties together the view (your Form) and the model (your CommClass where the actual meat of the work is implemented).
If you want to go really cheesy, you could just pass your Form instance directly to the CommClass and have some special method that the CommClass knows to call when it receives data. But that's just doubling down on the failure to separate concerns between your classes, tying them even more closely together. Maybe okay for a quick-and-dirty proof of concept, but it's no way to write code that you have any interest in reusing in the future.
I'm creating a WPF application using the MVVM design pattern. I've only recently started learning both, but have a solid grasp on how the basics work.
The application will have classes that are not UI related, such as a networking thread and message handler, and a class to save and load settings.
These elements of the program don't have a clear connection with the UI. How should they be created and initialized? These are "application wide" services that will not fit a particular ViewModel, and don't feel like a Model either.
Is there a correct way to do this? What should "own" and create these objects? (The ViewModel, or rather make them static and create themselves?)
Here is a diagram of the MVVM model, with a few adjustments to show what I am looking for: (Highlighted text and purple box)
When a "user has joined" message is received from the server, the service will raise an event that the model has subscribed to, notifying it of the new user. The ViewModel will see this change and add the user's name to the UI.
You can have services that are linked to a certain piece of UI functionality (only the main window uses them, for example), and there can also be services that are shared between many windows.
For the first scenario, I usually instantiate the services in my ViewModels.
For application wide services, I'd rather create the instances in App.xaml.cs and pass the reference to my viewmodel.
Here is an example from one of my projects.
private void Application_Startup(object sender, StartupEventArgs e)
{
    ConnectionManager connMan = new ConnectionManager();
    MainViewModel mvm = new MainViewModel(connMan);
    new MainWindow(mvm).ShowDialog();

    // TODO: save settings, etc. here
    this.Shutdown();
}
If your services do not rely on any state information, you could use static classes as well. And that is what I usually use for settings management, for example.
Edit: For the example you've posted, you have to ask yourself this question:
Who is responsible for creating and maintaining the network manager object?
If it is the ViewModel, it can host the object inside itself. If it is created by an external object, you would pass it to the ViewModel. There are pros and cons to either approach and I don't have enough information to suggest you one of them right now.
You can use a DI Container and register your services with it. It is then a matter of personal preferences if you use Dependency Injection or use the DI Container as a mere Service Locator.
The basic idea behind a service locator is to have an object that knows how to get hold of all the services an application might need. Simply speaking, a ServiceLocator is a singleton registry.
The basic idea of Dependency Injection is to have a separate object, an assembler, that populates a field in the client class with an appropriate implementation.
A good implementation is the Microsoft Unity container. You can use it as a DI container or as a Service Locator.
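For illustration, registration and resolution with Unity look roughly like this (the interface and class names are placeholders):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Register the application-wide service as a singleton.
container.RegisterType<INetworkService, NetworkService>(
    new ContainerControlledLifetimeManager());

// Dependency injection: the container supplies INetworkService to
// MainViewModel's constructor automatically.
var viewModel = container.Resolve<MainViewModel>();

// Service-locator style: any code holding the container asks directly.
var network = container.Resolve<INetworkService>();
```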
In this situation, try to keep a list (e.g. ObservableCollection&lt;T&gt;) in the ViewModel, and model-specific data types like Person or User in the Model.
Then create separate namespaces such as Workers, Helpers or Managers, containing static classes that are each responsible only for their specific area. For example: Workers/Sql/SqlWorker, Workers/Network/NetworkWorker.
Later in the ViewModel, call these methods in the appropriate commands.
I think this is a simple yet scalable solution, since the workers will not interfere with each other (communicating only via abstract interfaces, if at all), and they will not be coupled to the UI.
The Semantic Logging Application Block (SLAB) is very appealing to me, and I wish to use it in a large, composite application I am writing. To use it, one writes a class derived from 'EventSource', and includes one method in the class for each event they want to log as a typed event, vs. a simple string.
An application such as mine could have hundreds of such events. At one extreme of the effort/accuracy spectrum, I could have an EventSource-based class with just one event, "SomethingHappened", and log everything through that; at the other, I could have one event for every operation I perform.
It strikes me as a good idea to have EventSource derivatives for different functional areas. The app has little to no business logic itself; that is all provided by MEF plugin modules, so I could have event sources for bootstrapping, security, config changes, etc., and any plugin module could define an event source for whatever events it wants to log.
Is this a good strategy, or are many EventSource derived loggers an undesirable app feature?
From your question
... I wish to use it in a large, composite application I am writing...
I can deduce that large is meant in the context of a single developer. In that case you can derive from EventSource and add all events you possibly could want into that class.
It does not make much sense to create an extra EventSource-derived class for every part of your composite application, since it would pollute the EventSource registration database, where around 2K providers are already registered. Besides that, it would make it hard to enable logging for your application if you had to remember 20 GUIDs to enable in order to follow your application logic through several layers.
A compromise would be to define in your EventSource class some generic event like
public void WriteViolation(string subsystem, string message, string context)
and then have a logger class for each of your components:
public static class NetworkLogger
{
    public static void Violation(string message)
    {
        GenericSource.Instance.WriteViolation("Network", message, NetworkContext.Current);
    }
}

public static class DatabaseLogger
{
    public static void Violation(string message)
    {
        GenericSource.Instance.WriteViolation("Database", message, DBContext.Current);
    }
}
That way you keep the loggers component-specific, and you can, for example, automatically add contextual information to the generic event when necessary.
Another approach is to use tracing in your application, where you trace method enter/leave, info, warning and error, and your EventSource-derived class knows only these events. When you add the type name + method name to every trace entry, you can filter by namespace and group by class in WPA to see what you were doing. An example is shown in Semantic Tracing For .NET 4.0.
For a large application you can check out on your machine the file
C:\Windows\Microsoft.NET\Framework\v4.0.30319\CLR-ETW.man
You can open it with ecmangen.exe from Windows SDK to get a nice GUI to see how the events are structured. .NET has only two Event Providers defined. The many events are grouped via keywords to enable specific aspects of .NET e.g. GC, Loader, Exceptions, ....
This is important since, when you enable a provider, you can pass specific keywords to it to enable only some events of a large provider.
You can also check out Microsoft.Windows.ApplicationServer.Applications.45.man to find out how the Workflow team thinks about ETW events. That should help you find your own way. It is not so much about exactly how you structure your events, since the real test is finding production bugs at customer sites. The probability is high that you will need several iterations until you find the right balance of logged/traced information that helps you diagnose failures in the field.
This is a bit of handwaving, as it's too long for a comment. But how about templating and then a factory service?
This then doesn't change, and you bind everything up on application start, after loading the plugins.
interface IReportable
{
    void Report(object param);
}

interface IKernel
{
    T Get<T>();
}

class EventSource2 : EventSource
{
    private IKernel _factory;

    public EventSource2(IKernel factory)
    {
        _factory = factory;
    }

    public void Report<TReportable>(object param = null) where TReportable : IReportable
    {
        var reportable = _factory.Get<TReportable>();
        reportable.Report(param);
        //... Do what you want to do with EventSource
    }
}
Group events logically into several smaller providers (EventSource classes) rather than one large class.
This has the advantage that you can enable events only for the providers you care about in special cases.
Don't think of the EventSource as a listing of every possible log event you could perform in your application. Remember there are ways to filter your events by using Keywords and verbosity/event levels. You can drill down further and use OpCodes and Tasks. Version 1.1 of SLAB supports ActivityID and RelatedActivityID. Version 2.0 (https://slab.codeplex.com/wikipage?title=SLAB2.0ReleaseNotes&version=2), released earlier this week, now supports process and thread IDs.
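To illustrate, keywords are declared in a nested class inside the EventSource, and each event opts into one; the provider and event names below are made up:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Keywords are bit flags; a listener can enable any subset of them.
    public class Keywords
    {
        public const EventKeywords Database = (EventKeywords)0x1;
        public const EventKeywords Network  = (EventKeywords)0x2;
    }

    [Event(1, Level = EventLevel.Error, Keywords = Keywords.Database)]
    public void DatabaseError(string message) { WriteEvent(1, message); }

    [Event(2, Level = EventLevel.Verbose, Keywords = Keywords.Network)]
    public void NetworkTrace(string message) { WriteEvent(2, message); }
}
```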
To give you an example: I have a very small EventSource-derived class with methods for StartLog, LogStatus, StopLogging, LogError, LogDebug and CreateDump. The first three use the same event level but different event IDs due to differences in formatting; the rest use different event levels, so I don't log debug messages or create dumps unless I dynamically enable them with a configuration-file setting. The point is that I can use the same methods from an ASP.NET site as well as from class libraries or console apps. Don't forget this only defines the logging events; you still have to have a sink subscribe to them, which gives you more possibilities. You could have debug messages go to a file and error messages go to a database and/or email. The possibilities are endless.
One last thing. I thought I had painted myself into a corner when, during testing, I found multiple assemblies were logging to the same file because they were using the same event methods (and therefore the same event ID, keyword, event level, etc.). I modified my code to pass the calling assembly name, which is now used in the filtering process when determining whether a log message should be written (from the config file setting) and where (to a log file based on the assembly name). Hope this helps!