I am writing a websocket test application that will have a GUI to send various commands over the websocket. Rather than pack all the control code (message construction, formatting, control) into the callbacks for various controls, I am considering having each GUI element callback (e.g., onClick) send an event to a delegate that can handle it. That way the GUI would be separate from any control code. Is that a 'sane' design, or is there another 'best practice' for separating the two parts?
An example would be a TV Tuner control -- the user can enter a channel number via textbox, which will have no effect until they click the 'Tune' button. The onClick method could retrieve the channel number from the textbox, and send a doTune(channel) event to the delegate to make it happen.
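Roughly what I have in mind, sketched in C# with invented names (TunerController stands in for whatever the delegate/control object ends up being):
// GUI side: the callback only gathers input and forwards it.
// 'controller' would be a TunerController field on the form.
private void tuneButton_Click(object sender, EventArgs e)
{
    int channel;
    if (int.TryParse(channelTextBox.Text, out channel))
        controller.DoTune(channel);   // controller owns all websocket work
}

// Control side: builds and sends the message, knows nothing about the GUI.
public class TunerController
{
    public void DoTune(int channel)
    {
        // construct the tune command and send it over the websocket here
    }
}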
Thoughts/advice welcome.
Thank you,
bp
This is indeed a sane design. Personally, I wouldn't go for an event call; a regular call to a static 'SocketCommands' class will do.
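For example, something along these lines (the SocketCommands name comes from this answer; the rest is made up):
public static class SocketCommands
{
    public static void Tune(int channel)
    {
        // build the tune message and send it over the shared websocket here
    }
}

// In the form's click handler:
private void tuneButton_Click(object sender, EventArgs e)
{
    SocketCommands.Tune(int.Parse(channelTextBox.Text));
}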
That is indeed a very sensible design - what you're doing is promoting a good separation of concerns between the presentation layer (UI) and the business layer (transaction scripts, domain services etc.).
So to answer your question, yes, it is a sane design :)
With regards to thoughts/advice, that would be a topic for programmers.stackexchange.com rather than here.
I'm developing a multi-tenant n-tier web application using ASP.NET MVC 5.
In my service layer I am defining custom events for every important action and raising these events once these actions are executed. For example
public event EventHandler<Entity> EntityCreated;
public void Create(Entity item) {
    Save(item);
    // ......
    EntityCreated?.Invoke(this, item);
}
I intend on hooking up business rules and notifications to these events. The main reason I want to use events is decoupling of the logic and easy pluggability of more event handlers without modifying my service layer.
Question:
Does it make sense to use events and delegates in ASP.NET?
Most examples I find online are for WinForms or WPF. I get the advantage when it comes to multithreaded applications. Also, in those cases the events are defined once per form and are active for the lifetime of the form.
But in my case the events will be per HTTP request. So is defining these events an overhead?
As others pointed out, pub/sub or an event bus is one solution. Another is something like what you are trying to do here, but made more formal.
Let's take a specific example of creating a customer. You want to send a welcome email when a new customer is created in the application. The domain should only be concerned with creating the customer and saving it in the db, not all the other details such as sending emails. So you add a CustomerCreated event. These types of events are called Domain Events, as opposed to user interface events such as button clicks.
When the CustomerCreated event is raised, it should be handled somewhere in the code so that the appropriate work gets done. You can use an EventHandlerService as you mentioned (but this can soon become concerned with too many events) or use the pattern that Udi Dahan talks about. I have successfully used Udi's method with many DI containers, and the beauty of the pattern is that your classes remain SRP-compliant. You just have to implement a particular interface and add registration code at application bootstrap time using reflection.
If you need further help with this topic, let me know and I can share with you the code snippets to make it work.
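In the meantime, here is a hedged sketch of the shape it takes (the names are illustrative, not Udi Dahan's exact code):
using System;
using System.Collections.Generic;
using System.Linq;

public interface IDomainEvent { }

public interface IHandles<T> where T : IDomainEvent
{
    void Handle(T domainEvent);
}

public class CustomerCreated : IDomainEvent
{
    public string Email { get; set; }
}

public static class DomainEvents
{
    // Wired up once at bootstrap, typically to a DI container that was
    // populated by scanning assemblies for IHandles<T> implementations.
    public static Func<Type, IEnumerable<object>> ResolveHandlers = t => Enumerable.Empty<object>();

    public static void Raise<T>(T domainEvent) where T : IDomainEvent
    {
        foreach (var handler in ResolveHandlers(typeof(IHandles<T>)).Cast<IHandles<T>>())
            handler.Handle(domainEvent);
    }
}

// One class per concern keeps things SRP-compliant.
public class SendWelcomeEmail : IHandles<CustomerCreated>
{
    public void Handle(CustomerCreated e)
    {
        // send the welcome mail to e.Email here
    }
}
The domain code then just calls DomainEvents.Raise(new CustomerCreated { Email = customer.Email }); after saving, and stays unaware of emails or any other side effects.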
I have implemented Udi Dahan's approach as pointed out by @Imran, but with a few changes.
My events are being raised in a service layer, and using a static class for that didn't seem right. I have also added support for async/await.
Going down the events-and-delegates path did work out, but it just felt like an overhead to register the events per request.
I have blogged my solution here http://www.teknorix.com/event-driven-programming-in-asp-net
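Roughly the shape those changes take (an instance-based dispatcher resolved per request, with async handlers; the names are illustrative and not the code from the blog post):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public interface IDomainEvent { }

public interface IHandleAsync<T> where T : IDomainEvent
{
    Task HandleAsync(T domainEvent);
}

public class DomainEventDispatcher
{
    private readonly Func<Type, IEnumerable<object>> resolveHandlers;

    // resolveHandlers is typically backed by the request-scoped DI container.
    public DomainEventDispatcher(Func<Type, IEnumerable<object>> resolveHandlers)
    {
        this.resolveHandlers = resolveHandlers;
    }

    public async Task RaiseAsync<T>(T domainEvent) where T : IDomainEvent
    {
        foreach (var handler in resolveHandlers(typeof(IHandleAsync<T>)).Cast<IHandleAsync<T>>())
            await handler.HandleAsync(domainEvent);
    }
}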
I've run into an issue that must be quite common, but Google offers little insight.
You see, my project has three parts:
CommunicationClass.cs (Asynchronous Socket Class)
Form1.Designer.cs (Containing the objects of Form1)
Form1.cs (Main constructor and contains event handlers for objects)
Pretty basic setup.
However, I don't know where to put my communication class instance. The communication class sends/receives messages. So, my instance of ComClass in Form1 would use its Send() method in the event handler for the Enter key being pressed (while in a textBox).
That works fine. What doesn't work fine is when the ComClass RECEIVES a message. It can't call the non-static PrintMessage() method in Form1.cs, and PrintMessage() can't be made static because richTextBox1, where the messages are shown, is non-static.
I'm wondering if another feature of C# will help me access these and overcome my problem, but I'm too new to C# to know. I want to keep using the layout I have rather than switch to one like an example TCP chat client, where the form is created outside of Program.cs.
In C#, the standard paradigm for stuff like this is to use events. This ties in with the idea of the Observer Pattern in software design.
You are already using that for handling the key-press. The "trick" is to implement an event on your CommClass that the Form instance can subscribe to, in order to receive notification of incoming data.
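A minimal sketch of that event (CommClass, Form1, PrintMessage and richTextBox1 are the names from the question; the rest is assumed):
using System;
using System.Windows.Forms;

public class MessageReceivedEventArgs : EventArgs
{
    public string Message { get; private set; }
    public MessageReceivedEventArgs(string message) { Message = message; }
}

public class CommClass
{
    public event EventHandler<MessageReceivedEventArgs> MessageReceived;

    // Call this from wherever the socket receive callback lands.
    private void OnMessageReceived(string message)
    {
        var handler = MessageReceived;
        if (handler != null)
            handler(this, new MessageReceivedEventArgs(message));
    }
}

// In Form1.cs: subscribe once, and PrintMessage stays non-static.
public partial class Form1 : Form
{
    private readonly CommClass comm = new CommClass();

    public Form1()
    {
        InitializeComponent();
        comm.MessageReceived += (sender, e) =>
        {
            // Socket callbacks usually arrive on a worker thread,
            // so marshal back to the UI thread before touching richTextBox1.
            BeginInvoke((Action)(() => PrintMessage(e.Message)));
        };
    }

    private void PrintMessage(string text)
    {
        richTextBox1.AppendText(text + Environment.NewLine);
    }
}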
The typical .NET Forms implementation is a kind of "poor man's MVC", in which the Form class winds up acting as controller and view at the same time. Of course, doing so negates the main benefit of an MVC design, which is that the view is completely independent of the controller.
But you could (after learning more about the MVC design pattern) create a third "controller" class that ties together the view (your Form) and the model (your CommClass where the actual meat of the work is implemented).
If you want to go really cheesy, you could just pass your Form instance directly to the CommClass and have some special method that the CommClass knows to call when it receives data. But that's just doubling down on the failure to separate concerns between your classes, tying them even more tightly. Maybe okay for a quick-and-dirty proof of concept, but that's no way to write code that you have any interest in reusing some time in the future.
I have a flow chart that I'm implementing and it has 4 or 5 paths through it depending on user input and the results of some processing. Naturally, I don't want all this logic in my Windows Form; I just want to call a method on the class from the form. Is it bad design to have my business logic class reference System.Windows.Forms and show dialogs and MessageBoxes to get the input it needs to process and return a result?
Yes, this is bad design. Your class should offer a means to communicate with the form and get data back. Just create events and let the Form subscribe to them, getting the information needed to create the dialogs from a custom EventArgs class. After it gets the input, the Form pushes the same class back with the additional information via a second event.
This should resemble the MVP pattern.
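A simplified sketch of that shape; for brevity the form writes its answer straight back into the same EventArgs instead of raising a second event, and every name here is invented:
using System;

public class DecisionNeededEventArgs : EventArgs
{
    public string Question { get; set; }   // shown by the form in a dialog
    public bool Answer { get; set; }       // filled in by the form's handler
}

public class FlowRunner
{
    public event EventHandler<DecisionNeededEventArgs> DecisionNeeded;

    public string Run()
    {
        // ... processing reaches a branch that needs user input ...
        var args = new DecisionNeededEventArgs { Question = "Overwrite the existing record?" };
        var handler = DecisionNeeded;
        if (handler != null)
            handler(this, args);

        return args.Answer ? "overwritten" : "skipped";
    }
}
The form subscribes to DecisionNeeded, shows a MessageBox (or any other dialog), and sets Answer; FlowRunner never references System.Windows.Forms.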
Yes, this is a bad idea. You are effectively coupling your business logic very tightly to the presentation. You probably won't be able to re-use the business logic easily under other circumstances, and you won't be able to replace the UI without touching the business logic.
You need to have the UI and business logic layers communicate, and let the UI layer handle, well, the UI.
I think it's bad design. When you separate components of your application, a rule of thumb is to keep them separate enough so that you can run them on different computers.
Yes. Because it means your business object is simply not a business object.
Use the MVVM pattern and put the logic into the view model.
It's bad design, because you may need to run your business logic in a situation with a different UI or no UI at all (such as in a server, or a batch process). That's what separation of business logic and UI is all about. If possible, it's better to get all the necessary user input up-front in the UI class before handing things off to the business logic. However, if it's necessary to have the business logic prompt for more information, then it's better for the business logic API to take a callback method delegate, which it can call to request the further input. Then the UI layer can decide how best to request it from the user.
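For example, a sketch of the callback-delegate version (all names invented):
using System;
using System.Windows.Forms;

// Business logic: no reference to WinForms at all.
public class FlowProcessor
{
    public string Process(Func<string, bool> confirm)
    {
        // ... processing reaches a point where more input is needed ...
        return confirm("Ship the order partially?") ? "shipped partially" : "held for stock";
    }
}

// The UI layer decides how to ask; a batch job could pass a callback that reads a config flag instead.
public class MainForm : Form
{
    private void runButton_Click(object sender, EventArgs e)
    {
        var processor = new FlowProcessor();
        string result = processor.Process(prompt =>
            MessageBox.Show(prompt, "Question", MessageBoxButtons.YesNo) == DialogResult.Yes);
        MessageBox.Show(result);
    }
}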
I have written a piece of software that supports plugin architecture. On the main GUI is a TextBox that I use to update the user with the status of the processes.
When I load a plugin, is it bad practice to pass a reference to that TextBox through to the plugin so that it can update it from within? Is this too highly coupled? Would it be better practice to use events?
Thanks.
I would suggest that you create an interface for communications between the plugin and its host. That would have an UpdateStatus method, and the implementation would update the textbox.
If you really only have one thing to do (updating the status) then you could use a simple delegate... but it seems likely that you may need more operations over time.
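For instance, a minimal sketch of such an interface (UpdateStatus is the name from this answer; everything else is illustrative):
using System;
using System.Windows.Forms;

// Shared contract referenced by both the host and the plugins.
public interface IPluginHost
{
    void UpdateStatus(string message);
}

public interface IPlugin
{
    void Initialize(IPluginHost host);
}

// Host-side implementation: the only code that knows about the TextBox.
public class MainFormHost : IPluginHost
{
    private readonly TextBox statusBox;

    public MainFormHost(TextBox statusBox)
    {
        this.statusBox = statusBox;
    }

    public void UpdateStatus(string message)
    {
        // Marshal to the UI thread in case a plugin calls from a worker thread.
        statusBox.BeginInvoke((Action)(() => statusBox.AppendText(message + Environment.NewLine)));
    }
}
Plugins only ever see IPluginHost, so they stay decoupled from the form and its controls.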
I've been looking in to the Composite Application Library, and it's great, but I'm having trouble deciding when to use the EventAggregator... or rather - when NOT to use it.
Looking at the StockTraderRI example, I'm even more confused. They are using the EventAggregator in some cases, and "classic" events in other cases (for example, in the IAccountPositionService interface).
I've already decided to use it for communication with a heavy work task, that should run on a background thread. In this case the EventAggregator offers marshalling of threads behind the scenes, so I don't have to worry much about that. Besides that I like the decoupling this approach offers.
So my question is: When I've started using the EventAggregator in my application, why not use it for all custom events?
This is a good question. In Composite WPF (Prism) there are three possible ways to communicate between parts of your app. One way is to use Commanding, which is used only to pass UI-triggered actions down to the actual code implementing that action. Another way is to use Shared Services, where multiple parts hold a reference to the same service (a singleton) and handle various events on that service in the classical way. For disconnected and asynchronous communication, as you already stated, the best way is to use the Event Aggregator (which closely follows Martin Fowler's Event Aggregator pattern).
Now, when to use it and when not to:
Use it when you need to communicate between modules (for example, a Task module needs to be notified when a task is created by any other module; see the sketch after this list).
Use it when you have multiple possible receivers or sources of the same event. For example, you have a list of objects and you want to refresh it whenever an object of that type is saved or created. Instead of holding references to all open edit/create screens, you just subscribe to this specific event.
Don't use it when you only have to subscribe to normal events within the Model-View-Presenter triad. For example, if your Presenter listens to changes in the Model (say the Model implements INotifyPropertyChanged) and needs to react to such changes, it's better for your Presenter to handle the Model's PropertyChanged event directly instead of diverting those events through the Event Aggregator. So, if both the sender and receiver are in the same unit, there's no need to "broadcast" such events to the whole application.
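A hedged sketch of the cross-module case (Prism/CAL style; the payload and class names are made up, and depending on your Prism version the event base class is CompositePresentationEvent<T> or PubSubEvent<T>):
// using Microsoft.Practices.Prism.Events;   (exact namespace varies by Prism/CAL version)

// Shared infrastructure assembly, visible to all modules.
public class TaskCreatedEvent : CompositePresentationEvent<TaskDto> { }

public class TaskDto
{
    public string Title { get; set; }
}

// Publishing module: raises the event after saving.
public class TaskEditorViewModel
{
    private readonly IEventAggregator eventAggregator;

    public TaskEditorViewModel(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
    }

    public void Save(TaskDto task)
    {
        // ... persist the task ...
        eventAggregator.GetEvent<TaskCreatedEvent>().Publish(task);
    }
}

// Subscribing module: no reference to the publisher, and the handler
// is marshalled to the UI thread by the aggregator.
public class TaskListViewModel
{
    public TaskListViewModel(IEventAggregator eventAggregator)
    {
        eventAggregator.GetEvent<TaskCreatedEvent>().Subscribe(OnTaskCreated, ThreadOption.UIThread);
    }

    private void OnTaskCreated(TaskDto task)
    {
        // refresh the task list here
    }
}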
I hope this answers your question.