I am developing my first bot using the MS Bot Framework, and although I understand the basics, I am a bit clueless as to how to organize my code. For example, I am planning to have:
notifier
welcome prompt
very basic help response
I am using the Core template in Visual Studio, which comes with a Bots folder containing classes ending in Bot. Looking at some samples, it seemed to me that the bot-handling logic needs to sit here. So I decided to have three classes, all extending ActivityHandler, each doing one of the above tasks:
public class MyNotifierBot : ActivityHandler
{
    // Constructor and overrides
}

public class WelcomeBot : ActivityHandler
{
    // Constructor and overrides
}

public class ResponseBot : ActivityHandler
{
    // Constructor and overrides
}
The first problem is that if I register all three classes with services.AddTransient&lt;IBot, MyNotifierBot&gt;() and so on, I can only resolve the last registered bot in my controllers. Sure, I can inject the collection of IBot implementations into the controller and figure out the right one to use by inspecting types, but it just feels wrong.
My question is: is this pattern wrong, and should I instead have a single class which extends ActivityHandler and write my logic in separate services? Or is there a better approach to this?
Edit: After thinking about this, I am now wondering about the existence of the Bots folder in the first place. If I am not meant to create multiple ActivityHandler subclasses for doing different things, then what exactly is this structure for?
ActivityHandler implements IBot, so it can be thought of like a bot. Having multiple activity handlers would be like having multiple bots. Activity handlers are already designed to route different activity types to different code, so if routing is your concern then you only need one activity handler.
I presume your notifier is for proactive messaging. Rather than having a separate activity handler for it, what normally works is to have a separate endpoint, which is usually api/notify (as opposed to api/messages). You can still have a separate activity handler for that if you want, or not even use an activity handler for that case (like in the sample). Note that different channels may have special considerations for proactive messages, but that's outside the scope of your question.
Welcome messages are very easy with activity handlers. You can just override OnMembersAddedAsync in your one activity handler; there's no need for a whole separate activity handler. Welcome messages are also channel-specific because they rely on conversation update activities, and not every channel has a well-defined way to know when a conversation starts before the user says anything. There's a sample for this if you're using Web Chat.
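To make that concrete, here's a minimal single-handler sketch (the class name, trigger word, and message text are invented for illustration; the override signatures are the ones ActivityHandler exposes in Microsoft.Bot.Builder):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class MyBot : ActivityHandler
{
    // Runs for every incoming message activity - the "basic help response".
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        if (turnContext.Activity.Text?.Trim().ToLowerInvariant() == "help")
        {
            await turnContext.SendActivityAsync(
                MessageFactory.Text("Here's what I can do..."), cancellationToken);
        }
    }

    // Runs when members join the conversation (channel permitting) - the welcome prompt.
    protected override async Task OnMembersAddedAsync(
        IList<ChannelAccount> membersAdded,
        ITurnContext<IConversationUpdateActivity> turnContext,
        CancellationToken cancellationToken)
    {
        foreach (var member in membersAdded)
        {
            // Don't welcome the bot itself.
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(
                    MessageFactory.Text("Welcome!"), cancellationToken);
            }
        }
    }
}
```

One class, two activity types: the base ActivityHandler does the routing for you.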
If you want multiple implementations of the same interface in your dependency injection then you'll need to identify them by the implementation rather than the interface, but keep in mind that you don't need to put them in dependency injection at all.
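If you do keep multiple handlers for some reason, registering them as their concrete types avoids the last-registration-wins problem. A sketch reusing the class names from the question (the controller name is invented):

```csharp
// In ConfigureServices: register each bot as its concrete type,
// not as IBot, so each one can be resolved unambiguously.
services.AddTransient<MyNotifierBot>();
services.AddTransient<WelcomeBot>();
services.AddTransient<ResponseBot>();

// In a controller, ask for exactly the bot you need:
public class NotifyController : ControllerBase
{
    private readonly MyNotifierBot _bot;

    public NotifyController(MyNotifierBot bot)
    {
        _bot = bot;
    }
}
```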
When implementing, let's say, a service to handle email messages, should I put the RabbitMQ subscriber code in a separate process from the main program, or should it be in the common codebase?
Are there any drawbacks to putting them together?
We are developing a microservice-oriented application with .NET Core 3.
We have to use a messaging bus so that many services can react to events published by other services. We use RabbitMQ.
We've already tried creating two separate applications communicating over HTTP: one listens for new messages and triggers webhooks on the other, and the second one does all the work.
I'm looking for advice on how to organize the code. Would a common codebase be easier to maintain in the future? Is the overhead of the extra network requests really significant?
We wrote a wrapper that we use in our microservices and applications to abstract away the implementation details of RabbitMQ. One of the key things it handles is tracking Subscribed Events and their associated Handlers. So, the application can define/inject a handler class and it automatically gets called whenever a matching message arrives.
For us, we treat the actual messaging implementation as a cross-cutting concern, so we package and use it (as a NuGet package) just like any other; it's a completely separate project from everything else. It's actually a public NuGet package too, so feel free to play with it if you want (it's not well documented, though).
Here's a quick example of how ours works so you can see the level of integration:
In Startup.cs
using MeshIntegrationBus.RabbitMQ;
public class Startup
{
    // Injected via the constructor (omitted here).
    public IConfiguration Configuration { get; }

    public RabbitMqConfig GetRabbitMqConfig()
    {
        ExchangeType exchangeType = (ExchangeType)Enum.Parse(typeof(ExchangeType),
            Configuration["RabbitMQ:IntegrationEventExchangeType"], true);

        var rabbitMqConfig = new RabbitMqConfig
        {
            ExchangeName = Configuration["RabbitMQ:IntegrationEventExchangeName"],
            ExchangeType = exchangeType,
            HostName = Configuration["RabbitMQ:HostName"],
            VirtualHost = Configuration["RabbitMQ:VirtualHost"],
            UserName = Configuration["RabbitMQ:UserName"],
            Password = Configuration["RabbitMQ:Password"],
            ClientProviderName = Configuration["RabbitMQ:ClientProviderName"],
            Port = Convert.ToInt32(Configuration["RabbitMQ:Port"])
        };
        return rabbitMqConfig;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.Add.... // All your stuff

        var rabbitConfig = GetRabbitMqConfig();

        // If this service will also publish events, add that service as well (scoped!)
        services.AddScoped<IMeshEventPublisher, RabbitMqEventPublisher>(s =>
            new RabbitMqEventPublisher(rabbitConfig));

        // Since this service will be a singleton, wait until the end to actually add it
        // to the service collection - otherwise BuildServiceProvider would duplicate it
        RabbitMqListener rabbitMqListener = new RabbitMqListener(rabbitConfig,
            services.BuildServiceProvider());

        var nodeEventSubs = Configuration.GetSection(
            "RabbitMQ:SubscribedEventIds:ServiceA").Get<string[]>();

        // Attach our list of events to a handler.
        // The handler must implement IMeshEventProcessor and has
        // access to the IServiceProvider, so it can use DI.
        rabbitMqListener.Subscribe<ServiceAEventProcessor>(nodeEventSubs);
        services.AddSingleton(rabbitMqListener);
    }
}
That's really all there is to it. You can add multiple handlers per subscribed event and/or reuse the same handler(s) for multiple events. It's proven pretty flexible and reliable - we've been using it for a couple years - and it's easy enough to change/update as needed and let individual services pull the changes when they want/need to.
I recently designed and developed a piece of software that communicates with other components via AMQP, implemented with RabbitMQ.
As a design decision I wrote a reusable service class which is called by other service classes. That happens in the service layer, or you could say the business layer.
But as you asked: if there are several applications and all of them need to implement a RabbitMQ client, I would create a library/module/package that can be easily imported into applications and configured.
I suppose you are talking about a situation where there is one publisher and many different applications (microservices) that process messages from a message queue. It's worth noting that we are talking about applications that have different business purposes, not about many instances of the same application.
I would recommend following these suggestions:
One queue always contains messages of one type. In other words, when you deserialize the JSON you should know exactly what class it should be.
In most cases, one microservice == one VS solution == one repository.
You will likely want to share the class used to deserialize the JSON between microservices. For this purpose you can create a NuGet package with its interface. This NuGet should contain DTO classes without any business logic. Usually we call this NuGet "*.Contracts".
The main feature of microservice architecture is simplicity of system modification, and these suggestions should help you keep the system simple on the one hand, and prevent the hell of a completely unstructured system (the 3rd suggestion) on the other.
One more note about the case where there is one publisher and one consumer. This can happen when you want to process data in the background (for instance, some process can take a lot of time but your website should respond to the client immediately; the consumer in this case is usually a webjob). For these cases we use one solution (repository) containing both publisher and consumer, for simplicity of development.
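To illustrate the "*.Contracts" suggestion, here's a minimal sketch (all names invented for the example) of what would live in such a shared package:

```csharp
using System;

// Lives in a shared "Orders.Contracts" NuGet package: plain DTOs only,
// no business logic, so the publisher and every consumer deserialize
// the queue's JSON into exactly the same shape.
namespace Orders.Contracts
{
    public class OrderCreated
    {
        public Guid OrderId { get; set; }
        public string CustomerEmail { get; set; }
        public DateTime CreatedAtUtc { get; set; }
    }
}
```

Each microservice references the package instead of redefining the message class, which keeps one queue bound to one message type.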
Do you think it might be reasonable to replace my service layer or service classes with MediatR? For example, my service classes look like this:
public interface IEntityService<TEntityDto> where TEntityDto : class, IDto
{
    Task<TEntityDto> CreateAsync(TEntityDto entityDto);
    Task<bool> DeleteAsync(int id);
    Task<IEnumerable<TEntityDto>> GetAllAsync(SieveModel sieveModel);
    Task<TEntityDto> GetByIdAsync(int id);
    Task<TEntityDto> UpdateAsync(int id, TEntityDto entityDto);
}
I want to achieve some sort of modular design so other dynamically loaded modules or plugins can write their own notification or command handlers for my main core application.
Currently, my application is not event-driven at all and there's no easy way for my dynamically loaded plugins to communicate.
I can either incorporate MediatR in my controllers, removing the service layer completely, or use it with my service layer, just publishing notifications so my plugins can handle them.
Currently, my logic is mostly CRUD, but there's a lot of custom logic going on before creating, updating, and deleting.
Possible replacement of my service would look like:
public class CommandHandler :
    IRequestHandler<CreateCommand, Response>,
    IRequestHandler<UpdateCommand, Response>,
    IRequestHandler<DeleteCommand, bool>
{
    private readonly DbContext _dbContext;

    public CommandHandler(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Task<Response> Handle(CreateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<Response> Handle(UpdateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<bool> Handle(DeleteCommand request, CancellationToken cancellationToken)
    {
        //...
    }
}
Would it be something wrong to do?
Basically, I'm struggling with what to choose for my logic flow:
Controller -> Service -> MediatR -> Notification handlers -> Repository
Controller -> MediatR -> Command handlers -> Repository
It seems like with MediatR I can't have a single model for Create, Update and Delete, so one way to re-use a model would be to derive requests like:
public class CreateRequest : MyDto, IRequest<MyDto> { }
public class UpdateRequest : MyDto, IRequest<MyDto> { }
or embed it in my command like:
public class CreateRequest : IRequest<MyDto>
{
    public MyDto MyDto { get; set; }
}
One advantage of MediatR is the ability to plug logic in and plug it out easily which seems like a nice fit for modular architecture but still, I'm a bit confused how to shape my architecture with it.
Update: I'm preserving the answer, but my position on this has changed somewhat as indicated in this blog post.
If you have a class, let's say an API controller, and it depends on
IRequestHandler<CreateCommand, Response>
What is the benefit of changing your class so that it depends on IMediator,
and instead of calling
return requestHandler.HandleRequest(request);
it calls
return mediator.Send(request);
The result is that instead of injecting the dependency we need, we inject a service locator which in turn resolves the dependency we need.
Quoting Mark Seemann's article,
In short, the problem with Service Locator is that it hides a class' dependencies, causing run-time errors instead of compile-time errors, as well as making the code more difficult to maintain because it becomes unclear when you would be introducing a breaking change.
It's not exactly the same as
var commandHandler = serviceLocator.Resolve<IRequestHandler<CreateCommand, Response>>();
return commandHandler.Handle(request);
because the mediator is limited to resolving command and query handlers, but it's close. It's still a single interface that provides access to lots of other ones.
It makes code harder to navigate
After we introduce IMediator, our class still indirectly depends on IRequestHandler<CreateCommand, Response>. The difference is that now we can't tell by looking at it. We can't navigate from the interface to its implementations. We might reason that we can still follow the dependencies if we know what to look for - that is, if we know the conventions of command handler interface names. But that's not nearly as helpful as a class actually declaring what it depends on.
Sure, we get the benefit of having interfaces wired up to concrete implementations without writing the code, but the savings are trivial and we'll likely lose whatever time we save because of the added (if minor) difficulty of navigating the code. And there are libraries which will register those dependencies for us anyway while still allowing us to inject abstraction we actually depend on.
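For instance, a convention-scanning registrar can wire up every closed handler interface automatically while classes still declare the specific handler they depend on. A sketch assuming the Scrutor package (CommandHandler stands in for any handler type in your assembly):

```csharp
using MediatR;
using Microsoft.Extensions.DependencyInjection;

// Scan the assembly containing our handlers and register every
// IRequestHandler<TRequest, TResponse> implementation against the
// closed interface(s) it implements. Consumers then inject
// IRequestHandler<CreateCommand, Response> directly - no mediator needed.
services.Scan(scan => scan
    .FromAssemblyOf<CommandHandler>()
    .AddClasses(classes => classes.AssignableTo(typeof(IRequestHandler<,>)))
    .AsImplementedInterfaces()
    .WithTransientLifetime());
```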
It's a weird, skewed way of depending on abstractions
It's been suggested that using a mediator assists with implementing the decorator pattern. But again, we already gain that ability by depending on an abstraction. We can use one implementation of an interface or another that adds a decorator. The point of depending on abstractions is that we can change such implementation details without changing the abstraction.
To elaborate: The point of depending on ISomethingSpecific is that we can change or replace the implementation without modifying the classes that depend on it. But if we say, "I want to change the implementation of ISomethingSpecific (by adding a decorator), so to accomplish that I'm going to change the classes that depend on ISomethingSpecific, which were working just fine, and make them depend on some generic, all-purpose interface", then something has gone wrong. There are numerous other ways to add decorators without modifying parts of our code that don't need to change.
Yes, using IMediator promotes loose coupling. But we already accomplished that by using well-defined abstractions. Adding layer upon layer of indirection doesn't multiply that benefit. If you've got enough abstraction that it's easy to write unit tests, you've got enough.
Vague dependencies make it easier to violate the Single Responsibility Principle
Suppose you have a class for placing orders, and it depends on ICommandHandler<PlaceOrderCommand>. What happens if someone tries to sneak in something that doesn't belong there, like a command to update user data? They'll have to add a new dependency, ICommandHandler<ChangeUserAddressCommand>. What happens if they want to keep piling more stuff into that class, violating the SRP? They'll have to keep adding more dependencies. That doesn't prevent them from doing it, but at least it shines a light on what's happening.
On the other hand, what if you can add all sorts of random stuff into a class without adding more dependencies? The class depends on an abstraction that can do anything. It can place orders, update addresses, request sales history, whatever, and all without adding a single new dependency. That's the same problem you get if you inject an IoC container into a class where it doesn't belong. It's a single class or interface that can be used to request all sorts of dependencies. It's a service locator.
IMediator doesn't cause SRP violations, and its absence won't prevent them. But explicit, specific dependencies guide us away from such violations.
The Mediator Pattern
Curiously, using MediatR doesn't usually have anything to do with the mediator pattern. The mediator pattern promotes loose coupling by having objects interact with a mediator rather than directly with each other. If we're already depending on an abstraction like an ICommandHandler then the tight coupling that the mediator pattern prevents doesn't exist in the first place.
The mediator pattern also encapsulates complex operations so that they appear simpler from the outside.
return mediator.Send(request);
is not simpler than
return requestHandler.HandleRequest(request);
The complexity of the two interactions is identical. Nothing is "mediated." Imagine that you're about to swipe your credit card at the grocery store, and then someone offers to simplify your complex interaction by leading you to another register where you do exactly the same thing.
What about CQRS?
A mediator is neutral when it comes to CQRS (unless we have two separate mediators, like ICommandMediator and IQueryMediator.) It seems counterproductive to separate our command handlers from our query handlers and then inject a single interface which in effect brings them back together and exposes all of our commands and queries in one place. At the very least it's hard to say that it helps us to keep them separate.
IMediator is used to invoke command and query handlers, but it has nothing to do with the extent to which they are segregated. If they were segregated before we added a mediator, they still are. If our query handler does something it shouldn't, the mediator will still happily invoke it.
I hope it doesn't sound like a mediator ran over my dog. But it's certainly not a silver bullet that sprinkles CQRS on our code or even necessarily improves our architecture.
We should ask, what are the benefits? What undesirable consequences could it have? Do I need that tool, or can I obtain the benefits I want without those consequences?
What I am asserting is that once we're already depending on abstractions, further steps to "hide" a class's dependencies usually add no value. They make it harder to read and understand, and erode our ability to detect and prevent other code smells.
This was partly answered here: MediatR when and why I should use it? vs 2017 webapi
The biggest benefit of using MediatR (or MicroBus, or any other mediator implementation) is isolating and/or segregating your logic (one of the reasons it's a popular way to implement CQRS) and providing a good foundation for the decorator pattern (similar to ASP.NET Core MVC filters). Since MediatR 3.0 there's built-in support for this (see Behaviors) instead of using IoC decorators.
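To illustrate what a Behavior looks like, here's a sketch of a cross-cutting logging behavior (the signature shown matches recent MediatR versions; older 3.x versions order the parameters differently):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Runs around every request handler, like an MVC filter:
// code before next() runs pre-handler, code after runs post-handler.
public class LoggingBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse> where TRequest : notnull
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        Console.WriteLine($"Handling {typeof(TRequest).Name}");
        var response = await next();
        Console.WriteLine($"Handled {typeof(TRequest).Name}");
        return response;
    }
}
```

It would be registered once, e.g. services.AddTransient(typeof(IPipelineBehavior&lt;,&gt;), typeof(LoggingBehavior&lt;,&gt;)), and then wraps every request that goes through the mediator.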
You can use the decorator pattern with services (classes like FooService) too, and you can use CQRS with services too (FooReadService, FooWriteService).
Other than that it's opinion-based; use whatever achieves your goal. The end result shouldn't make any difference except for code maintenance.
Additional reading:
Baking Round Shaped Apps with MediatR
(which compares custom mediator implementation with the one MediatR provides and porting process)
Is it good to handle multiple requests in a single handler?
I'm developing a multi-tenant n-tier web application using ASP.NET MVC 5.
In my service layer I am defining custom events for every important action and raising these events once those actions are executed. For example:
public event EventHandler<Entity> EntityCreated;

public void Create(Entity item)
{
    Save(item);
    ......
    EntityCreated?.Invoke(this, item);
}
I intend to hook up business rules and notifications to these events. The main reason I want to use events is decoupling of the logic and easy pluggability of more event handlers without modifying my service layer.
Question:
Does it make sense to use events and delegates in ASP.NET?
Most examples I find online are for WinForms or WPF. I get the advantage when it comes to multithreaded applications. Also, there the events are defined once per form and are active for the lifetime of the form.
But in my case the events will be per HTTP request. So is defining these events an overhead?
As others pointed out, pub/sub or an event bus is one solution. Another solution is something like what you are trying to do here, but made more formal.
Let's take a specific example of creating a customer. You want to send a welcome email when a new customer is created in the application. The domain should only be concerned with creating the customer and saving it in the db, not with all the other details such as sending emails. So you add a CustomerCreated event. These types of events are called Domain Events, as opposed to user-interface events such as button clicks.
When the CustomerCreated event is raised, it should be handled somewhere in the code so that it can do the necessary work. You can use an EventHandlerService as you mentioned (but this can soon become concerned with too many events) or use the pattern that Udi Dahan talks about. I have successfully used Udi's method with many DI containers, and the beauty of the pattern is that your classes remain SRP-compliant. You just have to implement a particular interface and add registration code at application bootstrap time using reflection.
If you need further help with this topic, let me know and I can share the code snippets to make it work.
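To sketch the shape of that pattern (the interface and dispatcher names here are illustrative, not Udi Dahan's exact code):

```csharp
using System;
using System.Collections.Generic;

// Marker interface for domain events.
public interface IDomainEvent { }

// Implemented once per event type a handler cares about; implementations
// are typically discovered via reflection at bootstrap and registered
// with the DI container.
public interface IDomainEventHandler<in TEvent> where TEvent : IDomainEvent
{
    void Handle(TEvent domainEvent);
}

public class CustomerCreated : IDomainEvent
{
    public string Email { get; set; }
}

// Sends the welcome email; the domain code that raises the event
// never references this class directly.
public class SendWelcomeEmail : IDomainEventHandler<CustomerCreated>
{
    public void Handle(CustomerCreated e)
    {
        Console.WriteLine($"Sending welcome email to {e.Email}");
    }
}

// A minimal dispatcher; in a real app, Raise would resolve handlers
// from the container rather than a static list (see the next answer
// for why the static class can be a poor fit in a service layer).
public static class DomainEvents
{
    private static readonly List<object> Handlers = new List<object>();

    public static void Register<TEvent>(IDomainEventHandler<TEvent> handler)
        where TEvent : IDomainEvent => Handlers.Add(handler);

    public static void Raise<TEvent>(TEvent domainEvent) where TEvent : IDomainEvent
    {
        foreach (var handler in Handlers)
        {
            if (handler is IDomainEventHandler<TEvent> typed)
            {
                typed.Handle(domainEvent);
            }
        }
    }
}
```

The domain code just calls DomainEvents.Raise(new CustomerCreated { Email = ... }) after saving, and new handlers plug in without touching the service layer.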
I have implemented Udi Dahan's approach as pointed out by @Imran, but with a few changes.
My events are raised in a service layer, and for that using a static class didn't seem right. I have also added support for async/await.
Going down the events-and-delegates path did work, but it just felt like an overhead to register the events per request.
I have blogged my solution here: http://www.teknorix.com/event-driven-programming-in-asp-net
The Semantic Logging Application Block (SLAB) is very appealing to me, and I wish to use it in a large, composite application I am writing. To use it, one writes a class derived from EventSource and includes one method in the class for each event to be logged as a typed event, as opposed to a simple string.
An application such as mine could have hundreds of such events. At one extreme of the effort-and-accuracy spectrum, I could have an EventSource-based class with just one event, "SomethingHappened", and log everything through that; at the other, I could have one event for every operation I perform.
It strikes me as a good idea to have EventSource derivatives for different functional areas. The app has little to no business logic itself; that is all provided by MEF plugin modules, so I could have event sources for bootstrapping, security, config changes etc., and any plugin module could define an event source for whatever events it wants to log.
Is this a good strategy, or are many EventSource-derived loggers an undesirable app feature?
From your question
... I wish to use it in a large, composite application I am writing...
I can deduce that large is meant in the context of a single developer. In that case you can derive from EventSource and add all the events you could possibly want to that one class.
It does not make much sense to create an extra EventSource-derived class for every part of your composite application, since it would pollute the EventSource registration database, where 2K providers are already registered. Besides that, it would make it hard to enable logging for your application if you need to remember 20 GUIDs to enable in order to follow your application logic through several layers.
A compromise would be to define in your EventSource class some generic event like
public void Violation(string subsystem, string message, string context)
where you have in your components a logger class for each component:
public static class NetworkLogger
{
    public static void Violation(string message)
    {
        GenericSource.Instance.Violation("Network", message, NetworkContext.Current);
    }
}

public static class DatabaseLogger
{
    public static void Violation(string message)
    {
        GenericSource.Instance.Violation("Database", message, DBContext.Current);
    }
}
That way you keep the loggers component-specific, and you can automatically add contextual information to the generic event when necessary.
Another approach is to use tracing in your application, where you trace method enter/leave, info, warning, and error, and your EventSource-derived class knows only these events. When you add the type name + method name to every trace entry, you can filter by namespace and group by class in WPA to see what you were doing. An example is shown in Semantic Tracing For .NET 4.0.
For a large application you can check out on your machine the file
C:\Windows\Microsoft.NET\Framework\v4.0.30319\CLR-ETW.man
You can open it with ecmangen.exe from the Windows SDK to get a nice GUI showing how the events are structured. .NET has only two event providers defined. The many events are grouped via keywords to enable specific aspects of .NET, e.g. GC, Loader, Exceptions, ....
This is important since, while enabling a provider, you can pass specific keywords to it to enable only some events of a large provider.
You can also check out Microsoft.Windows.ApplicationServer.Applications.45.man to find out how the Workflow team thinks about ETW events. That should help you find your own way. It is not so much about how exactly you structure your events, since the real test is finding production bugs at customer sites. The probability is high that you will need several iterations before you find the right balance of logging/tracing the relevant information that helps you diagnose failures in the field.
This is a bit of handwaving, as it's too long for a comment, but how about templating and then a factory service?
This then doesn't change, and you bind everything up on application start and after loading plugins.
interface IReportable
{
    void Report(object param);
}

interface IKernel
{
    T Get<T>();
}

class EventSource2 : EventSource
{
    private IKernel _factory;

    public EventSource2(IKernel factory)
    {
        _factory = factory;
    }

    public void Report<TReportable>(object param = null) where TReportable : IReportable
    {
        var reportable = _factory.Get<TReportable>();
        reportable.Report(param);
        //... Do what you want to do with EventSource
    }
}
Group events logically into several smaller providers (EventSource classes) rather than one large class.
This has the advantage that you can enable events only for the providers you care about in special cases.
Don't think of the EventSource as a listing of every possible log event you could perform in your application. Remember there are ways to filter your events by using Keywords and verbosity/event levels. You can even drill down further and use OpCodes and Tasks. Version 1.1 of SLAB supports ActivityID and RelatedActivityID. Version 2.0 (https://slab.codeplex.com/wikipage?title=SLAB2.0ReleaseNotes&version=2), released earlier this week, now supports process and thread IDs.
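As a concrete sketch of keyword and level filtering (the provider name and events here are invented for illustration), using the standard System.Diagnostics.Tracing conventions:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Keywords are bit flags; a listener can enable e.g. only Database events.
    // The nested class must be named "Keywords" for manifest generation.
    public static class Keywords
    {
        public const EventKeywords Database = (EventKeywords)1;
        public const EventKeywords Network  = (EventKeywords)2;
    }

    [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Database)]
    public void QueryExecuted(string sql) => WriteEvent(1, sql);

    [Event(2, Level = EventLevel.Verbose, Keywords = Keywords.Network)]
    public void PacketReceived(int bytes) => WriteEvent(2, bytes);
}
```

A listener that enables this provider with only Keywords.Database at Informational level would see event 1 but not event 2, which is exactly the "enable only some events of a large provider" idea described above.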
To give you an example, I have a very small EventSource-derived class with methods for StartLog, LogStatus, StopLogging, LogError, LogDebug and CreateDump; the first three use the same event level but different event IDs due to differences in formatting, and the remaining ones use different event levels so I don't log debug output or create dumps unless I dynamically enable them with a configuration-file setting. The point is, I can use the same methods from an ASP.NET site as well as class libraries or console apps. Don't forget this only defines the logging events; you still have to have a sink subscribe to the event, which gives you more possibilities. You could have debug messages go to a file and error messages go to a database and/or email. The possibilities are endless.
One last thing. I thought I had painted myself into a corner when, during testing, I found multiple assemblies were logging to the same file because they were using the same event methods (and therefore the same event ID, keyword, event level, etc.). I modified my code to pass the calling assembly name, which is now used in the filtering process when determining whether a log message should be written (from the config-file setting) and where (to a log file based on the assembly name). Hope this helps!
I'm new to Java, and I'm porting our Windows Phone 7 library to run on Android. Due to syntax similarities this has been very simple so far. Our library is basically an abstracted HTTP message queue that provides data persistence and integrity on mobile platforms. It only provides asynchronous methods, which is a design choice. On WP7 I make use of delegates to call the user-supplied callback when an async message has been processed and the server's response received.
To achieve the same thing on Android I've found two ways so far: a simple Java listener interface that contains OnSuccess and OnFailure methods that the user must implement, or the Android Handler class, which provides a message queue between threads (http://developer.android.com/reference/android/os/Handler.html).
I've gone with the Handler at this stage as, if I'm honest, it is the most similar to a C# delegate. It also seems like less work for a user of our library to implement. Here's an example of some user code making use of our library:
connection.GetMessage("http://somerestservice.com", GetCallback);
Handler GetCallback = new Handler() {
public void handleMessage(Message message){
CustomMessageClass customMessage = (CustomMessageClass)message.obj;
if(customMessage.status == Status.Delivered) {
// Process message here,
// it contains various information about the transaction
// as well as a tag that can contain a user object etc.
// It also contains the servers response as a string and as a byte array.
}
}
};
Using this the user can create as many different handlers as they'd like, called whatever they'd like, and pass them in as method parameters. Very similar to a delegate...
The reason I'm wondering if I should move to a listener interface is because the more exposure I gain to Java the more it seems that's just how it's done and it's how third parties using our library would expect it to be done.
It's essentially the same process, except that each time you want to do something different with the server response (e.g. you might be fetching different types of data from different endpoints), you're going to have to create a custom class that implements our interface, as well as implement any methods our interface has. Or of course you could have a single monolithic class that all server responses are funneled into, but have fun trying to figure out what to do with each individual response...
I may be a bit biased coming from C#, but a listener seems a bit convoluted and I like the handler implementation better. Do any Java developers have any thoughts/advice? It would be much appreciated.
Cheers!
The benefit of using the interface approach is loose coupling. This way, any class that implements your interface shouldn't be aware of (or be affected by) any thread management being done elsewhere and can handle the result object as appropriate within its scope.
BTW, I'm a big fan of AsyncTask. Have you tried using it?
I don't think what you have there compiles; you need to define the handler implementation before you use it.
But to the substance of your question: if you really do want a different handler implementation for each response, then the API you have seems fine.
I would use the listener pattern if all messages are handled in the same way, or if the different handling depends only on content in the message that could not be determined when making the getMessage call.
As an aside, in Java method and variable names typically begin with a lower-case letter; only class names begin with an upper-case letter.