I have been learning about modular monolith project structure in this article: https://codewithmukesh.com/blog/modular-architecture-in-aspnet-core
Most of it makes sense to me but something I don't quite get is:
Cross Module communication can happen only via Interfaces/events/in-memory bus. Cross Module DB Writes should be kept minimal or avoided completely.
How exactly does that cross-module communication look?
Let's say I have 3 modules:
Product
User
Security
My security module registers an endpoint for DisableUser. It's this endpoint's job to update a User, and every Product associated with that user, with a disabled status.
How does the Security module call the User & Product update-status methods in a unit of work?
My understanding is that this pattern is intended to make it easier to extract a module into a microservice at a later date, so I guess having it as a task of some sort makes it easier to switch to a message broker, but I'm just not sure how this is supposed to look.
My example is obviously contrived; my main point is: how do modules communicate when reads/writes are involved?
Theory
There is a lot of confusion around terminology in questions like this, so let's distinguish two completely different architectures: monolith architecture and microservices architecture. The architecture that stands between the two is the modular monolith.
Monolith architecture commonly suffers from one big problem: high coupling and low cohesion, because there is no strong mechanism to prevent them. So programmers started thinking about new ways of structuring applications that make it much harder to fall into the high-coupling, low-cohesion trap.
Microservices architecture was one such solution (among the other problems it solves). The main point of microservices is separating services from each other to avoid high coupling (because setting up communication between services is not as easy as it is inside a monolith).
But you can't move from one architecture to a completely different one in "one click", so one way (not the only one) to get from a monolith to microservices is to build a modular monolith first (solving the high-coupling, low-cohesion problem while still inside the monolith) and then extract the modules into microservices.
Communication
To keep coupling low, we should focus on the communication between modules.
Let's work with the sample from your question.
Imagine we have this monolith architecture:
We can clearly see the high-coupling problem here. Let's say we want to make it more modular. To do that, we need to put something between the modules that separates them from each other while still letting them communicate, so the thing we must add is a bus.
Something like that:
P.S. It could also be a completely separate bus rather than an in-memory one (like Kafka or RabbitMQ).
Your main question was about how to make modules communicate; there are a few ways to do that.
Communication via interfaces (synchronous way)
Modules can call each other directly (synchronously) through interfaces. An interface is an abstraction, so we don't know what stands behind it: it could be a mock or a real working module. This means a module knows nothing about the other modules; it only knows about the interfaces it communicates with.
public interface ISecurityModule { }
public interface IUserModule { }
public interface IProfileModule { }

public class SecurityModule : ISecurityModule
{
    public SecurityModule(IUserModule userModule) { } // Does not know about the UserModule class directly
}

public class UserModule : IUserModule
{
    public UserModule(IProfileModule profileModule) { } // Does not know about the ProfileModule class directly
}

public class ProfileModule : IProfileModule
{
    public ProfileModule(ISecurityModule securityModule) { } // Does not know about the SecurityModule class directly
}
You can certainly communicate through interface method calls, but this solution doesn't do much to solve the high-coupling problem.
Communication via bus (asynchronous way)
A bus is a better way to build communication between modules because it forces you to use events/messages/commands for communication. You can't make direct method calls anymore.
To achieve this you should use some bus (a separate one or an in-memory library). I recommend checking other questions (like this one) to find the right way to build such communication for your architecture.
But be aware: with a bus, communication between modules becomes asynchronous, so you will have to rework the modules' inner behaviour to support that style of communication.
About your example with the DisableUser endpoint: the SecurityModule could simply publish a command/event/message on the bus saying that the user was disabled, and the other modules handle it using their own module logic.
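For instance, here is a minimal sketch of how that could look with an in-memory bus (the IEventBus interface and the event/handler names are illustrative, not from a specific library):

using System.Threading;
using System.Threading.Tasks;

public record UserDisabledEvent(string UserId);

public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default);
}

// Security module: owns the DisableUser endpoint and only publishes the fact.
public class DisableUserService
{
    private readonly IEventBus _bus;
    public DisableUserService(IEventBus bus) => _bus = bus;

    public async Task DisableAsync(string userId, CancellationToken ct)
    {
        // ... update the security module's own state here ...
        await _bus.PublishAsync(new UserDisabledEvent(userId), ct);
    }
}

// Product module: reacts with its own logic and its own transaction.
public class DisableProductsOnUserDisabled
{
    public Task HandleAsync(UserDisabledEvent @event, CancellationToken ct)
    {
        // Disable every product associated with @event.UserId in the Product module's own DB.
        return Task.CompletedTask;
    }
}

Note that each module updates only its own data inside its own transaction; there is no single unit of work spanning modules, which is exactly what the "cross-module DB writes should be avoided" rule is about.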
What's next
Next comes a microservices architecture: completely separated services communicating through a separate bus, each with its own database:
Example
Not long ago I completed a project built entirely with a microservices architecture, following a course.
Check it here if you need a good example of a microservices architecture.
Images were created using Excalidraw
At first glance, I think one approach is to use MediatR events, since the project already uses MediatR. It would work well and keep everything separate.
To define a MediatR event, check this.
You define your events in the shared core; for your example:
using MediatR;

public class UserDisabled : INotification
{
    public string UserId { get; set; }
}
From the User module you publish the event when the user gets disabled:
await mediator.Publish(new UserDisabled { UserId = "Your userId" });
And finally, declare event handlers in every module that needs to react to the event:
public class UserDisabledHandler : INotificationHandler<UserDisabled>
{
    public UserDisabledHandler()
    {
        // You can use dependency injection here
    }

    public Task Handle(UserDisabled notification, CancellationToken cancellationToken)
    {
        // React to the event here (e.g. disable related data owned by this module)
        return Task.CompletedTask;
    }
}
However, it is worth noting that this won't work as-is if you want to switch to actual microservices. I'm not very familiar with microservices, but I think you need some form of event bus, and that's where microservices become complicated.
There is information about that in this Microsoft book.
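If you want to keep that migration path open, one option (my sketch, not from the book) is to hide MediatR behind your own small bus abstraction, so a broker-backed implementation can replace the in-process one later without touching the modules. Events still need to implement INotification for MediatR to dispatch them:

using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Modules depend only on this abstraction, not on MediatR itself.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default);
}

// In-process implementation backed by MediatR; a RabbitMQ/Kafka-backed
// implementation could replace it later behind the same interface.
public class MediatorEventBus : IEventBus
{
    private readonly IMediator _mediator;
    public MediatorEventBus(IMediator mediator) => _mediator = mediator;

    public Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
        => _mediator.Publish(@event, ct); // binds to the Publish(object, ...) overload
}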
Related
Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of microservice calls)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get data, and we call them from the query handlers.
All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it is not composed from multiple services.
What should we call this part of our solution? Proxy or something? Any good example of thin controllers and microservice architecture?
We name it as API Gateways. But it is not composed from multiple services. How do we call this part in our solution? Proxy or something? Any good example for thin controllers and microservice architecture
Assumption:
From the image you attached, I see the Command Handler and Query Handler are calling "external/micro-services". I guess by "external/micro-services" you mean that you are calling another microservice from your current microservice's handler (Command or Query), and that these "external/micro-services" are part of your architecture, deployed on the same cluster, and not some external system that just exposes a public API?
If this is correct, I will try to answer based on this assumption.
"API Gateway" would probably be misleading in this case, as the concept of an API Gateway is something different from what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.
What you are actually trying to do is call another microservice B from a Command or Query Handler in your microservice A. This is internal microservice communication and should not go through the API Gateway; the gateway is the approach for outside calls. By "outside calls" I mean, for example, a frontend application or public API clients calling your microservices. That is where you would use an API Gateway.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway"; if you want to do it the full CQRS way, you could model it as a direct call to another Command or Query and name it something like "QueryGate" or "CommandGate" or similar.
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of microservice calls)
This sounds reasonable, except for the API Gateway naming point I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion I would need to know whether you use DDD, how you use CQRS, and other details.
However, we use external microservices to get the data; we are calling them from Query handlers. All the HTTP client construction and calls will be abstracted in them. The response will be converted to a view model and passed back to the Query handler.
You could extract all the code/logic that handles cross-microservice communication over HTTP or other protocols, general response handling, and the like into a core library and include it in each of your microservices as a package. That way, you reuse the solution across all your microservices. You can extend this and move all domain-agnostic things (data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) into that or other shared libraries. This way each microservice only focuses on the part of the domain it is supposed to handle.
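As a rough sketch of what such a shared package might contain (the names and endpoint here are hypothetical, and ReadFromJsonAsync comes from the System.Net.Http.Json package):

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;

// Shared-library base class handling the common HTTP plumbing.
public abstract class ServiceClient
{
    private readonly HttpClient _http;
    protected ServiceClient(HttpClient http) => _http = http;

    protected async Task<T?> GetAsync<T>(string relativeUrl, CancellationToken ct)
    {
        using var response = await _http.GetAsync(relativeUrl, ct);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<T>(cancellationToken: ct);
    }
}

// In a consuming microservice: a typed client plus its view model.
public record UserViewModel(string Id, string Name);

public class UserServiceClient : ServiceClient
{
    public UserServiceClient(HttpClient http) : base(http) { }

    public Task<UserViewModel?> GetUserAsync(string id, CancellationToken ct)
        => GetAsync<UserViewModel>($"/api/users/{id}", ct);
}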
I think CQRS is the right choice for keeping the read and write operations decoupled.
The integration with third-party systems (if that's the case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside your domain. They may be subject to inefficiencies, changes, or any number of problems outside your control.
One solution that I can recommend is a "middleware" service.
In your application context, this can be another service (REST, for example) whose sole task is to talk to the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or with a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of them:
The middleware is a single mockable point during integration tests of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing 3rd-party providers won't affect your domain services.
The middleware is the single point dedicated to managing 3rd-party service interruptions.
Your services remain agnostic to the outside world.
Focusing on these questions can help you design your integration middleware service (a small sketch follows the list):
Which types of 3rd-party data do they provide, and how fresh does that data need to be? This helps you figure out whether to introduce a cache into your integration service.
Can the 3rd parties suffer frequent interruptions? Then you must make sure your system tolerates outages of external services; in other words, you must ensure a certain resilience in your services. There are many techniques for that.
Do you really need to interrogate these 3rd-party services all the time? Maybe a more or less sophisticated cache could speed up your services a lot.
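Here is a minimal sketch of such an integration middleware, assuming a hypothetical 3rd-party rates client; it adds a naive cache and retry (production code would use something like Polly instead):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface IThirdPartyRatesClient
{
    Task<decimal> GetRateAsync(string currency);
}

// Integration middleware: the only component that talks to the 3rd party.
public class RatesMiddleware : IThirdPartyRatesClient
{
    private readonly IThirdPartyRatesClient _inner;
    private readonly ConcurrentDictionary<string, (decimal Value, DateTime At)> _cache = new();
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(5);

    public RatesMiddleware(IThirdPartyRatesClient inner) => _inner = inner;

    public async Task<decimal> GetRateAsync(string currency)
    {
        if (_cache.TryGetValue(currency, out var hit) && DateTime.UtcNow - hit.At < Ttl)
            return hit.Value; // serve from cache, tolerate 3rd-party downtime

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var value = await _inner.GetRateAsync(currency);
                _cache[currency] = (value, DateTime.UtcNow);
                return value;
            }
            catch when (attempt < 3)
            {
                await Task.Delay(TimeSpan.FromSeconds(attempt)); // simple backoff, then retry
            }
        }
    }
}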
Finally, it is also very important to understand whether the need for a microservices-oriented system is real and immediate.
Because these architectures are more expensive and complex than classic ones, it might be reasonable to start by building a monolith and then move towards a more segmented solution later.
Thinking of (organizing) your system as many "bounded contexts" does not prevent you from creating a good monolith, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As a summary: start by keeping things as separate as possible, and define a language for talking about your business model. This lets you change a lot without too much effort as needs arise during the inevitable evolution of your software. "Hexagonal" architecture is a good starting point for both choices (microservices vs monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from the DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controller creates queries and commands and pushes them onto a common channel. (refer to MediatR)
Application This is your orchestration layer. It contains the definitions of queries and commands and their handlers. For queries, you interact directly with your infrastructure layer. For commands, you interact with the domain and then save through repositories in the infrastructure layer.
Domain Depending on your business logic and complexity, this layer contains all your business models.
Infrastructure It contains mostly two types of objects, Providers and Repositories. Providers should be used with queries and return DAOs. Repositories should be used wherever the domain is involved, ideally with commands in CQRS. Repositories should always receive and return only domain objects.
So after setting the base context about the different layers of clean architecture, the answer to your original question is: I would put the third-party interactions in the provider layer. For example, if you need to connect to a user microservice, I would create a UserProvider in the providers folder of the infrastructure layer and consume it through an interface.
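For illustration, a minimal sketch of that provider approach (all names here are hypothetical):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

public record UserDao(string Id, string Email);

// Infrastructure abstraction over the user microservice.
public interface IUserProvider
{
    Task<UserDao> GetUserAsync(string id, CancellationToken ct);
}

// Application layer: the query handler depends only on the interface.
public record GetUserQuery(string Id) : IRequest<UserDao>;

public class GetUserQueryHandler : IRequestHandler<GetUserQuery, UserDao>
{
    private readonly IUserProvider _users;
    public GetUserQueryHandler(IUserProvider users) => _users = users;

    public Task<UserDao> Handle(GetUserQuery request, CancellationToken cancellationToken)
        => _users.GetUserAsync(request.Id, cancellationToken);
}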
When implementing, let's say, a service to handle email messages, should I put the code of the RabbitMQ subscriber in a separate process from the main program, or should it be in the common codebase?
Are there any drawbacks of putting them together?
We are developing a microservice-oriented application with .NET Core 3.
We have to use a messaging bus to let many services react to events published by other services. We use RabbitMQ.
We've already tried creating two separate applications communicating over HTTP: one listens for new messages and triggers webhooks on the other, and the second one does all the work.
I'd appreciate some advice on how to organize the code. Would a common codebase be easier to maintain in the future? Is the network request timing overhead really significant?
We wrote a wrapper that we use in our microservices and applications to abstract away the implementation details of RabbitMQ. One of the key things it handles is tracking Subscribed Events and their associated Handlers. So, the application can define/inject a handler class and it automatically gets called whenever a matching message arrives.
For us, we treat the actual messaging implementation as a cross-cutting concern, so we package and use it (as a NuGet package) just like any other; it's a completely separate project from everything else. It's actually a public NuGet package too, so feel free to play with it if you want (it's not well documented, though).
Here's a quick example of how ours works so you can see the level of integration:
In Startup.cs:

using MeshIntegrationBus.RabbitMQ;

public class Startup
{
    public RabbitMqConfig GetRabbitMqConfig()
    {
        ExchangeType exchangeType = (ExchangeType)Enum.Parse(typeof(ExchangeType),
            Configuration["RabbitMQ:IntegrationEventExchangeType"], true);

        var rabbitMqConfig = new RabbitMqConfig
        {
            ExchangeName = Configuration["RabbitMQ:IntegrationEventExchangeName"],
            ExchangeType = exchangeType,
            HostName = Configuration["RabbitMQ:HostName"],
            VirtualHost = Configuration["RabbitMQ:VirtualHost"],
            UserName = Configuration["RabbitMQ:UserName"],
            Password = Configuration["RabbitMQ:Password"],
            ClientProviderName = Configuration["RabbitMQ:ClientProviderName"],
            Port = Convert.ToInt32(Configuration["RabbitMQ:Port"])
        };
        return rabbitMqConfig;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.Add.... // All your stuff

        var rabbitConfig = GetRabbitMqConfig();

        // If this service will also publish events, add that service as well (scoped!)
        services.AddScoped<IMeshEventPublisher, RabbitMqEventPublisher>(s =>
            new RabbitMqEventPublisher(rabbitConfig));

        // Since the listener will be a singleton, wait until the end to actually add it
        // to the service collection - otherwise BuildServiceProvider would duplicate it
        RabbitMqListener rabbitMqListener = new RabbitMqListener(rabbitConfig,
            services.BuildServiceProvider());

        var nodeEventSubs = Configuration.GetSection(
            $"RabbitMQ:SubscribedEventIds:ServiceA").Get<string[]>();

        // Attach our list of events to a handler.
        // The handler must implement IMeshEventProcessor and has
        // access to the IServiceProvider, so it can use DI.
        rabbitMqListener.Subscribe<ServiceAEventProcessor>(nodeEventSubs);
        services.AddSingleton(rabbitMqListener);
    }
}
That's really all there is to it. You can add multiple handlers per subscribed event and/or reuse the same handler(s) for multiple events. It's proven pretty flexible and reliable - we've been using it for a couple of years - and it's easy enough to change/update as needed and let individual services pull the changes when they want/need to.
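For reference, the configuration keys read above imply an appsettings.json section shaped roughly like this (all values are placeholders):

{
  "RabbitMQ": {
    "IntegrationEventExchangeName": "my-exchange",
    "IntegrationEventExchangeType": "Topic",
    "HostName": "localhost",
    "VirtualHost": "/",
    "UserName": "guest",
    "Password": "guest",
    "ClientProviderName": "ServiceA",
    "Port": "5672",
    "SubscribedEventIds": {
      "ServiceA": [ "event-id-1", "event-id-2" ]
    }
  }
}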
I recently designed and developed software that communicates with other components via AMQP, implemented with RabbitMQ.
As a matter of design, I wrote a reusable service class which is called by other service classes. That happens in the service layer, or you could say the business layer.
But as you asked: if there are several applications and all of them need to implement a RabbitMQ client, I would create a library/module/package that can be easily imported into applications and configured.
I suppose you are talking about the situation where there is one publisher and many different applications (microservices) that process messages from a message queue. It's worth noticing that we are talking about applications that have different business purposes, not about many instances of the same application.
I recommend following these suggestions:
One queue always contains messages of one type. In other words, when you deserialize the JSON you should know exactly what class it should be.
In most cases, one microservice == one VS solution == one repository.
You will want to share the classes used to deserialize the JSON between microservices. For this purpose you can create a NuGet package with its interfaces. This NuGet package should contain only DTO classes, without any business logic. Usually we call this NuGet package "*.Contracts".
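For example, such a "*.Contracts" package might contain nothing but message DTOs like this (the names are illustrative):

using System;

namespace Orders.Contracts
{
    // Pure DTO: no behaviour, safe to share between publisher and consumers.
    public class OrderCreatedMessage
    {
        public Guid OrderId { get; set; }
        public string CustomerEmail { get; set; }
        public DateTime CreatedAtUtc { get; set; }
    }
}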
The main feature of microservice architecture is the simplicity of system modification; these suggestions should help you keep the system simple on one hand and prevent the hell of a completely unstructured system (the 3rd suggestion) on the other.
One more note about the case where there is one publisher and one consumer. This can happen when you want to process data in the background (for instance, some process can take a lot of time, but your website should answer the client immediately; the consumer in this case is usually a webjob). For these cases we use one solution (repository) containing both the publisher and the consumer, for simpler development.
Do you think it might be reasonable to replace my service layer or service classes with MediatR? For example, my service classes look like this:
public interface IEntityService<TEntityDto> where TEntityDto : class, IDto
{
Task<TEntityDto> CreateAsync(TEntityDto entityDto);
Task<bool> DeleteAsync(int id);
Task<IEnumerable<TEntityDto>> GetAllAsync(SieveModel sieveModel);
Task<TEntityDto> GetByIdAsync(int id);
Task<TEntityDto> UpdateAsync(int id, TEntityDto entityDto);
}
I want to achieve some sort of modular design so other dynamically loaded modules or plugins can write their own notification or command handlers for my main core application.
Currently, my application is not event-driven at all and there's no easy way for my dynamically loaded plugins to communicate.
I can either incorporate MediatR in my controllers, removing the service layer completely, or use it with my service layer, just publishing notifications so my plugins can handle them.
Currently, my logic is mostly CRUD, but there's a lot of custom logic going on before creating, updating, and deleting.
A possible replacement for my service would look like:
public class CommandHandler : IRequestHandler<CreateCommand, Response>, IRequestHandler<UpdateCommand, Response>, IRequestHandler<DeleteCommand, bool>
{
private readonly DbContext _dbContext;
public CommandHandler(DbContext dbContext)
{
_dbContext = dbContext;
}
public Task<Response> Handle(CreateCommand request, CancellationToken cancellationToken)
{
//...
}
public Task<Response> Handle(UpdateCommand request, CancellationToken cancellationToken)
{
//...
}
public Task<bool> Handle(DeleteCommand request, CancellationToken cancellationToken)
{
///...
}
}
Would it be something wrong to do?
Basically, I'm struggling with what to choose for my logic flow:
Controller -> Service -> MediatR -> Notification handlers -> Repository
Controller -> MediatR -> Command handlers -> Repository
It seems like with MediatR I can't have a single model for Create, Update, and Delete, so to re-use one model I'd need to derive requests like:
public class CreateRequest : MyDto, IRequest<MyDto> { }
public class UpdateRequest : MyDto, IRequest<MyDto> { }
or embed it in my command like:
public class CreateRequest : IRequest<MyDto>
{
    public MyDto MyDto { get; set; }
}
One advantage of MediatR is the ability to plug logic in and out easily, which seems like a nice fit for a modular architecture, but I'm still a bit confused about how to shape my architecture with it.
Update: I'm preserving the answer, but my position on this has changed somewhat as indicated in this blog post.
If you have a class, let's say an API controller, and it depends on
IRequestHandler<CreateCommand, Response>
What is the benefit of changing your class so that it depends on IMediator,
and instead of calling
return requestHandler.HandleRequest(request);
it calls
return mediator.Send(request);
The result is that instead of injecting the dependency we need, we inject a service locator which in turn resolves the dependency we need.
Quoting Mark Seemann's article,
In short, the problem with Service Locator is that it hides a class' dependencies, causing run-time errors instead of compile-time errors, as well as making the code more difficult to maintain because it becomes unclear when you would be introducing a breaking change.
It's not exactly the same as
var commandHandler = serviceLocator.Resolve<IRequestHandler<CreateCommand, Response>>();
return commandHandler.Handle(request);
because the mediator is limited to resolving command and query handlers, but it's close. It's still a single interface that provides access to lots of other ones.
It makes code harder to navigate
After we introduce IMediator, our class still indirectly depends on IRequestHandler<CreateCommand, Response>. The difference is that now we can't tell by looking at it. We can't navigate from the interface to its implementations. We might reason that we can still follow the dependencies if we know what to look for - that is, if we know the conventions of command handler interface names. But that's not nearly as helpful as a class actually declaring what it depends on.
Sure, we get the benefit of having interfaces wired up to concrete implementations without writing the code, but the savings are trivial and we'll likely lose whatever time we save because of the added (if minor) difficulty of navigating the code. And there are libraries which will register those dependencies for us anyway while still allowing us to inject the abstractions we actually depend on.
It's a weird, skewed way of depending on abstractions
It's been suggested that using a mediator assists with implementing the decorator pattern. But again, we already gain that ability by depending on an abstraction. We can use one implementation of an interface or another that adds a decorator. The point of depending on abstractions is that we can change such implementation details without changing the abstraction.
To elaborate: The point of depending on ISomethingSpecific is that we can change or replace the implementation without modifying the classes that depend on it. But if we say, "I want to change the implementation of ISomethingSpecific (by adding a decorator), so to accomplish that I'm going to change the classes that depend on ISomethingSpecific, which were working just fine, and make them depend on some generic, all-purpose interface", then something has gone wrong. There are numerous other ways to add decorators without modifying parts of our code that don't need to change.
Yes, using IMediator promotes loose coupling. But we already accomplished that by using well-defined abstractions. Adding layer upon layer of indirection doesn't multiply that benefit. If you've got enough abstraction that it's easy to write unit tests, you've got enough.
Vague dependencies make it easier to violate the Single Responsibility Principle
Suppose you have a class for placing orders, and it depends on ICommandHandler<PlaceOrderCommand>. What happens if someone tries to sneak in something that doesn't belong there, like a command to update user data? They'll have to add a new dependency, ICommandHandler<ChangeUserAddressCommand>. What happens if they want to keep piling more stuff into that class, violating the SRP? They'll have to keep adding more dependencies. That doesn't prevent them from doing it, but at least it shines a light on what's happening.
On the other hand, what if you can add all sorts of random stuff into a class without adding more dependencies? The class depends on an abstraction that can do anything. It can place orders, update addresses, request sales history, whatever, and all without adding a single new dependency. That's the same problem you get if you inject an IoC container into a class where it doesn't belong. It's a single class or interface that can be used to request all sorts of dependencies. It's a service locator.
IMediator doesn't cause SRP violations, and its absence won't prevent them. But explicit, specific dependencies guide us away from such violations.
The Mediator Pattern
Curiously, using MediatR doesn't usually have anything to do with the mediator pattern. The mediator pattern promotes loose coupling by having objects interact with a mediator rather than directly with each other. If we're already depending on an abstraction like an ICommandHandler, then the tight coupling that the mediator pattern prevents doesn't exist in the first place.
The mediator pattern also encapsulates complex operations so that they appear simpler from the outside.
return mediator.Send(request);
is not simpler than
return requestHandler.HandleRequest(request);
The complexity of the two interactions is identical. Nothing is "mediated." Imagine that you're about to swipe your credit card at the grocery store, and then someone offers to simplify your complex interaction by leading you to another register where you do exactly the same thing.
What about CQRS?
A mediator is neutral when it comes to CQRS (unless we have two separate mediators, like ICommandMediator and IQueryMediator.) It seems counterproductive to separate our command handlers from our query handlers and then inject a single interface which in effect brings them back together and exposes all of our commands and queries in one place. At the very least it's hard to say that it helps us to keep them separate.
IMediator is used to invoke command and query handlers, but it has nothing to do with the extent to which they are segregated. If they were segregated before we added a mediator, they still are. If our query handler does something it shouldn't, the mediator will still happily invoke it.
I hope it doesn't sound like a mediator ran over my dog. But it's certainly not a silver bullet that sprinkles CQRS on our code or even necessarily improves our architecture.
We should ask, what are the benefits? What undesirable consequences could it have? Do I need that tool, or can I obtain the benefits I want without those consequences?
What I am asserting is that once we're already depending on abstractions, further steps to "hide" a class's dependencies usually add no value. They make it harder to read and understand, and erode our ability to detect and prevent other code smells.
Partly this was answered here: MediatR when and why I should use it? vs 2017 webapi
The biggest benefit of using MediatR (or MicroBus, or any other mediator implementation) is isolating and/or segregating your logic (one of the reasons it's a popular way to do CQRS) and providing a good foundation for implementing the decorator pattern (similar to ASP.NET Core MVC filters). Since MediatR 3.0 there's built-in support for this (see Behaviors), instead of using IoC decorators.
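For illustration, a behaviour is essentially a decorator around every request; here is a minimal logging sketch (note that the exact Handle signature has changed across MediatR versions; this matches the older style where the CancellationToken comes before the next delegate):

using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        Console.WriteLine($"Handling {typeof(TRequest).Name}");
        var response = await next(); // invoke the rest of the pipeline / the handler
        Console.WriteLine($"Handled {typeof(TRequest).Name}");
        return response;
    }
}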
You can use the decorator pattern with services (classes like FooService) too. And you can use CQRS with services too (FooReadService, FooWriteService)
Other than that it's opinion-based, and use what you want to achieve your goal. The end result shouldn't make any difference except for code maintenance.
Additional reading:
Baking Round Shaped Apps with MediatR
(which compares a custom mediator implementation with the one MediatR provides, and the porting process)
Is it good to handle multiple requests in a single handler?
I need to fetch data from an external API, only accessible via VPN.
The development/test machine will not always be able to connect to the VPN.
The desired behaviour is to use two different implementations: one that calls the actual external API, and one that acts like the real thing but returns dummy data. Which implementation to use will be configured via a flag in web.config.
I've tried the IoC containers StructureMap and Unity, and they both did the job, but they only seem to be applicable to MVC; I'm looking for a generic solution that also works for Web Forms. And also, isn't it a bit overkill to use them for this isolated design problem!?
Is there a design pattern or best practice approach for this particular scenario?
IoC / dependency injection sounds like the correct approach, but you don't necessarily need a container for a simple scenario. The key is to have classes that depend on the API reference an interface IAPI, and pass it the actual implementation RealAPI or FakeAPI.
public class SomeClass
{
private readonly IAPI _api;
public SomeClass(IAPI api)
{
_api = api;
}
}
Now you should be able to switch out the implementation easily by passing a different object to SomeClass. In theory, when you're using an IoC approach, you should only need to bind the interface to the implementation once, at the top level of the application.
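For example, here is a minimal composition sketch that reads the flag from web.config (the "UseFakeApi" key name is just an example):

using System.Configuration;

public static class ApiFactory
{
    // Reads <add key="UseFakeApi" value="true" /> from web.config appSettings.
    public static IAPI Create()
    {
        bool useFake = bool.TryParse(
            ConfigurationManager.AppSettings["UseFakeApi"], out bool flag) && flag;
        return useFake ? (IAPI)new FakeAPI() : new RealAPI();
    }
}

In Web Forms you could call ApiFactory.Create() once, e.g. in Global.asax's Application_Start, and hand the instance to the classes that need it.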
isn't it a bit overkill to use them for this isolated design problem!?
They probably are. Those IoC containers only help you when you write loosely coupled code. If you didn't design your classes according to the SOLID principles, for instance, those frameworks will probably only get in the way. On the other hand, which developer doesn't want to write loosely coupled code? In other words, an IoC container solves a problem you might not have, but it's a nice problem to have.
StructureMap and Unity [...] only seem to be applicable for MVC
Those IoC frameworks can be used in any type of application (as long as it is written in a loosely coupled way). Some types of applications need a bit more work to plug a framework in, but it's always possible. While StructureMap and Unity might only have integration packages for MVC, it's quite easy to use them in ASP.NET Web Forms as well.
Is there a design pattern or best practice approach for this particular scenario?
What you're looking for is the Proxy pattern and perhaps the circuit breaker pattern.
I've started playing with Ninject, and a screencast states that the following is how you set up a binding:
class MyModule : StandardModule {
public override void Load() {
Bind<IInterface>().To<ConcreteType>();
// More bindings here...
}
}
This is all very good.
However, suppose you have one hundred objects used in an application. That would mean one hundred bindings. Is this correct?
Secondly, I presume that such an application may be split into subsystems such as GUI, Database, Services, and so on.
Would you then create a custom module for each subsystem, which in turn would be:
GUIModule
DatabaseModule
ServiceModule
...
For each module you'd have the correct bindings that it requires. Am I on the right page here?
Finally would this binding all occur in Main or the entry point for your application?
However suppose you have one hundred objects used in an application. That would mean this would have one hundred bindings. Is this correct?
One hundred registered components, yes, but not necessarily registered one by one. There's a Convention extension for Ninject that allows you to scan assemblies and register types based on some defined rules. See this test as an example.
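For example, with the Ninject.Extensions.Conventions package a single statement can replace many individual bindings (a sketch; the exact rules depend on your naming conventions):

using Ninject;
using Ninject.Extensions.Conventions;

var kernel = new StandardKernel();

// Bind every class in this assembly to its default interface,
// e.g. CustomerRepository -> ICustomerRepository.
kernel.Bind(x => x
    .FromThisAssembly()
    .SelectAllClasses()
    .BindDefaultInterface());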
Would you then create a custom module for each subsystem
Again, not necessarily. You might just want to register all your repositories (just to name something) in a single convention registration.
For each module you'd have the correct bindings that they required.
As with any "module" (be it an assembly, a class, or an application), the concepts of coupling and cohesion apply here as well. It's best practice to keep coupling low (don't depend too much on other modules) and cohesion high (all components within a module must serve a common goal).
Finally would this binding all occur in Main or the entry point for your application?
Yes, see this related question.
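In other words, the composition root could look roughly like this (IApplication and Run are placeholders for whatever your root object is):

using Ninject;

public static class Program
{
    public static void Main()
    {
        // Compose the kernel once at startup; Ninject builds the rest of the graph.
        using (var kernel = new StandardKernel(
            new GUIModule(), new DatabaseModule(), new ServiceModule()))
        {
            var app = kernel.Get<IApplication>();
            app.Run();
        }
    }
}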