Should I separate RabbitMQ consumer from the rest of the code? - c#

When implementing, let's say, a service that handles email messages, should I put the RabbitMQ subscriber code in a separate process from the main program, or should it live in the same codebase?
Are there any drawbacks to putting them together?
We are developing a microservice-oriented application with .NET Core 3.
We have to use a messaging bus so that many services can react to events published by other services. We use RabbitMQ.
We've already tried creating two separate applications communicating over HTTP: one listens for new messages and triggers webhooks on the other, and the second one does the actual work.
I'm looking for advice on how to organize the code. Would a common codebase be easier to maintain in the future? Is the latency overhead of the extra network requests really significant?

We wrote a wrapper that we use in our microservices and applications to abstract away the implementation details of RabbitMQ. One of the key things it handles is tracking Subscribed Events and their associated Handlers. So, the application can define/inject a handler class and it automatically gets called whenever a matching message arrives.
For us, we treat the actual messaging implementation as a cross-cutting concern, so we package and use it (as a NuGet package) just like any other; it's a completely separate project from everything else. It's actually a public NuGet package too, so feel free to play with it if you want (it's not well documented, though).
Here's a quick example of how ours works so you can see the level of integration:
In Startup.cs
using MeshIntegrationBus.RabbitMQ;

public class Startup
{
    public RabbitMqConfig GetRabbitMqConfig()
    {
        ExchangeType exchangeType = (ExchangeType)Enum.Parse(typeof(ExchangeType),
            Configuration["RabbitMQ:IntegrationEventExchangeType"], true);

        var rabbitMqConfig = new RabbitMqConfig
        {
            ExchangeName = Configuration["RabbitMQ:IntegrationEventExchangeName"],
            ExchangeType = exchangeType,
            HostName = Configuration["RabbitMQ:HostName"],
            VirtualHost = Configuration["RabbitMQ:VirtualHost"],
            UserName = Configuration["RabbitMQ:UserName"],
            Password = Configuration["RabbitMQ:Password"],
            ClientProviderName = Configuration["RabbitMQ:ClientProviderName"],
            Port = Convert.ToInt32(Configuration["RabbitMQ:Port"])
        };

        return rabbitMqConfig;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.Add.... // All your stuff

        var rabbitConfig = GetRabbitMqConfig();

        // If this service will also publish events, add that service as well (scoped!)
        services.AddScoped<IMeshEventPublisher, RabbitMqEventPublisher>(s =>
            new RabbitMqEventPublisher(rabbitConfig));

        // Since this service will be a singleton, wait until the end to actually add it
        // to the service collection - otherwise BuildServiceProvider would duplicate it
        RabbitMqListener rabbitMqListener = new RabbitMqListener(rabbitConfig,
            services.BuildServiceProvider());

        var nodeEventSubs = Configuration.GetSection(
            $"RabbitMQ:SubscribedEventIds:ServiceA").Get<string[]>();

        // Attach our list of events to a handler.
        // The handler must implement IMeshEventProcessor and has access
        // to the IServiceProvider, so it can use DI.
        rabbitMqListener.Subscribe<ServiceAEventProcessor>(nodeEventSubs);

        services.AddSingleton(rabbitMqListener);
    }
}
That's really all there is to it. You can add multiple handlers per subscribed event and/or reuse the same handler(s) for multiple events. It has proven pretty flexible and reliable - we've been using it for a couple of years - and it's easy enough to change/update as needed and let individual services pull the changes when they want/need to.
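For reference, a handler for the example above might look roughly like this. The exact IMeshEventProcessor contract is an assumption on my part (the real package may differ), but it shows the level at which handlers plug in:

// Hypothetical handler - the actual IMeshEventProcessor signature in the
// MeshIntegrationBus.RabbitMQ package may differ from this sketch.
// Requires Microsoft.Extensions.DependencyInjection for CreateScope/GetRequiredService.
public class ServiceAEventProcessor : IMeshEventProcessor
{
    private readonly IServiceProvider _serviceProvider;

    public ServiceAEventProcessor(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task Process(MeshEvent meshEvent) // MeshEvent is an assumed type
    {
        // Resolve scoped dependencies through the IServiceProvider the listener exposes
        using (var scope = _serviceProvider.CreateScope())
        {
            var emailSender = scope.ServiceProvider.GetRequiredService<IEmailSender>(); // assumed app service
            // ... deserialize the message payload and do the actual work ...
            await emailSender.SendAsync(meshEvent.Payload);
        }
    }
}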

I recently designed and developed a piece of software that communicates with other components via AMQP, implemented with RabbitMQ.
As a matter of design, I wrote a reusable service class that is called by other service classes. That happens in the service layer, or the business layer if you prefer.
But since you asked: if there are several applications and all of them need to implement a RabbitMQ client, I would create a library/module/package that can easily be imported into each application and configured.

I suppose you are talking about a situation where there is one publisher and many different applications (microservices) that process messages from a message queue. It's worth noting that we are talking about applications with different business purposes, not about many instances of the same application.
I would recommend the following:
1. One queue always contains messages of one type. In other words, when you deserialize the JSON you should know exactly which class it should be.
2. In most cases, one microservice == one VS solution == one repository.
3. You will want to share the class used to deserialize the JSON between microservices. For this purpose you can create a NuGet package with its interface. This NuGet package should contain DTO classes without any business logic. We usually call this kind of package "*.Contracts".
The main feature of a microservice architecture is how easy it is to modify the system. These suggestions should help you keep things simple on one side, and prevent the hell of a completely unstructured system (the 3rd suggestion) on the other.
One more note about the case where there is one publisher and one consumer. This can happen when you want to process data in the background (for instance, some process takes a lot of time but your website should respond to the client immediately; the consumer in this case is usually a web job). For those cases we use one solution (repository) containing both publisher and consumer, for simplicity of development.
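To illustrate the third suggestion, a "*.Contracts" package typically contains nothing but plain DTOs like the one below (the names are made up for the example); both the publisher and the consumers reference the package and deserialize the queue message into the same class:

// MyCompany.Orders.Contracts - shared NuGet package, DTOs only, no business logic.
using System;

namespace MyCompany.Orders.Contracts
{
    public class OrderCreatedEvent
    {
        public Guid OrderId { get; set; }
        public string CustomerId { get; set; }
        public DateTime CreatedAtUtc { get; set; }
    }
}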

Related

Cross module communication in modular monolith

I have been learning about modular monolith project structure in this article: https://codewithmukesh.com/blog/modular-architecture-in-aspnet-core
Most of it makes sense to me but something I don't quite get is:
Cross Module communication can happen only via Interfaces/events/in-memory bus. Cross Module DB Writes should be kept minimal or avoided completely.
How exactly does that cross-module communication look?
Let's say I have 3 modules:
Product
User
Security
My security module registers an endpoint for DisableUser. It's this endpoint's job to update a User and every Product associated with the user with a disabled status.
How does the Security module call the User & Product update-status methods within a unit of work?
My understanding is that this pattern is intended to make it easier to extract a module into a microservice at a later date, so I guess having it as a task of some sort makes it easier to switch to a message broker, but I am just not sure how this is supposed to look.
My example is obviously contrived; my main point is: how do modules communicate with each other when reads/writes are involved?
Theory
There is a lot of confusion about terminology in questions like this, so let's distinguish two completely different architectures: monolith architecture and microservices architecture. The architecture that stands between the two is the modular monolith.
A monolith architecture usually has one huge problem - high coupling and low cohesion - because you have no strong mechanisms to avoid it. So programmers started thinking about new ways of structuring systems that make it really hard to fall into the high-coupling/low-cohesion trap.
Microservices architecture was one such solution (besides the other problems it solves). The main point of a microservices architecture is separating services from each other to avoid high coupling (because setting up communication between services is not as easy as it is inside a monolith).
But programmers can't move from one architecture to a completely different one in "one click", so one way (though not the only one) to get from a monolith to microservices is to build a modular monolith first (solving the high-coupling/low-cohesion problem while still inside the monolith) and then extract the modules into microservices.
Communication
To keep coupling low, we should focus on the communication between modules.
Let's work with the sample from your question.
Imagine we have this monolith architecture:
We can definitely see the high-coupling problem here. Let's say we want to make it more modular. To do that, we need to add something between the modules to separate them from each other; but we still want the modules to communicate, so the one thing we have to add is a bus.
Something like this:
P.S. It could also be a completely separate (not in-memory) bus, like Kafka or RabbitMQ.
Your main question was how to set up communication between modules, and there are a few ways to do that.
Communication via interfaces (synchronous way)
Modules can call each other directly (synchronously) through interfaces. An interface is an abstraction, so we don't know what stands behind it; it could be a mock or a real working module. This means that one module knows nothing about the other modules - it only knows about the interfaces it communicates with.
public interface ISecurityModule { }
public interface IUserModule { }
public interface IProfileModule { }

public class SecurityModule : ISecurityModule
{
    public SecurityModule(IUserModule userModule) { } // Does not know about the UserModule class directly
}

public class UserModule : IUserModule
{
    public UserModule(IProfileModule profileModule) { } // Does not know about the ProfileModule class directly
}

public class ProfileModule : IProfileModule
{
    public ProfileModule(ISecurityModule securityModule) { } // Does not know about the SecurityModule class directly
}
You can certainly communicate through interface method calls, but this solution doesn't do much to solve the high-coupling problem.
Communication via bus (asynchronous way)
A bus is a better way to build communication between modules because it forces you to use events/messages/commands for communication. You can't call methods directly anymore.
To achieve that you need some kind of bus (a separate broker or an in-memory library). I recommend checking other questions (like this one) to find the right way to build such communication for your architecture.
But be aware: with a bus, communication between modules becomes asynchronous, which forces you to rewrite the inner behaviour of your modules to support that style of communication.
Regarding your example with the DisableUser endpoint: the SecurityModule could simply publish a command/event/message on the bus saying that the user was disabled - the other modules can then handle that command/event/message and "disable" the user according to their own module logic.
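A minimal sketch of that idea with a hypothetical in-memory bus abstraction (IEventBus, UserDisabled and SecurityService are made-up names; a library such as MediatR, or a real broker, would play this role in practice):

// Hypothetical in-memory bus abstraction - names are illustrative only.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event);
}

public class UserDisabled
{
    public string UserId { get; set; }
}

// In the Security module:
public class SecurityService
{
    private readonly IEventBus _bus;

    public SecurityService(IEventBus bus) => _bus = bus;

    public async Task DisableUserAsync(string userId)
    {
        // ... security-specific work ...
        await _bus.PublishAsync(new UserDisabled { UserId = userId });
    }
}

// The User and Product modules each register their own handler for UserDisabled
// and apply their own status-update logic when the event arrives.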
What's next
The next step is a microservices architecture, with completely separate services communicating through a separate bus and with separate databases too:
Example
Not long ago I completed a project built entirely as a microservices architecture, after finishing a course.
Check it here if you need a good example of a microservices architecture.
Images were created using Excalidraw
At first glance, I think one approach is to use MediatR events, since the project already uses MediatR. It would work well and keep everything separate.
To see how to define a MediatR event, check this.
You define your events in the shared core; for your example:
public class UserDisabled : INotification
{
    public string UserId { get; set; }
}
From the User module you publish the event when the user gets disabled:
await mediator.Publish(new UserDisabled{UserId = "Your userId"});
And finally, declare event handlers in every module that needs to react to the event:
public class UserDisabledHandler : INotificationHandler<UserDisabled>
{
    public UserDisabledHandler()
    {
        // You can use dependency injection here
    }

    public Task Handle(UserDisabled notification, CancellationToken cancellationToken)
    {
        throw new NotImplementedException();
    }
}
However, it is worth noting that this won't work if you want to switch to actual microservices. I'm not very familiar with microservices, but I think you then need some form of event bus, and that's where microservices get complicated.
There is information about that in this Microsoft book.

Specifying events as files to live on the Core project instead of the service project

In my solution I have 3 projects
MyProject.Core
MyProject.Services.DataImporter
Myproject.Services.Cars
Both DataImporter and Cars projects are referencing MyProject.Core project.
I have an event (DataImportFinishedEvent) which is emitted by the DataImporter service.
Services.Cars is subscribed to this event, and potentially more services will be later.
With my current approach, I have this event (DataImportFinishedEvent) as a file created in both services.
Since both services reference the Core project, should I move this event to the Core project? Doing so would keep the file in one location only.
Is this a good microservice practice?
In general, having common projects or libraries is not good practice in microservices, because it couples the development and deployment of the services: when you make a change in the common project, you have to change and redeploy both microservices.
In the case of the event, the best approach is to have a separate event class in each service. This does not necessarily mean they are duplicates. On the producer side you have an event with all the information needed by any of the potential consumers, and on the consumer side you have an event with only the information needed by that service, which can be less.
This way you decouple the two services, and if tomorrow the consumer service needs other information provided by the producer, you only have to change the consumer side, and vice versa.
One way to think about it: when you consume a third-party API that you don't control, you build your own response object containing only the data you need, and that object is different from the one used by the API.
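A small sketch of that idea, using the project names from the question (the fields and namespaces are invented): the producer publishes a rich event, while the consumer declares only the fields it cares about.

// In MyProject.Services.DataImporter (producer): publish everything any consumer might need.
using System;

namespace MyProject.Services.DataImporter.Events
{
    public class DataImportFinishedEvent
    {
        public Guid ImportId { get; set; }
        public int RecordsImported { get; set; }
        public DateTime FinishedAtUtc { get; set; }
        public string SourceFileName { get; set; }
    }
}

// In MyProject.Services.Cars (consumer): only the fields this service actually uses.
namespace MyProject.Services.Cars.Events
{
    public class DataImportFinishedEvent
    {
        public Guid ImportId { get; set; }
        public int RecordsImported { get; set; }
    }
}

Because the message is serialized as JSON, the consumer's smaller class still deserializes cleanly from the producer's richer payload.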

Microservices design part in WebApi

Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of the microservice calls)
We want to keep the controllers thin, so we use query handlers and command handlers to handle the respective operations.
However, we use external microservices to get data, and we call them from the query handlers.
All the HTTP client construction and calls are abstracted inside them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it is not composed from multiple services.
What should we call this part of our solution? A proxy or something else? Any good examples of thin controllers in a microservice architecture?
We named this part ApiGateways, but it is not composed from multiple services. What should we call this part of our solution? A proxy or something else? Any good examples of thin controllers in a microservice architecture?
Assumption:
From the image you attached, I see the Command Handler and Query Handler are calling "external/micro-services". I guess that by "external/micro-services" you mean you are calling another micro-service from your current micro-service's handlers (Command and Query), and that these "external/micro-services" are part of your architecture, deployed on the same cluster, and not some external system that just exposes a public API?
If this is correct I will try to answer based on this assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different from what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.
What you are actually trying to do is call another micro-service B from the Command or Query handler of your micro-service A. This is internal micro-service communication that should not go through an API Gateway, as that is the approach for outside calls. By "outside calls" I mean, for example, a frontend application or public API clients calling your micro-services; in that case you would use an API Gateway.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway". If you want to go the full CQRS way, you could model it as a direct call to the other service's Command or Query and use a name like "QueryGate" or "CommandGate" or similar.
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of the microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get data, and we call them from the query handlers. All the HTTP client construction and calls are abstracted inside them. The response is converted to a view model and passed back to the query handler.
You could extract all the code/logic that handles cross micro-service communication over HTTP or other protocols, general response handling, and similar concerns into a core library and include it in each of your micro-services as a package. That way you reuse the solution across all your micro-services. You can extend this and add all the core domain-agnostic pieces (data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) to that or other shared libraries. This way each micro-service will only focus on the part of the domain it is supposed to handle.
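As an illustration (all names here are invented, not from the question), the shared library could expose a small typed HTTP gateway that the query handlers consume through an interface:

// Shared core library - a thin, reusable wrapper over HttpClient (illustrative names).
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class UserViewModel
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IUserServiceGateway
{
    Task<UserViewModel> GetUserAsync(string userId);
}

public class UserServiceGateway : IUserServiceGateway
{
    private readonly HttpClient _http;

    public UserServiceGateway(HttpClient http) => _http = http;

    public async Task<UserViewModel> GetUserAsync(string userId)
    {
        var response = await _http.GetAsync($"api/users/{userId}");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<UserViewModel>(json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}

// In the consuming micro-service's Startup.ConfigureServices:
// services.AddHttpClient<IUserServiceGateway, UserServiceGateway>(c =>
//     c.BaseAddress = new Uri(Configuration["Services:UserService:BaseUrl"]));

A query handler then depends only on IUserServiceGateway, which keeps the HTTP plumbing out of the handlers and makes it easy to mock in tests.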
I think CQRS is the right choice to keep the reading and writing operations decoupled.
The integration with third-party systems (if that's your case) needs some attention.
Do not call these services directly from your handlers; this can lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside your domain. They may be subject to inefficiencies, changes, or any number of problems outside your control.
One solution that I could recommend is a "Middleware" service.
In your application context this can be another service (REST, for example) whose sole task is to talk to external systems, acting as a single point of integration between your domain and the external environment. It can be built from scratch or with a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of which are:
The middleware is a single mockable point during integration testing of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing third-party providers won't affect your domain services.
The middleware is the single point dedicated to managing third-party service interruptions.
Your services remain agnostic about the outside world.
Focusing on these questions can help you design your integration middleware service:
What types of data do the third parties provide? Do they arrive on time? This might help you figure out whether to introduce a cache into your integration service.
Can the third parties suffer frequent interruptions? Then you must ensure that your system tolerates any disruption of the external services. In other words, you must build a certain resilience into your services; there are many techniques for that (a small retry sketch follows this list).
Do you really need to query these third-party services all the time? A more or less sophisticated cache could speed up your services a lot.
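For the resilience point above, one common technique (my example, not something prescribed in the original answer) is a retry policy with exponential backoff, for instance using the Polly library inside the middleware service:

// Example only: retry a flaky third-party HTTP call with exponential backoff (Polly).
// httpClient and the URL are assumed to exist in the middleware service.
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var response = await retryPolicy.ExecuteAsync(
    () => httpClient.GetAsync("https://third-party.example.com/api/data"));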
Finally, it is also very important to understand whether a microservices-oriented system is a real and immediate need.
Since these architectures are more expensive and complex than classic ones, it may be reasonable to start by building a monolith and move towards a more segmented solution later.
Organizing your system as a set of "bounded contexts" does not prevent you from creating a good monolith, and at the same time it prepares you for a possible switch to a microservices-oriented architecture.
As summary advice: start by keeping things as separate as possible, and define a language for talking about your business model. That lets you change a lot, without too much effort, when new needs arrive during the inevitable evolution of your software. "Hexagonal" architecture is a good starting point for both choices (microservices vs monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from a DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controller creates queries and commands and pushes them onto a common channel (see MediatR).
Application. This is your orchestration layer. It contains the definitions of queries and commands and their handlers. For queries, you interact directly with the infrastructure layer. For commands, you interact with the domain and then persist changes through repositories in the infrastructure layer.
Domain. Depending on your business logic and complexity, this layer contains all your business models.
Infrastructure. It contains mostly two types of objects: providers and repositories. Providers are used by queries and return DAOs. Repositories are used wherever the domain is involved, ideally by commands in CQRS. Repositories should always receive and return only domain objects.
So, having set the base context about the different layers in clean architecture, the answer to your original question is: I would put third-party interactions in the provider layer. For example, if you need to connect to a user microservice, I would create a UserProvider in the providers folder of the infrastructure layer and consume it through an interface.
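A rough sketch of that layering (all type names invented): the query handler in the application layer depends only on the IUserProvider interface, while the concrete provider that actually calls the user microservice lives in the infrastructure layer.

using System.Threading.Tasks;

// Simple POCOs used by the example.
public class UserDao { public string Id { get; set; } public string Name { get; set; } }
public class UserViewModel { public string Id { get; set; } public string Name { get; set; } }
public class GetUserQuery { public string UserId { get; set; } }

// Application layer: the handler knows only the abstraction.
public interface IUserProvider
{
    Task<UserDao> GetUserAsync(string userId);
}

public class GetUserQueryHandler
{
    private readonly IUserProvider _userProvider;

    public GetUserQueryHandler(IUserProvider userProvider) => _userProvider = userProvider;

    public async Task<UserViewModel> Handle(GetUserQuery query)
    {
        UserDao dao = await _userProvider.GetUserAsync(query.UserId);
        return new UserViewModel { Id = dao.Id, Name = dao.Name };
    }
}

// Infrastructure layer: class UserProvider : IUserProvider makes the HTTP call to the
// user microservice and maps the response to UserDao; it is wired up via DI.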

how to cache data in a stateful service?

The cluster needs access to a dataset that lives in SQL Server, outside of the cluster.
Rather than forcing remote calls to the database for every request, I would like to create a stateful service that will periodically refresh its cache with data from the remote database.
Would we be looking at something like the following?
internal sealed class StatefulBackendService : StatefulService
{
    public StatefulBackendService(StatefulServiceContext context)
        : base(context)
    {
    }

    /// <summary>
    /// Optional override to create listeners (like tcp, http) for this service instance.
    /// </summary>
    /// <returns>The collection of listeners.</returns>
    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        return new ServiceReplicaListener[]
        {
            new ServiceReplicaListener(
                serviceContext =>
                    new KestrelCommunicationListener(
                        serviceContext,
                        (url, listener) =>
                        {
                            ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting Kestrel on {url}");
                            return new WebHostBuilder()
                                .UseKestrel()
                                .ConfigureServices(
                                    services => services
                                        .AddSingleton<IReliableStateManager>(this.StateManager)
                                        .AddSingleton<StatefulServiceContext>(serviceContext))
                                .UseContentRoot(Directory.GetCurrentDirectory())
                                .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.UseUniqueServiceUrl)
                                .UseStartup<Startup>()
                                .UseUrls(url)
                                .Build();
                        }))
        };
    }
}
Within this stateful service, how would I load data from a remote database and serve it through controllers?
Let's assume we have a simple model:
CREATE TABLE Account (Name varchar(100), [Key] int)
I imagine that the operations would be in the following order:
Load Account table into memory
respond to requests such as http://statefulservice/account?$top=10
refresh data in the service on a time interval basis
What data types should I be using in order to cache this data? What would be the process of loading the data into the stateful service from a SQL Server database?
IMHO, even though it's possible to use stateful services as a cache backed by some database, the real power comes when you keep your data in the reliable collections only. With Service Fabric and Reliable Collections, you can store data directly in your service without the need for an external persistent store. See Application scenarios. Aside from providing high availability and low latency, the state is reliably replicated across multiple nodes, so it can survive a node failure; moreover, there is a Back up and restore feature that lets you deal even with an entire cluster outage.
There are many things you should know about when working with Reliable Services: Service partitioning, Transactions and lock modes, Guidelines and recommendations, etc.
As for the data types, explore Reliable Collection object serialization and Serialization and Upgrade.
Another thing you should be aware of is that the Reliable Dictionary periodically removes least-recently-used values from memory, which can increase read latencies in certain cases. See more here - Service fabric reliable dictionary linq query very slow.
A simple example of integrating controllers with the StateManager can be found in this article.
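To make the refresh loop in the question more concrete, here is a rough sketch (only the Account table comes from the question; the connection string, interval, and dictionary layout are assumptions) of how the stateful service could periodically copy the table into an IReliableDictionary from RunAsync:

// Sketch only: periodically load the Account table into a reliable dictionary.
// Requires System.Data.SqlClient and Microsoft.ServiceFabric.Data.Collections;
// connectionString is assumed to come from configuration.
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var accounts = await StateManager.GetOrAddAsync<IReliableDictionary<int, string>>("accounts");

    while (!cancellationToken.IsCancellationRequested)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT [key], [name] FROM Account", connection))
        {
            await connection.OpenAsync(cancellationToken);
            using (var reader = await command.ExecuteReaderAsync(cancellationToken))
            using (var tx = StateManager.CreateTransaction())
            {
                while (await reader.ReadAsync(cancellationToken))
                {
                    await accounts.SetAsync(tx, reader.GetInt32(0), reader.GetString(1));
                }
                await tx.CommitAsync();
            }
        }

        // Controllers resolve the same IReliableStateManager (registered in
        // CreateServiceReplicaListeners above) and read from the "accounts" dictionary.
        await Task.Delay(TimeSpan.FromMinutes(5), cancellationToken);
    }
}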
Here's a little more info related to your comment...
Hey m8... the reliable collections are designed to run as multiple instances (they run on more than one node at a time). Within each instance the data is partitioned into one or more groups (how you decide to partition is entirely up to you), so there is load distribution and failover. There is more to say, but I don't want to muddy the waters, so I'm keeping it high level. Data in reliable collections lives in memory and can be backed up; if you want your data formally written to disk, with more control over WHEN it is written, take a look at Actors. This is a good (very simple) collection of examples of Service Fabric, reliable collections, and wiring up internal communications. The only thing funky about it is that it uses a lot of different 'recipes' to facilitate the back-end and communication from the back-end to the public (stateless) side.
I see you added to your question and changed the intent a little, so I'll tell you pointedly what I think you need for what you are really after. You want one or more Stateful Services (this is your data service layer). Each can be split into 3 components if you want: the stateful service itself, plus 2 class libraries, one for your service interface and one for your contracts (your data models, basically POCOs). You include the 2 class libraries in your stateful service and use them to create dictionary entries (probably something like new IReliableDictionary<...>) and bind the interface. You will want to extend the IService interface (grab the 'Service Fabric Remoting' NuGet package for the interface project you created); there is plenty of information out there on how to achieve remoting within Service Fabric, as it is a standard communication method. There is more, but simply building this would be a viable experiment and would effectively take the place of your database. You can formally persist the data to disk using Actors or the simple backup mechanism that comes with Service Fabric. Essentially I suggest you build this in order to confirm that you can completely remove the database from this scenario - you really don't need it. What I have described above takes the place of the db ONLY; without writing a front-end for it (one that uses remoting to communicate with your back-end), it would not be accessible to the public, at least not easily.
TL;DR - Basically I'm agreeing with what one of the other contributors is stating; my opinion is less humble, so I'll simply state it. Your application will be less complicated, faster and more reliable if you handle your data within Service Fabric. Still TL;DR? Ditch the db, my man. If you are really nervous about it only existing in memory, use Actors.

Where should I place business logic when using RavenDB

I am planning on building a single page application(SPA) using RavenDB as my data store.
I would like to start with the ASP.NET Hot Towel template for the SPA piece.
I will remove the EntityFramework/WebApi/Breeze components and replace with RavenDB for storage and ServiceStack for building the backend API.
Most current opinion seems to frown upon using any sort of repository or additional abstraction on top of RavenDB, and calls for using the RavenDB API directly inside controllers (in an MVC app).
I am assuming I should follow the same wisdom when using Raven with ServiceStack and make calls against IDocumentSession directly inside of my service implementations.
My concern lies with the fact that it seems my service implementation will become rather bloated by following this path. It also seems that I will often need to write the same code multiple times, for example, if I need to update a User document within several different web service endpoints.
It also seems likely that I will need to access Raven from other (future) pieces of my application. For example, I may need to add a console application that processes jobs from a queue, and this piece of the app may need to access data in Raven... but from the start, my only path to Raven will be through the web service API. Would I just plan to call the web API from this theoretical console app? That seems inefficient if they are potentially running on the same hardware.
Can anyone offer any advice on how to utilize Raven effectively within my web services and elsewhere while still following best practices for this document store? It would seem practical to create a middle business-logic tier that handles calls against Raven directly, allowing my web services to call methods within this tier. Does this make sense?
EDIT
Can anyone provide any recent samples of similar architecture?
FWIW, we're currently working on an app using ServiceStack and RavenDB. We're using a DDD approach and have our business logic in a rich Domain Layer. The architecture is:
Web App. Hosts the web client code (SPA) and the service layer.
Service Layer. Web services using ServiceStack with clean/fairly flat DTOs that are completely decoupled from the Domain objects. The Web Services are responsible for managing transactions and all RavenDB interaction. Most 'Command-ish' service operations consist of: a) Load domain object(s) (document(s)) identified by request, b) Invoke business logic, c) Transform results to response DTOs. We've augmented ServiceStack so that many Command-ish operations use an automatic handler that does all the above without any code required. The 'Query-ish' service operations generally consist of: a) Executing query(ies) against RavenDB, b) Transforming the query results to response DTOs (in practice this is often done as part of a), using RavenDB during query processing/indices/transformers). Business logic is always pushed down to the Domain Layer.
Domain Layer. Documents, which correspond to 'root aggregates' in DDD-speak, are completely database agnostic. They know nothing of how they are loaded/saved etc. Domain objects expose public GETTERs only and private SETTERs. The only way to modify state on domain objects is by calling methods. Domain objects expose public methods that are intended to be utilised by the Service Layer, or protected/internal methods for use within the domain layer. The domain layer references the Messages assembly, primarily to allow methods on our domain objects to accept complex request objects and avoid methods with painfully long parameter lists.
Messages assembly. Standalone assembly to support other native .Net clients such as unit-tests and integration tests.
As for other clients, we have two options. We can reference ServiceStack.Common and the Messages assembly and call the web services. Alternatively, if the need is substantially different and we wish to bypass the web services, we could create a new client app, reference the Domain Layer assembly and the Raven client and work directly that way.
In my view the repository pattern is an unnecessary and leaky abstraction. We're still developing but the above seems to be working well so far.
EDIT
A greatly simplified domain object might look something like this.
public class Order
{
    public string Id { get; private set; }
    public DateTime Raised { get; private set; }
    public Money TotalValue { get; private set; }
    public Money TotalTax { get; private set; }
    public List<OrderItem> Items { get; private set; }

    // Available to the service layer.
    public Order(Messages.CreateOrder request, IOrderNumberGenerator numberGenerator, ITaxCalculator taxCalculator)
    {
        Raised = DateTime.UtcNow;
        Id = numberGenerator.Generate();
        Items = new List<OrderItem>();

        foreach (var item in request.InitialItems)
            AddOrderItemCore(item);

        UpdateTotals(taxCalculator);
    }

    private void AddOrderItemCore(Messages.AddOrderItem request)
    {
        Items.Add(new OrderItem(this, request));
    }

    // Available to the service layer.
    public void AddOrderItem(Messages.AddOrderItem request, ITaxCalculator taxCalculator)
    {
        AddOrderItemCore(request);
        UpdateTotals(taxCalculator);
    }

    private void UpdateTotals(ITaxCalculator taxCalculator)
    {
        TotalTax = Items.Sum(x => taxCalculator.Calculate(this, x));
        TotalValue = Items.Sum(x => x.Value);
    }
}
There are two main parts to consider here.
Firstly, as you have already noted, if you go by the word of the more fanatical RavenDB fans it is some mythical beast which is exempt from the otherwise commonly accepted laws of good application design and should be allowed to permeate throughout your application at will.
It depends on the situation of course but to put it simply, if you would structure your application a certain way with something like SQL Server, do the same with RavenDB. If you would have a DAL layer, ORM, repository pattern or whatever with a SQL Server back-end, do the same with RavenDB. If you don't mind leaky abstractions, or the project is small enough to not warrant abstracting your data access at all, code accordingly.
The main difference with RavenDB is that you're getting a few things like unit of work and the ORM for 'free', but the overall solution architecture shouldn't be that different.
Second, connecting other clients. Why would a console app - or any other client for that matter - access your RavenDB server instance any differently from your web site? Even if you run the server in embedded mode inside your ASP.NET application, you can still connect other clients to it with the same RavenDB.Client code. You shouldn't need to touch the web service API at all.
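For example, a console worker could talk to the same server with just the client library. Treat this as a sketch: the property names vary by RavenDB client version (older clients use Url/DefaultDatabase, newer ones use Urls/Database), and the User type stands in for one of your own documents.

// Sketch: a console app connecting straight to the RavenDB server,
// bypassing the web service API entirely.
using (var store = new DocumentStore
{
    Url = "http://your-raven-server:8080",  // newer clients: Urls = new[] { "..." }
    DefaultDatabase = "MyApp"               // newer clients: Database = "MyApp"
}.Initialize())
using (var session = store.OpenSession())
{
    var user = session.Load<User>("users/1");
    user.Disabled = true;                   // User is your own domain type
    session.SaveChanges();
}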
