What is considered appropriate design for .asmx or WCF service classes regarding how many files, lines of code, responsibilities, etc.? Do most people publish separate .asmx service files for the different CRUD methods for every class?
Generally speaking, a service should encapsulate a set of common operations. Regardless of whether you use ASMX or WCF, you shouldn't be creating one "service" for each operation. The general idea behind service-oriented architecture (SOA) is to model real-world business behavior. To give you a dumb, but hopefully effective, example...think of a waitress at a restaurant. The waitress provides a service to customers in the form of taking orders, serving those orders, providing drink refills, providing condiments, and finally handling payment. The service the waitress offers is not a single operation; it's an aggregation of related operations.
However, it doesn't stop there. The true nature of SOA is that any given service is likely to rely on other services. The waitress cannot do her job without relying on the services of the cook (who provides meals), the counter (where she picks up condiments and drinks), and the services provided by the restaurant building itself. There are also some fundamental differences between the kind of service provided by a waitress and that provided by a cook. To bring it down to technical programming terms...a Waitress is a task service, but a Cook is an entity (or CRUD) service. The waitress handles higher-level operations that provide useful functionality to clients, while the cook handles lower-level operations that provide fine-grained and complex functionality only to other employees of the restaurant.
I can't really give you a specific answer to your question, other than to say just organize your services however they logically fit. It's probably not a good practice to have one operation per service...however, it is not unheard of for a service to have just one operation. Task services often have just one operation. Entity services often have many operations, usually CRUD-based, but sometimes additional operations as well. There are also Utility services that provide the lowest-level, infrastructural operations (back to the restaurant, utility services would be like stoves, grills, the register, etc.). If you model your services after actual business concepts, then the operations they expose and their dependencies on each other should eventually become clear.
For some GREAT information on SOA, check out the SOA series by Thomas Erl (Prentice Hall), as they are the definitive resource for implementing a service-oriented enterprise.
First of all, the best practice for new development is to use WCF. See Microsoft: ASMX Web Services are a “Legacy Technology”.
Second, in SOA, one tries to create services with coarsely-grained operations. For instance, you would want an OrderProduct operation, rather than StartOrder, AddLineItem, AddOption, FinishOrder operations. The OrderProduct operation might accept an OrderDTO as follows:
public class OrderDTO {
    public CustomerInfo Customer {get;set;}
    public DateTime OrderTime {get;set;}
    public DateTime ShipTime {get;set;}
    public List<LineItemDTO> LineItems {get; private set;}
}

public class LineItemDTO {
    public int LineItemNumber {get;set;}
    public string ProductName {get;set;}
    public int Quantity {get;set;}
    public Decimal Amount {get;set;}
    public Decimal ExtendedAmount {get;set;}
}
Rather than a StartOrder method that just creates an empty order, followed by AddLineItem calls to add individual line items (as you might do from a desktop application), I'm recommending a single OrderProduct method that accepts an OrderDTO, which will have a collection of LineItemDTO. You'll send the entire order all at once, add all the pieces in a transaction, and be done.
Finally, I'd say that you should still separate into business and data layers. The service layer should be concerned only with the services side of things, and will call on your business logic layer in order to get things done.
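To make that concrete, here is a minimal sketch of how the two points might fit together; IOrderService, OrderManager, and OrderConfirmationDTO are illustrative names, not part of the original answer:

using System.ServiceModel;

// Coarse-grained contract: one operation accepts the whole order at once.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderConfirmationDTO OrderProduct(OrderDTO order);
}

// The service layer stays thin; business and data work is delegated.
public class OrderService : IOrderService
{
    private readonly OrderManager _orderManager = new OrderManager();

    public OrderConfirmationDTO OrderProduct(OrderDTO order)
    {
        // The business layer validates the order and persists all
        // line items within a single transaction.
        return _orderManager.PlaceOrder(order);
    }
}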
Grab this book from wherever you can:
Service Oriented Architecture (SOA): Concepts, Technology, and Design
Answers each of your questions and many, many more you're bound to run into during your implementation.
I have been learning about modular monolith project structure in this article: https://codewithmukesh.com/blog/modular-architecture-in-aspnet-core
Most of it makes sense to me but something I don't quite get is:
Cross Module communication can happen only via Interfaces/events/in-memory bus. Cross Module DB Writes should be kept minimal or avoided completely.
How exactly does that cross-module communication look?
Let's say I have 3 modules:
Product
User
Security
My security module registers an endpoint for DisableUser. It's this endpoint's job to update a User and every Product associated with the user with a disabled status.
How does the Security module call the User and Product modules' update-status methods within a single unit of work?
My understanding is that this pattern is intended to make it easier to extract a module into a microservice at a later date, so I guess modeling the communication as a task of some sort makes it easier to switch to a message broker later, but I am just not sure how this is supposed to look.
My example is obviously contrived; my main point is: how do modules communicate with each other when reads and writes are involved?
Theory
There are a lot of misunderstandings about terminology in such questions, so let's distinguish two completely different architectures: monolith architecture and microservices architecture. The architecture that stands between the two is modular monolith architecture.
Monolith architecture usually has one huge problem: high coupling and low cohesion, because you have no strong mechanisms to avoid them. So programmers began thinking about new ways of structuring applications that make it genuinely hard to fall into the high-coupling, low-cohesion trap.
Microservices architecture was one solution (despite the other problems it solves too). The main point of microservices architecture is separating services from each other to avoid high coupling (because setting up communication between services is not as easy as within a monolith).
But programmers can't move from one architecture to a completely different one in "one click", so one way (though not the only way) to get from a monolith to microservices is to build a modular monolith first (solving the high-coupling, low-cohesion problem while still inside the monolith) and then extract the modules into microservices easily.
Communication
To keep coupling low, we should focus on the communication between modules.
Let's work with the sample from your question.
Imagine we have this monolith architecture:
We definitely see the high-coupling problem here. Let's say we want to make it more modular. To do that, we need to add something between the modules to separate them from each other; we also want the modules to communicate, so the one thing we must add is a bus.
Something like that:
P.S. It could also be a completely separate, out-of-process bus rather than an in-memory one (like Kafka or RabbitMQ).
Your main question was about how to set up communication between modules; there are a few ways to do that.
Communication via interfaces (synchronous way)
Modules can call each other directly (synchronously) through interfaces. An interface is an abstraction, so we don't know what stands behind it; it could be a mock or a real working module. This means a module knows nothing about the other modules; it only knows about the interfaces it communicates with.
public interface ISecurityModule { }
public interface IUserModule { }
public interface IProfileModule { }
public class SecurityModule : ISecurityModule
{
public SecurityModule(IUserModule userModule) { } // Does not know about UserModule class directly
}
public class UserModule : IUserModule
{
public UserModule(IProfileModule profileModule) { } // Does not know about ProfileModule class directly
}
public class ProfileModule : IProfileModule
{
public ProfileModule(ISecurityModule securityModule) { } // Does not know about SecurityModule class directly
}
You can certainly communicate through interface method calls, but this solution doesn't help much with the high-coupling problem.
Communication via bus (asynchronous way)
A bus is a better way to build communication between modules because it forces you to use events/messages/commands for communication. You can't call methods directly anymore.
To achieve that you should use a bus (a separate broker or an in-memory library). I recommend checking other questions (like this one) to find the right way to build such communication for your architecture.
But be aware: using a bus makes communication between modules asynchronous, so it forces you to rewrite the modules' inner behaviour to support this style of communication.
About your example with the DisableUser endpoint: the SecurityModule could simply publish a command/event/message on the bus saying the user was disabled; the other modules can then handle that command/event/message and "disable" the user using their own module logic.
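As a rough sketch of what that might look like (the IEventBus abstraction here is hypothetical; a real project would use MediatR, an in-memory bus, or a broker like Kafka/RabbitMQ behind the same shape):

using System.Threading.Tasks;

// Hypothetical bus abstraction shared by all modules.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event);
}

// Integration event raised by the Security module.
public record UserDisabledEvent(string UserId);

// The Security module publishes; it knows nothing about subscribers.
public class DisableUserHandler
{
    private readonly IEventBus _bus;

    public DisableUserHandler(IEventBus bus) => _bus = bus;

    public async Task DisableUserAsync(string userId)
    {
        // ...disable the user in the Security module's own store first...
        await _bus.PublishAsync(new UserDisabledEvent(userId));
    }
}

// The Product module reacts with its own logic and its own writes.
public class ProductUserDisabledHandler
{
    public Task HandleAsync(UserDisabledEvent evt)
    {
        // Mark every product owned by evt.UserId as disabled here.
        return Task.CompletedTask;
    }
}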
What's next
The next step is a microservices architecture, with completely separate services communicating through a separate bus, each with its own database too:
Example
Not long ago, after taking a course, I completed a project built entirely on a microservices architecture.
Check it here if you need a good example of a microservices architecture.
Images were created using Excalidraw
At first glance, I think one approach is to use Mediator events, since the project already uses that. It would work well and keep everything separate.
To define a Mediator event, check this.
You define your events in the shared core; for your example:
using MediatR;

public class UserDisabled : INotification
{
    public string UserId { get; set; }
}
From the User module you publish the event when the user gets disabled:
await mediator.Publish(new UserDisabled{UserId = "Your userId"});
And finally, declare event handlers in every module that needs to react to the event:
public class UserDisabledHandler : INotificationHandler<UserDisabled>
{
    public UserDisabledHandler()
    {
        // You can use dependency injection here
    }

    public Task Handle(UserDisabled notification, CancellationToken cancellationToken)
    {
        throw new NotImplementedException();
    }
}
However, it is worth noting that this won't work if you want to switch to actual microservices. I'm not very familiar with microservices, but I think you need some form of event bus, and that's where microservices become complicated.
There is information about that in this Microsoft book.
I am planning on building a single-page application (SPA) using RavenDB as my data store.
I would like to start with the ASP.NET Hot Towel template for the SPA piece.
I will remove the EntityFramework/WebApi/Breeze components and replace with RavenDB for storage and ServiceStack for building the backend API.
Most current opinion seems to frown upon using any sort of repository or additional abstraction on top of RavenDB, calling instead for use of the RavenDB API directly inside controllers (in an MVC app).
I am assuming I should follow the same wisdom when using Raven with ServiceStack and make calls against IDocumentSession directly inside of my service implementations.
My concern lies with the fact that it seems my service implementation will become rather bloated by following this path. It also seems that I will often need to write the same code multiple times, for example, if I need to update a User document within several different web service endpoints.
It also seems likely that I will need to access Raven from other (future) pieces of my application. For example, I may need to add a console application that processes jobs from a queue, and this piece of the app may need to access data within Raven...but from the start, my only path to Raven will be through the web service API. Would I just plan to call the web API from this theoretical console app? Seems inefficient if they are potentially running on the same hardware.
Can anyone offer any advice on how to utilize Raven effectively within my web services and elsewhere while still following best practices when using this document store? It would seem practical to create a middle business logic tier that handles calls against Raven directly...allowing my web services to call methods within this tier. Does this make sense?
EDIT
Can anyone provide any recent samples of similar architecture?
FWIW, we're currently working on an app using ServiceStack and RavenDB. We're using a DDD approach and have our business logic in a rich Domain Layer. The architecture is:
Web App. Hosts the web client code (SPA) and the service layer.
Service Layer. Web services using ServiceStack with clean/fairly flat DTOs that are completely decoupled from the Domain objects. The web services are responsible for managing transactions and all RavenDB interaction. Most 'Command-ish' service operations consist of: a) load the domain object(s) (document(s)) identified by the request, b) invoke the business logic, c) transform the results to response DTOs (see the sketch after this list). We've augmented ServiceStack so that many Command-ish operations use an automatic handler that does all of the above without any code required. The 'Query-ish' service operations generally consist of: a) executing queries against RavenDB, b) transforming the query results to response DTOs (in practice this is often done as part of (a), using RavenDB during query processing/indices/transformers). Business logic is always pushed down to the Domain Layer.
Domain Layer. Documents, which correspond to 'root aggregates' in DDD-speak, are completely database agnostic. They know nothing of how they are loaded/saved etc. Domain objects expose public GETTERs only and private SETTERs. The only way to modify state on domain objects is by calling methods. Domain objects expose public methods that are intended to be utilised by the Service Layer, or protected/internal methods for use within the domain layer. The domain layer references the Messages assembly, primarily to allow methods on our domain objects to accept complex request objects and avoid methods with painfully long parameter lists.
Messages assembly. Standalone assembly to support other native .Net clients such as unit-tests and integration tests.
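To illustrate, a greatly simplified 'Command-ish' service operation might look something like this. Treat it as a sketch rather than our actual code: the request/response shapes (request.OrderId, Messages.AddOrderItemResponse) are assumptions, and exact ServiceStack and RavenDB namespaces vary by version.

using Raven.Client.Documents.Session;
using ServiceStack;

public class OrderService : Service
{
    // Injected by the IoC container.
    public IDocumentSession RavenSession { get; set; }

    public object Post(Messages.AddOrderItem request)
    {
        // a) Load the domain object (document) identified by the request.
        var order = RavenSession.Load<Order>(request.OrderId);

        // b) Invoke business logic; all rules live in the Domain Layer.
        order.AddOrderItem(request, TryResolve<ITaxCalculator>());

        // c) The session is the unit of work; commit and build the response DTO.
        RavenSession.SaveChanges();
        return new Messages.AddOrderItemResponse { OrderId = order.Id };
    }
}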
As for other clients, we have two options. We can reference ServiceStack.Common and the Messages assembly and call the web services. Alternatively, if the need is substantially different and we wish to bypass the web services, we could create a new client app, reference the Domain Layer assembly and the Raven client and work directly that way.
In my view the repository pattern is an unnecessary and leaky abstraction. We're still developing but the above seems to be working well so far.
EDIT
A greatly simplified domain object might look something like this.
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string Id { get; private set; }
    public DateTime Raised { get; private set; }
    public Money TotalValue { get; private set; }
    public Money TotalTax { get; private set; }
    public List<OrderItem> Items { get; private set; }

    // Available to the service layer.
    public Order(Messages.CreateOrder request, IOrderNumberGenerator numberGenerator, ITaxCalculator taxCalculator)
    {
        Raised = DateTime.UtcNow;
        Id = numberGenerator.Generate();
        Items = new List<OrderItem>();
        foreach (var item in request.InitialItems)
            AddOrderItemCore(item);
        UpdateTotals(taxCalculator);
    }

    private void AddOrderItemCore(Messages.AddOrderItem request)
    {
        Items.Add(new OrderItem(this, request));
    }

    // Available to the service layer.
    public void AddOrderItem(Messages.AddOrderItem request, ITaxCalculator taxCalculator)
    {
        AddOrderItemCore(request);
        UpdateTotals(taxCalculator);
    }

    private void UpdateTotals(ITaxCalculator taxCalculator)
    {
        TotalTax = Items.Sum(x => taxCalculator.Calculate(this, x));
        TotalValue = Items.Sum(x => x.Value);
    }
}
There are two main parts to consider here.
Firstly, as you have already noted, if you go by the word of the more fanatical RavenDB fans it is some mythical beast which is exempt from the otherwise commonly accepted laws of good application design and should be allowed to permeate throughout your application at will.
It depends on the situation of course but to put it simply, if you would structure your application a certain way with something like SQL Server, do the same with RavenDB. If you would have a DAL layer, ORM, repository pattern or whatever with a SQL Server back-end, do the same with RavenDB. If you don't mind leaky abstractions, or the project is small enough to not warrant abstracting your data access at all, code accordingly.
The main difference with RavenDB is that you're getting a few things like unit of work and the ORM for 'free', but the overall solution architecture shouldn't be that different.
Second, connecting other clients. Why would a console app - or any other client for that matter - access your RavenDB server instance any differently to your web site? Even if you run the server in embedded mode in your ASP.NET application, you can still connect other clients to it with the same RavenDB.Client code. You shouldn't need to touch the web service API directly.
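For example, a console client could connect directly with the same client library. The URLs, database name, and exact API shape below are assumptions; the RavenDB client API has changed across versions.

using Raven.Client.Documents;

class Program
{
    static void Main()
    {
        // Same RavenDB server the web app talks to; no web service API involved.
        using (var store = new DocumentStore
        {
            Urls = new[] { "http://localhost:8080" }, // assumed server URL
            Database = "MyApp"                        // assumed database name
        })
        {
            store.Initialize();

            using (var session = store.OpenSession())
            {
                // Reference the same domain layer assembly and work with
                // documents directly here, then session.SaveChanges().
            }
        }
    }
}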
I am using .NET 4 to create a small client-server application for a customer. Should I create one giant service that implements many contracts (IInvoice, IPurchase, ISalesOrder, etc.) or should I create many services, each running one contract on its own port? My question is specifically about the pros/cons of either choice. Also, what is the common way of answering this question?
My true dilemma is that I have no experience making this decision, and I have little enough experience with WCF that I need help understanding the technical implications of such a decision.
Don't create one large service that implements n-number of service contracts. These types of services are easy to create, but will eventually become a maintenance headache and will not scale well. Plus, you'll get all sorts of code merging conflicts if there's a development group competing for check-ins/check-outs.
Don't create too many services either. Avoid the trap of making your services too fine-grained. Try to create services based on a functionality. The methods exposed by these services shouldn't be fine-grained either. You're better off having fewer methods that do more. Avoid creating similar functions like GetUserByID(int ID), GetUserByName(string Name) by creating a GetUser(userObject user). You'll have less code, easier maintenance and better discoverability.
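As a hedged sketch of that consolidation (IUserService, UserQuery, and the User type are names invented for illustration):

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IUserService
{
    // One coarse-grained operation replaces GetUserByID and GetUserByName.
    [OperationContract]
    User GetUser(UserQuery query);
}

[DataContract]
public class UserQuery
{
    [DataMember] public int? Id { get; set; }     // set either Id...
    [DataMember] public string Name { get; set; } // ...or Name
}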
Finally, you're probably only going to need one port no matter what you do.
UPDATE 12/2018
Funny how things have changed since I wrote this. Now with the micro-services pattern, I'm creating a lot of services with chatty APIs :)
You would typically create different services for each main entity like IInvoice, IPurchase, ISalesOrder.
Another option is to separate queries from commands. You could have a command service for each main entity that implements business operations accepting only the data they need in order to perform the operation (avoid CRUD-like operations); and one query service that returns the data in the format required by the client. This means that the command part uses the underlying domain model/business layer, while the query service operates directly on the database (bypassing the business layer, which is not needed for querying). This simplifies your querying a lot and makes it more flexible (return only what the client needs).
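A rough sketch of that separation (all names here are illustrative):

using System.ServiceModel;

// Command side: business operations that accept only the data they need.
[ServiceContract]
public interface IInvoiceCommandService
{
    [OperationContract]
    void ApproveInvoice(ApproveInvoiceRequest request);

    [OperationContract]
    void CancelInvoice(CancelInvoiceRequest request);
}

// Query side: one service that reads the database directly and returns
// data already shaped for the client, bypassing the business layer.
[ServiceContract]
public interface IQueryService
{
    [OperationContract]
    InvoiceSummaryDTO[] GetInvoiceSummaries(InvoiceQuery query);
}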
In real-world applications you typically have one service contract for each main entity, so Invoice, Purchase, and SalesOrder would each have a separate ServiceContract.
However, each service contract will serve heterogeneous clients: Invoice, for example, might be called by the back office through a Windows application using netNamedPipeBinding or netTcpBinding, while at the same time a client application needs to call the same service using basicHttpBinding or wsHttpBinding. Basically, you need to create multiple endpoints for each service.
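For example, a minimal self-hosting sketch with two endpoints on one service (the addresses and type names are illustrative):

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(InvoiceService));

        // Back-office Windows clients on the LAN use the fast TCP binding.
        host.AddServiceEndpoint(typeof(IInvoiceService),
            new NetTcpBinding(), "net.tcp://localhost:8081/invoice");

        // External or interop clients call the same service over HTTP.
        host.AddServiceEndpoint(typeof(IInvoiceService),
            new BasicHttpBinding(), "http://localhost:8080/invoice");

        host.Open();
        Console.WriteLine("Service running. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}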
It seems that you are mixing up DataContract(s) and ServiceContract(s).
You can have one ServiceContract and many DataContract(s) and that would perfectly suit your needs.
The truth is that splitting up WCF services - or any services - is a balancing act. The principle is that you want to keep downward pressure on complexity while still considering performance.
The more services you create, the more configuration you will have to write. Also, you will increase the number of proxy classes you need to create and maintain on the client side.
Putting too many ServiceContracts on one service will increase the time it takes to generate and use a proxy. But if you only end up with one or two operations on a contract, you will have added complexity to the system with very little to gain. This is not a scientific prescription, but a good rule of thumb could be, say, about 10-20 OperationContracts per ServiceContract.
Class coupling is of course a consideration, but are you really dealing with separate concerns? It depends on what your system does, but most systems deal with only a few areas of concern, so splitting things up may not actually decrease class coupling that much anyway.
Another thing to remember, and this is ultra important: always make your methods as generic as possible. WCF deals in DataContracts for a reason. DataContracts mean that you can send any object to and from the server so long as the DataContracts are known.
So, for example, you might have 3 OperationContracts:
[OperationContract]
Person GetPerson(string id);
[OperationContract]
Dog GetDog(string id);
[OperationContract]
Cat GetCat(string id);
But, so long as these are all known types, you could merge them into one operation like:
[OperationContract]
IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
Ultimately, this is the most important thing to consider when designing service contracts. This applies to REST as well, if you are using DataContract serialization or a similar serialization method.
Lastly, go back over your ServiceContracts every few months and delete operations that are not getting used by the clients. This is another big one!
You should base the decision on the expected load, the extensibility needed, and the future perspective. As you wrote "small client server application for a customer", that does not give a clear idea of the intended use of the system in hand. Mr. Big's answer must be considered too.
You are most welcome to put forward further questions backed with specific data or particulars about the situation at hand. Thanks.
I'm consuming a SOAP web service that creates a separate service point and WSDL for each of its customers. I don't know why they do that. But e.g. if they have two clients A and B, the service designates two different service addresses with different WSDL addresses. These separate WSDLs are 90% the same objects and same functions, but some of them differ based on the type of the customer. Therefore the generated objects end up not being the same even though they work exactly the same way.
So in order to fetch the correct service, I store the name of the customer somewhere in a table ("A" or "B") and my program has to know which customer it's dealing with on every run. I don't want to have different programs for each customer. I just want my program to get the customer name and, based on that, understand which model and which controller functions it will use.
What is the design pattern(s) that will help me facilitate this issue?
Chances are, in the future there will be an additional customer, so I want my code to be as loosely coupled as possible.
I have always wanted to use design patterns correctly in my code so I guess it's time to do so. Should I use a Strategy Pattern? Can you briefly explain what is the best solution for this?
I would use two design patterns in your case. The first one would be the Facade pattern. Use the facade pattern to simplify the interface of the web services your application has to deal with. Make sure you only need to change the implementation of the facade when the webservice contract changes. Convert the objects from the service into objects under your control and call functions with names and parameters that fit your domain and abstraction level.
The second design pattern would be the Adapter pattern. In your case, you should determine whether you can define a common interface for both web services, that is, whether the 10% difference between the two services can be converted into one interface (of objects and/or functions) that you use in your application.
The facade would use adapters to convert the 10% difference into common objects and functions. After that, the facade uses the common objects and functions, as well as the other 90% of the web services, to supply a proper abstraction layer for your application.
If there are additional customers in the future, you'll most likely only need to add or modify an adapter.
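A minimal sketch of the combination (every type name here is invented for illustration):

using System.Collections.Generic;

// Common interface covering the ~90% shared behaviour.
public interface ICustomerOrderService
{
    OrderResult PlaceOrder(OrderRequest request);
}

// One adapter per customer-specific WSDL; each maps the common
// request/result types to that customer's generated proxy types.
public class CustomerAServiceAdapter : ICustomerOrderService
{
    public OrderResult PlaceOrder(OrderRequest request)
    {
        // Call customer A's endpoint via its generated proxy here.
        return new OrderResult();
    }
}

public class CustomerBServiceAdapter : ICustomerOrderService
{
    public OrderResult PlaceOrder(OrderRequest request)
    {
        // Customer B's slightly different proxy call goes here.
        return new OrderResult();
    }
}

// The facade picks the right adapter based on the stored customer name,
// so adding customer C later only means registering one more adapter.
public class OrderServiceFacade
{
    private readonly IDictionary<string, ICustomerOrderService> _adapters;

    public OrderServiceFacade(IDictionary<string, ICustomerOrderService> adapters)
    {
        _adapters = adapters;
    }

    public OrderResult PlaceOrder(string customerName, OrderRequest request)
    {
        return _adapters[customerName].PlaceOrder(request); // e.g. "A" or "B"
    }
}

public class OrderRequest { }
public class OrderResult { }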
I have a multi tiered SOA application and a database with over 100 tables. I'm using entity framework for my data layer which takes care of all CRUD operations.
I have 1 facade class which is hosted on a service, callable by client apps all over.
This facade class contains methods such as
public void DoSomething()
{
    // insert into table 1
    // insert into table 2
    // delete from table 3
    // more CRUD operations
}
And the facade class is basically full of loads of other methods similar to DoSomething().
So the client will basically create an instance of the facade class, and gain access to all these methods.
My question now is, is this the best practice for a facade pattern? I feel that the facade class is way too "heavy" and I'm not sure if it will affect the performance if my application scales bigger.
Will creating an instance of the facade class be a very expensive operation if I have loads of methods in it?
Well, Facade is highly adequate for SOA architectures, since it encapsulates a subsystem as an object and provides an aggregated interface for all business objects and services to your client. It also reduces the coupling in your architecture. I think your approach isn't heavy, and won't affect the scalability of your system.
Will creating an instance of the facade class be a very expensive operation if I have loads of methods in it?
Assuming those methods are not called during construction, they should have no impact on the 'weight' of the object. Methods belong to the type, not to each instance, so a method that isn't doing anything doesn't cost memory or cycles for practical purposes.
(as a side note, there are technical limits that don't add value to your question. see How many methods can a C# class have)
Make your facade work for you! A good practice is to assume you're handing it over to someone else and ask "does this neatly describe what the capabilities of my API are?". Sixty sounds like it's reaching the upper bound of my personal preference, but that's still only CRUD for 15 objects.
Think of the facade as a unified interface that frees you up to create more concise and sophisticated Services (which in turn should encapsulate even more concise repositories which in turn should encapsulate even more concise records of data in tables, BLOBS in buckets, etc.).