ASP.NET Boilerplate Domain Layer appears to contain non-Domain classes - c#

I have recently done some analysis of ASP.Net Boilerplate (https://aspnetboilerplate.com). I have noticed that the domain layer (MyProject.Core) has folders for the following (these are created by default):
Authorization
Configuration
Editions
Features
Identity
Localization
MultiTenancy
etc
Why would you put all of this in the Domain Layer of an application? From what I can see, I believe most of this code should be found in the Application Layer (which could also be called the service layer).

Good question, if you only look at the folder names. But I suspect you haven't investigated the source code in those folders much.
First of all, I'm not saying it's the best solution architecture. We are constantly improving it, and it may have faults. Note that our approach is a mix of best practices and pragmatism. I will try to explain it briefly.
You are talking about this project: https://github.com/aspnetboilerplate/module-zero-core-template/tree/master/aspnet-core/src/AbpCompanyName.AbpProjectName.Core So, let's investigate the folders:
Localization
It does not include any localization logic (that is done at the framework level, in ABP, so it belongs to the infrastructure layer). It just defines the localization texts.
While it could normally be moved to the web layer quite easily (the Core project has no direct dependency on it), we put it in the Core layer because we think it may be needed in another application too. Imagine you have a Windows Service that references only the .Core project and wants to use the localization texts, say to send an email to a user in his own language. A Windows Service normally should not reference the web layer. So we took a pragmatic approach here. We could put localization into yet another dll project, but that would make the solution more complicated.
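For example, a Windows Service that references only the .Core project could resolve those texts through ABP's ILocalizationManager. The sketch below assumes a localization source named "MyProject" defined in Core and a hypothetical "WelcomeEmailSubject" text key:

```csharp
using System.Globalization;
using Abp.Localization; // ABP's localization abstraction, resolved via DI

public class WelcomeMailer
{
    private readonly ILocalizationManager _localizationManager;

    public WelcomeMailer(ILocalizationManager localizationManager)
    {
        _localizationManager = localizationManager;
    }

    public string BuildSubject(CultureInfo userCulture)
    {
        // "MyProject" is the localization source name defined in the Core project;
        // the text itself lives in the Localization folder's dictionaries.
        return _localizationManager
            .GetSource("MyProject")
            .GetString("WelcomeEmailSubject", userCulture);
    }
}
```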
Authorization
Mainly includes the User and Role entities and the UserManager and RoleManager domain classes. Similar to localization, it does not include the actual authorization logic. It also includes a few other classes, but they do not do much. We thought putting these here would help if we had more application layers; as you know, every application can have its own application layer as a best practice.
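To give an idea of the shape of these types, here is a heavily simplified sketch (the real template classes derive from ABP/ASP.NET Identity base classes, and the repository abstraction shown is hypothetical):

```csharp
// Entities: plain domain state, no authorization plumbing.
public class User
{
    public long Id { get; set; }
    public string UserName { get; set; }
    public string EmailAddress { get; set; }
}

public class Role
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Domain service: business rules about users, independent of any web framework.
public class UserManager
{
    private readonly IUserRepository _users;

    public UserManager(IUserRepository users)
    {
        _users = users;
    }

    public void ChangeEmail(User user, string newEmail)
    {
        user.EmailAddress = newEmail; // validation/invariants would live here
        _users.Update(user);
    }
}

public interface IUserRepository
{
    void Update(User user);
}
```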
Configuration
AppConfigurations is here to share 'configuration reading' code between different apps (the Migrator and the Web app). Again, this could live in another "Shared Utils" library, but we wanted to keep the solution structure balanced, so that it reflects the major layers and structures yet is not too complicated for intermediate-level developers.
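In practice this is a small static helper; a rough sketch using Microsoft.Extensions.Configuration (the template's actual implementation may differ in details):

```csharp
using Microsoft.Extensions.Configuration;

// Shared by the Web host and the Migrator console app so both read
// appsettings.json the same way.
public static class AppConfigurations
{
    public static IConfigurationRoot Get(string path, string environmentName = null)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(path)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

        if (!string.IsNullOrEmpty(environmentName))
        {
            builder.AddJsonFile($"appsettings.{environmentName}.json", optional: true);
        }

        return builder.Build();
    }
}
```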
Editions
Just includes the EditionManager class, which is a domain service for edition management.
Features
Just includes FeatureValueStore, which is a repository-like adapter class. See its code; it's practically empty.
MultiTenancy
Includes the Tenant entity and the TenantManager class, which are already parts of the domain layer. Again, nothing here involves infrastructure-related multi-tenancy features (like data filtering or determining the current tenant).
... and so on...
So, please don't judge by the folder names alone; look into the project more deeply. Some code could be moved to upper layers or a utils library, but I think the general structure is a good starting point for a DDD-architected application.

What you see is called Module Zero. It aims to implement all the fundamental concepts of the ASP.NET Boilerplate framework, such as tenant management (multi-tenancy), role management, user management, session, authorization (permission management), setting management, language management, audit logging, and so on.
Module Zero defines entities and implements domain logic (the domain layer) because it is part of the configuration context of your system.

Related

I have Domain Driven Design Concerns on my VS project

I am dealing with some architectural design concerns that need to be sorted out. My current architecture can be seen below. Each box is a project in Visual Studio, and together they form the solution.
My core application is coded in the WestCore.AppCore context, and I have another project group called CSBINS (which includes system web service integrations). CSBINS is a merchant product, which is why I found it better to separate it into another project and have it depend only on the most commonly used interfaces from WestCore.AppCore.
Right now WestCore.Api does not have any logic in it. All the application logic is handled inside AppCore and AppCore.Csbins
The problem is that I sometimes need to use WestCore.AppCore.Csbins services inside WestCore.AppCore, which causes a cross-referencing issue.
The best approach I can think of right now is to add endpoint services to WestCore.Api and move the cross-project logic into those endpoint services.
However, I would like to get suggestions and hear about design concerns before going further with this, since I am sure there are many possible design choices.
I am also considering moving the common AppCore interfaces and classes to WestCore.AppCore.Common so that I won't need to reference the whole WestCore.AppCore project from WestCore.AppCore.Csbins.
Why are you using services inside other services? This is probably a bad thing and needs refactoring.
Those Core projects look like they are application services projects; it might help to call them 'WestCore.ApplicationServices', since 'Core' implies they belong at the domain level.
It sounds like you need to implement an anti-corruption layer to integrate with the third-party vendor rather than creating a whole new 'domain' context. This should be as straightforward as defining an interface in your domain layer (personally, I use the *Gateway suffix to identify interfaces that interact with external systems).
Not knowing anything about your domain, I would probably start with something that looks like this (I've assumed CSBINS is some sort of payment or accounting gateway):
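A sketch of what such a gateway could look like, under that payment-gateway assumption (all names are hypothetical; this is not the original answer's code):

```csharp
// WestCore.AppCore (domain layer): the contract the domain depends on.
public interface IPaymentGateway
{
    PaymentReceipt TakePayment(PaymentRequest request);
}

public class PaymentRequest
{
    public decimal Amount { get; set; }
    public string CustomerReference { get; set; }
}

public class PaymentReceipt
{
    public string TransactionId { get; set; }
    public bool Succeeded { get; set; }
}

// WestCore.AppCore.Csbins (integration project): the anti-corruption layer.
// It adapts the vendor's web service to the domain contract so vendor types
// never leak into the domain.
public class CsbinsPaymentGateway : IPaymentGateway
{
    public PaymentReceipt TakePayment(PaymentRequest request)
    {
        // Call the CSBINS web service here and translate its response
        // into the domain's PaymentReceipt.
        return new PaymentReceipt { TransactionId = "pending", Succeeded = true };
    }
}
```

With this split, WestCore.AppCore references only IPaymentGateway, and the implementation is supplied via dependency injection, removing the circular reference.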
Also, I would strongly recommend avoiding "Common" and "Shared" libraries at the domain level; you shouldn't need them. Your interfaces and classes are DOMAIN objects and belong in your DOMAIN library. The application services should use the domain models directly and have implementations of the domain interfaces supplied via dependency injection. Hopefully your domain models are fleshed out enough that your application service classes are just orchestration wrappers.

DDD - Application Service location with multiple "entry points"

Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
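For concreteness, such an application service usually looks something like this minimal sketch (repository, unit-of-work, and aggregate names are all hypothetical):

```csharp
using System;

// Application service: orchestrates one business use case end to end.
public class CancelOrderService
{
    private readonly IOrderRepository _orders;  // fetches the aggregate
    private readonly IUnitOfWork _unitOfWork;   // infrastructure concern (transaction)

    public CancelOrderService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void CancelOrder(Guid orderId, string reason)
    {
        var order = _orders.Get(orderId);  // fetch the aggregate via the repository
        order.Cancel(reason);              // business logic lives on the aggregate
        _unitOfWork.Commit();              // manage the transaction
    }
}

public interface IOrderRepository { Order Get(Guid id); }
public interface IUnitOfWork { void Commit(); }

public class Order
{
    public Guid Id { get; private set; }
    public bool IsCancelled { get; private set; }
    public void Cancel(string reason) => IsCancelled = true;
}
```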
When reading the books by Eric Evans, Vaughn Vernon, and Scott Millett, you can find great examples of how to separate your projects, but I never found a clear answer for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
Powershell cmdlets for administrative users for maintenance actions
Where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
In the first option, you benefit from code reuse when multiple entry points share the same use cases. The same goes for unit tests. However, you theoretically have to deploy an Application Service DLL that carries too many features for some entry points.
In the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically keep control over infrastructure concerns like database transactions, which could differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, that should always share the same model (to ensure that all paths enforce the current business invariant) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely -- run "the model" in a "microservice", and deploy your three interfaces above that each uses a common service client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
you theoretically have to deploy an Application Service DLL having too much features for some entry points.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost effective than trying to deploy multiple assemblies with different subsets of the same model.
Personally, I'd start with a fat microservice, a well-designed API, and fat clients in each of the composition roots above, and then, if necessary, replace the fat clients with thinner, more specialized ones if the trade-offs support that choice.
Just to be sure I understand one of your points: are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition roots") should call this API?
Yes, that's a fair description of the proposal, except I want to be clearer on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea with the seam as the interface between two libraries (still running in the same process). Microservices are this idea with the two libraries running in different processes.
You get different trade-offs. For instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out, (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service, and (c) you can also scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
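One way to make that seam explicit in code is a small sketch like the following (all names hypothetical): the model/application-service library owns a use-case interface, and each composition root (REST API, daemon, cmdlet) depends only on that interface.

```csharp
using System;

// Model / application-service library: "this part is where the model lives".
public interface IAccountUseCases
{
    void CloseAccount(Guid accountId);
}

// One composition root (e.g. the REST API): "that part is where the specialization lives".
public class AccountsController
{
    private readonly IAccountUseCases _useCases;

    public AccountsController(IAccountUseCases useCases) => _useCases = useCases;

    // HTTP-specific concerns live here; the business behaviour stays behind the seam.
    public void Delete(Guid id) => _useCases.CloseAccount(id);
}
```

Option A ships the interface and its implementation as a separate DLL, option B keeps them in the same DLL as the entry point, and a microservice puts a network boundary at that same seam.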

C# Web APIs: package by feature

I would like to split my C# Web API project such that a given feature (or set of features) is maintained in a separate project. Ideally, I would still like to maintain layered separation within each feature package.
Example: I would like to ensure there is a separate API project for each main feature (e.g. a business suite would be separated into a sales API, an inventory API, a payroll API, etc.). Each feature would be divided into API (top layer), Models (DTOs/ViewModels sent and received from/to the API), Service (business logic), and Tests. There could be more layers, e.g. separate layers for entity classes.
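For example, the sales feature alone might be sliced roughly like this (a sketch using ASP.NET Core conventions; all names are hypothetical):

```csharp
using Microsoft.AspNetCore.Mvc;

// Sales.Api project: top layer, a thin controller.
[ApiController]
[Route("api/sales/orders")]
public class SalesOrdersController : ControllerBase
{
    private readonly ISalesOrderService _service;
    public SalesOrdersController(ISalesOrderService service) => _service = service;

    [HttpGet("{id}")]
    public SalesOrderDto Get(int id) => _service.GetOrder(id);
}

// Sales.Models project: DTOs/ViewModels exchanged over the API.
public class SalesOrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Sales.Service project: business logic behind the feature.
public interface ISalesOrderService
{
    SalesOrderDto GetOrder(int id);
}
```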
There is a certain amount of shared code that must be reused within these projects, both on the top layer (such as error handling, logging etc.) and other layers as well (database connections, repositories...).
Does anyone have a good example of how to do this separation, such that everything is DRY, while maintaining a clear separation of features?
Best regards,
Daniel
What you're trying to achieve sounds very akin to a microservice architecture. Here are some good links that describe what this means:
http://blog.cleancoder.com/uncle-bob/2014/09/19/MicroServicesAndJars.html
http://martinfowler.com/articles/microservices.html#CharacteristicsOfAMicroserviceArchitecture
The idea is to build your system in a modularised manner, where the components can talk to each other, usually over HTTP. That seems to be what you want to achieve by having "features" that each expose an API. There is a whole heap of material on this, so I'd read around it.
As for sharing code between them, this can be tricky. If you're thinking of this in terms of a modularised system, perhaps the shared stuff should be its own "feature"/"component"/"service"/"module" (whatever you want to call it). Or perhaps there is some stuff you just want to pull out into its own project; if so, consider building a NuGet package to share common code across the components.

Where should logging logic sit within a DDD solution?

I've created a custom filter for my MVC application, [LogAttribute]. Action methods are decorated with this, and it is responsible for creating a LogEntry object to pass into some type of provider - ILoggerProvider.
My question is, where should ILoggerProvider and its implementations sit (I'll be wanting to use a DI technology with it)? Should they go in the domain model, the UI project, or a separate class?
Unless your software's primary function is Logging or Auditing, it should be an Infrastructure LoggingService.
And unless your logging implementation is tightly-coupled with your Domain Objects (I hope it isn't!), I would suggest a completely separate assembly.
I'd generally contend that ILoggingProvider should sit within the domain model for a few reasons. From a logistics and sanity perspective, your domain classes probably need to reference the logger. From a DDD perspective, given the world of SOX and such we live in, one can argue that logging is a core domain feature for regulatory compliance.
Now, the implementations can definitely sit off in your infrastructure projects, no need to clutter the model with all that.
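As a minimal sketch of that split (names hypothetical): the abstraction and the log entry stay with the domain, a file-based implementation sits in an infrastructure assembly, and DI wires them together so the MVC filter only ever sees the interface.

```csharp
using System;

// Domain assembly: the abstraction the domain (and the [Log] filter) depend on.
public class LogEntry
{
    public DateTime OccurredAt { get; set; }
    public string Action { get; set; }
    public string Message { get; set; }
}

public interface ILoggerProvider
{
    void Write(LogEntry entry);
}

// Infrastructure assembly: one possible implementation, kept out of the domain.
public class FileLoggerProvider : ILoggerProvider
{
    private readonly string _path;

    public FileLoggerProvider(string path) => _path = path;

    public void Write(LogEntry entry)
    {
        System.IO.File.AppendAllText(
            _path,
            $"{entry.OccurredAt:o} {entry.Action}: {entry.Message}{Environment.NewLine}");
    }
}
```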
Since logging has nothing to do with the UI, that is definitely the wrong place.
The domain model is there for data representation in my opinion.
So I would do it in a separate class or even a separate project.
I have an MVC application where I have a separate project for a logging service. In my structure it sits in the bottom-most layer (data access), since it logs directly to files and all the other services use it. I also use DI with it, via the MEF framework.
This has been working fine for me for a while, and I haven't wanted to change it since.
I had other solutions, which I abandoned after a while because they were not as elegant to use as my current one.
Hope that helps you with your decision.

Shared Object Library with Persistence

I have a quick question that I am hoping is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection (I can see concurrency issues here, and it doesn't feel right)?
Do I go client -> server and then deploy the "client library" for use by any application?
Or would a Web/WCF service be better suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and had a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from the code, and that's easily supported by a .NET .config file or application settings.
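For instance, with a classic .config file the connection string stays out of compiled code and is read by name at runtime (a small sketch; "EmployeeDb" is a hypothetical name):

```csharp
using System.Configuration; // requires a reference to System.Configuration

public static class Connections
{
    // app.config / web.config would contain a matching
    // <connectionStrings><add name="EmployeeDb" connectionString="..." /></connectionStrings> entry.
    public static string EmployeeDb =>
        ConfigurationManager.ConnectionStrings["EmployeeDb"].ConnectionString;
}
```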
I often prefer a small core library containing the business logic, concepts, and flows, although each of those can be broken out. Within that concept you can still separate business from data access into different assemblies, so you can swap in a new kind of data access while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example
textual/Console I-O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns to bend software to your will, along with related implementations like the Microsoft Enterprise Library, and by loosening the coupling with dependency injection, e.g. Ninject (one of many), or other inversion-of-control techniques.
I usually prefer to have a middle-tier layer (some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so you can control the number of connections or change the database schema in a way that is transparent to the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a DLL that wraps the connection to the service and performs some additional processing on the client side.
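A minimal sketch of that middle tier as a WCF contract might look like this (names are hypothetical); clients reference only the contract assembly, never the database:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    EmployeeDto GetEmployee(int employeeId);
}

[DataContract]
public class EmployeeDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Office { get; set; }
}
```

The service implementation owns the database connection, so connection counts and schema changes stay server-side.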
It depends on how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you may mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx), as sketched below.
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
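The first option with LINQ to SQL mapping attributes might look roughly like this (a sketch; the table and column names are hypothetical):

```csharp
using System.Data.Linq.Mapping;

// Entity mapped directly to a hardcoded table in the shared schema.
[Table(Name = "Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string FullName { get; set; }

    [Column(Name = "OfficeLocation")]
    public string Office { get; set; }
}
```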
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and the layers are fully decoupled from each other, there is no problem adding new entities; but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, and so on. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to the components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can be partially solved with generics, but you need a special design of your data access layer that allows you to get lists of generic entities, get an entity by id, and so on.
Database integration. It may be hard to manage database changes if some entities are developed separately from the others, especially by another team.
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.
