I have a project with the following structure:
Main Rest
Main Services
Second Rest
Second Services
Third Rest
Third Services
Fourth....
Data Layer
etc
In terms of dependency, all Rest layers can access the Main Services and their own Services, e.g. Second Rest can access Second and Main Services but not Third, and so on.
The issue I am facing now is that increasingly Second Services needs to access parts of the code in Third Services and so on.
The temptation is to move that code from Second Services into Main Services, but this does not seem right to me, and it is making me question this separation and whether it is useful at all.
I also don't think it is quite right to move all the code into one big Service and have everything access it from there.
One thing I did think of was an Inter-Project Service layer that could provide the link between the Service layers, but realistically I would need one of these for each of the Services to avoid circular references.
Finally, to complicate matters further, I have to think about Dependency Injection and the different implementations of interfaces across projects.
Has anyone experienced this kind of issue with a good solution?
Refactoring the whole project is not an option as I have to consider business needs and it does not make sense to do this financially.
I have experience refactoring a big ball of mud, where everything was entangled. One thing that I found most helpful was to invert dependencies for functions/methods too: if Rest2 calls Service2.Foo, which wants data from Service1, then it's Rest2's job to bring that data (sketched below).
To further enforce this approach, resist the temptation to write and use utility and convenience functions that make cross-service calls for you, or at least keep such functions private to each module.
This somewhat aligns with the idea of CQS, extrapolated even further: each function either does some job independently or orchestrates a whole team (as big as needed) of workers.
The noticeable downside of this approach is the increased size of the "flattened" top-level functions, but in my case that was a much smaller problem than the entanglement.
Ideally, this approach should yield a set of modules organized into a hierarchy of layers, where no module is allowed to call its siblings or higher layers. Non-cyclic data flows are much easier to reason about!
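A minimal sketch of that inversion, reusing the hypothetical names from above (Data, Result, GetData and HandleFoo are placeholders I've invented for illustration):

public class Service1
{
    public Data GetData(int id) => new Data(); // stands in for real work
}

public class Service2
{
    // Foo no longer reaches into Service1; the data is handed to it.
    public Result Foo(Data data) => new Result();
}

public class Rest2Controller
{
    private readonly Service1 _service1;
    private readonly Service2 _service2;

    public Rest2Controller(Service1 service1, Service2 service2)
    {
        _service1 = service1;
        _service2 = service2;
    }

    public Result HandleFoo(int id)
    {
        // Rest2 orchestrates: it fetches from Service1 and passes the result
        // into Service2, so the two services never reference each other.
        var data = _service1.GetData(id);
        return _service2.Foo(data);
    }
}

public class Data { }
public class Result { }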
Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
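In code, such an Application Service might look roughly like this (a sketch; the order example and every name in it are illustrative, not taken from the question):

using System;

public class Order
{
    public void Confirm() { /* business rules live on the Aggregate */ }
}

public interface IOrderRepository
{
    Order GetById(Guid id);
    void Save(Order order);
}

public interface IUnitOfWork
{
    void Commit(); // infrastructure concern, e.g. the database transaction
}

public class OrderApplicationService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public OrderApplicationService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void ConfirmOrder(Guid orderId)
    {
        // orchestrate the full use case: fetch the Aggregate,
        // call domain behavior on it, manage the transaction
        var order = _orders.GetById(orderId);
        order.Confirm();
        _orders.Save(order);
        _unitOfWork.Commit();
    }
}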
When reading the books by Eric Evans, Vaughn Vernon and Scott Millett, you can find great examples of how to separate your projects. But I never found a clear answer for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
Powershell cmdlets for administrative users for maintenance actions
where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
In the first option, you benefit from code reuse when multiple entry points share the same use cases. The same goes for unit tests. However, you theoretically have to deploy an Application Service DLL with too many features for some entry points.
In the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically keep control over infrastructure concerns like database transactions, which could differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, which should always share the same model (to ensure that all paths enforce the current business invariants) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely -- run "the model" in a "microservice", and deploy your three interfaces on top of it, each using a common service-client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
you theoretically have to deploy an Application Service DLL with too many features for some entry points.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost-effective than trying to deploy multiple DLLs with different subsets of the same model.
Personally, I'd start with a fat microservice, a well-designed API, and fat clients in each of the composition roots above, and then, if necessary, replace the fat clients with thinner, more specialized ones if the trade-offs support that choice.
Just to be sure I understand one of your points: are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition roots") should call this API?
Yes, that's a fair description of the proposal, except I want to be clearer on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea with the seam as the interface between two libraries (still running in the same process). Microservices are this idea with the two libraries running in different processes (see the sketch below).
You get different trade-offs. For instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out, (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service, and (c) you can scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
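To make that seam concrete, a minimal sketch (every name below is hypothetical): the model owns an explicit interface, and each composition root only adapts its own technology to it.

using System;

// The seam: the model's explicit API. "This part is where the model lives."
public interface ICreditDecisions
{
    CreditDecision Evaluate(Guid customerId);
}

public class CreditDecision { /* the result of the domain's decision */ }

public class CreditModel : ICreditDecisions
{
    public CreditDecision Evaluate(Guid customerId)
    {
        // domain logic goes here
        return new CreditDecision();
    }
}

// "That part is where the specialization lives": the REST entry point merely
// adapts HTTP to the seam. The messenger daemon and the PowerShell cmdlets
// would do the same, each in its own idiom.
public class CreditController
{
    private readonly ICreditDecisions _model;

    public CreditController(ICreditDecisions model) => _model = model;

    public CreditDecision Get(Guid customerId) => _model.Evaluate(customerId);
}

In option B this seam is a boundary inside one library; in option A the interface lives in the shared Application Service DLL; in the microservice variant the same calls cross a process boundary instead.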
Starting with ASP.NET 5, I wanted to lay the foundation for my project. As of now, I have created 2 projects.
Project - The WebApi project that comes with a Startup class.
Project.Server - A DLL project that will hold all the business logic.
At first I thought I should write a Bootstrapper class in "Project.Server" that would allow me to hide many parts of that DLL (parts that "Project" doesn't need to know about), but then I found myself thinking I might be doing extra work; in "Project"'s Startup class I'm calling many of my Bootstrapper class's methods.
Is this extra layer of abstraction needed in a WebApi project?
Although "Project.Server" is currently only referenced by "Project", I still want to structure it correctly...
Different people will have different opinions on how to structure your web app. Personally, for me, it's a matter of how much work is involved. If it's fairly easy for you to separate out your business logic into a separate DLL, then do it. Even though there may not be any immediate advantages now (since Project is the only consumer of Project.Server), in the future, if you ever decide there needs to be another consumer of the business logic, it will be a lot easier to make that work. However, if it's a lot of work to create this extra layer, then I'd say it's not worth it, since you can't really predict what the future might bring, and so why spend a ton of effort trying to code for a future that is unknown.
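For what it's worth, if you do keep the split, a common way to hide "Project.Server"'s internals while still letting "Project" wire them up is a single extension method on IServiceCollection. A sketch (AddProjectServer and the order service are made-up names):

using Microsoft.Extensions.DependencyInjection;

namespace Project.Server
{
    public interface IOrderService { }
    internal class OrderService : IOrderService { }

    public static class Bootstrapper
    {
        // The only public wiring surface of Project.Server; the concrete
        // types registered here can stay internal to this assembly.
        public static IServiceCollection AddProjectServer(this IServiceCollection services)
        {
            services.AddScoped<IOrderService, OrderService>(); // hypothetical service
            return services;
        }
    }
}

// In Project's Startup.ConfigureServices:
//     services.AddProjectServer();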
I'm currently trying to think of a strategy for implementing services in the business layer. My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes). The opposite alternative would be to have one single class with all the services implemented, which would create a gigantic file.
I've seen implementations that place functionalities (methods) inside a class such as ProductBLL or CompanyBLL, which would make the services more manageable; however, some services such as "getmeProductsAndCompanies", which are somewhat frequent, don't seem to belong to either ProductBLL or CompanyBLL.
My question is: is it a good idea to make a class ApplicationService that has a method per service, instantiating the correct service class and calling the correct method? My goal with this was to instantiate ApplicationService in the PL and call as.getmeProductsAndCompanies().
The material I have found on the internet so far offers either very theoretical or very complex solutions. I am open to suggestions too.
My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes).
I do not think aggregating all services into a single facade will help matters. It will only complicate them. Consider instead structuring services and devising some naming pattern for them.
For example, you have an OrderService that does everything with orders (a bad name choice, btw ;) ). Eventually it grows too big, and when this happens, you must split it in two. When splitting, use a functional approach to naming: the name of the service must answer the question "What does this service do, exactly, and with what types of data?" For example, OrderDisplayService looks like a good choice to me.
When you need to find out which service to inject into your governing entity (usually an MVC-like controller), you first type the service namespace name (\Acme\Services\), then the name of the object you want to deal with (Order), then a verb describing what exactly you want to do with it (Display), and then press your IDE's autocomplete buttons. You will get a relatively short list of services available for injection (I assume you use some IoC container for that; see the sketch below).
Split your services into layers or units so that when you work in the IDE, you see only a functionally complete part of them in the currently expanded directory.
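As a sketch of the naming pattern (the \Acme\Services\ namespace translated into C# form; all names invented):

namespace Acme.Services.Orders
{
    // object name + verb: each name answers "what does this do, to which data?"
    public class OrderDisplayService { /* read side: fetch and format orders */ }
    public class OrderPlacementService { /* write side: create and validate orders */ }
}

namespace Acme.Web
{
    // In the controller, typing "Order" and then the verb lets the IDE's
    // autocomplete narrow the candidates to a short, relevant list.
    public class OrderController
    {
        private readonly Acme.Services.Orders.OrderDisplayService _display;

        public OrderController(Acme.Services.Orders.OrderDisplayService display)
        {
            _display = display;
        }
    }
}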
Use the composite pattern. Basically, you create as many small classes/functions as you can. Those parts are then called by a bigger class/function, and that bigger class can in turn be used by a still bigger one.
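A tiny sketch of that idea (all names invented):

// Small, single-purpose parts...
public class StockChecker
{
    public bool InStock(int productId) => true; // stub
}

public class PriceCalculator
{
    public decimal PriceFor(int productId) => 9.99m; // stub
}

// ...composed by a bigger class, which a still bigger one can use in turn.
public class QuoteService
{
    private readonly StockChecker _stock = new StockChecker();
    private readonly PriceCalculator _prices = new PriceCalculator();

    public decimal? Quote(int productId)
        => _stock.InStock(productId) ? _prices.PriceFor(productId) : (decimal?)null;
}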
I am wondering about the long-term advantages (if any) of layering my web app by separating my business logic and data from my web forms (i.e. the form, business logic, and data not in the same file, but each in its own class, in another folder, by itself or combined with other similar classes). I like to make everything as modular as possible, and to do so efficiently it seems that keeping all the code in one file - in the web form - makes organization and reuse much easier. There are certain functions that are used across the site, like managing connections, that would be in their own classes and files. I am pretty new to C#; sorry if I am messing up the terminology.
Thanks
The separation of code into layers brings benefits that go beyond the C# language itself.
If your data access code is kept in a separate layer, it will be easy to adjust it to work with a different database. Database-specific code will be encapsulated in this layer while clients will work with database-agnostic interfaces. Therefore changes here will not affect the business layer implementation.
If your business logic is kept in one place, you will be able to offer its services to other applications, for example, to serve requests made via web services.
If your code is clean and well structured, the maintenance efforts will be kept lower. Whenever you need to change something, you'll know where to find the responsible code, what to change and how to assure the change will not affect the rest of the code.
As for ASP.NET, not following the separation of concerns has caused many projects to turn into a giant code blob: presentation code makes business decisions, code-behind talks directly to the database whenever no suitable business method exists, the database gets written to from many places, and the dataflow follows multiple paths that are difficult to trace, where changes made to one path but not the others break integrity and cause data corruption. The result? An almost unmaintainable black box where any change requires more and more effort, until work stalls and the project is declared "finished". Technical bankruptcy.
We usually layer our applications as follows (each layer is in a separate project of the solution and consequently in a separate DLL).
What I would always go for (first) is a layered application:
Presentation Layer (just UI and databinding logic)
Interface layer to the Business Layer (defining the contracts for accessing the BL)
Business Layer implementation (the actual logic, data validation, etc.)
Interface layer to the Data Access Layer (defining the contracts for accessing the DAL)
Data Access Layer implementation
You can then use some factory for retrieving the corresponding objects. I would take a look at a library, possibly one using dependency injection, like Spring.NET or Microsoft Unity from the MS patterns & practices team.
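A condensed sketch of those contract layers and their wiring (type names invented; the commented-out registration uses Microsoft Unity's API as one example):

// Interface layer to the DAL (its own project/DLL)
public interface ICustomerRepository
{
    Customer Load(int id);
}

// Interface layer to the BL
public interface ICustomerService
{
    Customer GetCustomer(int id);
}

// BL implementation: depends only on the DAL contract, never its implementation
public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomer(int id) => _repository.Load(id);
}

public class Customer { }

// Composition with Unity (SqlCustomerRepository would live in the DAL project):
//     var container = new UnityContainer();
//     container.RegisterType<ICustomerRepository, SqlCustomerRepository>();
//     container.RegisterType<ICustomerService, CustomerService>();
//     var service = container.Resolve<ICustomerService>();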
The advantages are the following:
separation of logic where it belongs
no business logic in the UI (developers have to pay attention to this)
all of your applications look the same and consequently developers knowing this architecture will immediately know where to search for the corresponding logic
exchangeable DAL. The interfaces define the contracts for accessing the corresponding layer.
Unit testing becomes easier, just focusing on the BL logic and DAL
Your application could have many entry points (web interface, Winforms client, webservice). All of them can reference the same business logic (and DAL).
...
Just could not live without that..
Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is utilizing Dependency Injection in the web GUI, the service layer, and the repository layer rather than using static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
//not shown, _creditService instantiation/injection in c-tors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
Not much different, but now we have dozens of service classes with much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods use a resource (database/web service/etc.), we find it harder to manage concurrency issues unless we remove the dependency and fall back to the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a module involved in thing X doesn't necessarily know how to hook up with thing Y, even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated into the data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same, depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., a third-party object potentially does the configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve the later code but to match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct but are hooking up with the wrong service providers.
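In C# terms, that hooking-up lives in a single composition root; a sketch with invented names:

public interface ICreditRepository
{
    decimal GetScore(int customerId);
}

// One concrete service provider; others might talk to a web service, a file, etc.
public class SqlCreditRepository : ICreditRepository
{
    public decimal GetScore(int customerId) => 0m; // would query the database
}

public class CreditService
{
    private readonly ICreditRepository _repository;

    public CreditService(ICreditRepository repository)
    {
        _repository = repository;
    }

    public decimal CalculateScore(int customerId) => _repository.GetScore(customerId);
}

public static class CompositionRoot
{
    // The only place that knows the concrete wiring. Hooking up a different
    // provider changes this one line and nothing in CreditService itself.
    public static CreditService Build() => new CreditService(new SqlCreditRepository());
}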
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using NHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
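That config-file swap can be as small as reading a type name from configuration and activating it; a sketch (the config key and the factory are made up; IDataService stands in for the real interface, and only standard .NET reflection calls are used):

using System;
using System.Configuration; // requires a reference to System.Configuration

public interface IDataService { /* query methods for the data source */ }

public static class DataServiceFactory
{
    public static IDataService Create()
    {
        // app.config:
        //   <add key="dataService"
        //        value="MyApp.Data.Db4o.Db4oDataService, MyApp.Data.Db4o" />
        string typeName = ConfigurationManager.AppSettings["dataService"];
        var type = Type.GetType(typeName, throwOnError: true);
        return (IDataService)Activator.CreateInstance(type);
    }
}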
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
About your particular problems: it seems that you're not managing your services' lifestyles correctly... for example, if one of your services is stateful (which should be quite rare), it probably has to be transient. I recommend that you create as many SO questions about this as you need in order to clear up all doubts.
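Concretely, a lifestyle is something you declare at registration time. With Castle Windsor, for example (the services named here are hypothetical):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ICreditService { }
public class CreditService : ICreditService { }
public interface IImportSession { }
public class ImportSession : IImportSession { }

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            // stateless services can safely share one instance
            // (singleton is Windsor's default lifestyle)
            Component.For<ICreditService>().ImplementedBy<CreditService>()
                     .LifestyleSingleton(),
            // a stateful service gets a fresh instance per resolution
            Component.For<IImportSession>().ImplementedBy<ImportSession>()
                     .LifestyleTransient());
        return container;
    }
}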
There is a Guice video which gives a nice sample case for using D.I. If you are using a lot of third-party services which need to be hooked up dynamically, D.I. will be a great help.