Dependency Injection vs Layered Architecture - C#

I've been reading a lot about dependency injection and the service locator (anti-?) pattern - a lot of it on StackOverflow (thanks guys :). I have a question about how this pattern works within an n-layer architecture.
I've seen a lot of blog posts that describe injecting an IDataAccess component into the business objects. E.g.
public class Address
{
    IDataAccess _dataAccess;

    public Address(IDataAccess dataAccess)
    {
        this._dataAccess = dataAccess;
    }
}
However, I was under the impression that in an n-layer architecture, the UI layer should not need to have any knowledge of the data access layer... or even know that there /is/ a data access layer! If DI requires exposing the IDataAccess interface in the constructors of the business objects, this exposes to the UI the fact that the business layer uses a data access layer under the hood - surely something the UI doesn't need to know or care about?
So, my fundamental question is: Does DI require that I expose all my lower layer interfaces to all upper layers and is this a good or a bad thing?
Thanks
Edit: To clarify (after a few comments), I know my business object should be ignorant of which specific implementation of IDataAccess it uses (hence the dependency being injected in the constructor), but I thought that the layers above the BO should not know that the business object even requires a dependency on a DAL.

This is really a fairly complex topic, and there are many ways of doing an n-tier architecture. No one way is "the right way", and how you do it depends on your needs as much as it does your personal preferences.
Dependency Injection is about managing dependencies. If your object should be unaware of any dependency, then you would not write your object in the way you mentioned. You would instead have some other service or method populate the data in an agnostic way. Data doesn't mean "database" either: IDataAccess could mean the data comes from a database, from a network socket, or from a file on disk. The whole point here is that Address does not choose which dependencies it creates. That is done through configuration at the composition root.
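For instance, here is a minimal sketch of that idea, using Unity purely as an example container (SqlDataAccess and FileDataAccess are assumed implementations, not types from the question):

// The composition root is the only place that decides which
// IDataAccess implementation the application uses.
var container = new UnityContainer();
container.RegisterType<IDataAccess, SqlDataAccess>();  // could just as well be FileDataAccess

// Address never chooses; it just receives whatever was configured.
var address = container.Resolve<Address>();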
Things need data, otherwise your app is probably useless. Making your Address object load itself, however, may not be the best way to go about things. A better approach may be a factory class or a service method.
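As a hedged sketch of that alternative (the Load method on IDataAccess is assumed here, since the question doesn't define the interface's members), the service owns the dependency and hands back plain Address objects:

public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class AddressService
{
    IDataAccess _dataAccess;

    public AddressService(IDataAccess dataAccess)
    {
        this._dataAccess = dataAccess;
    }

    public Address GetAddress(int id)
    {
        // The data could come from a database, a socket, or a file;
        // neither Address nor the UI knows or cares.
        return _dataAccess.Load<Address>(id);
    }
}

This keeps the UI ignorant of IDataAccess entirely: it only talks to the service, and only the composition root knows what sits underneath.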

I think the answer is rather simple. Your bottom layers (interface, BLL, DAL, entities) are just a bunch of libraries. It is up to the client to decide which libraries to use, and that increases the client's flexibility. Moreover, because they are libraries, any application-related configuration (connection strings, data caching, etc.) lives in the client. That configuration itself sometimes also needs to be injected and included in the composition root.
However, if you want uniform logic rather than client flexibility, you can add web/app services as an additional layer:
1st Layer Entities
2nd Layer Interface
3rd Layer BLL & DAL
4th Layer Web/App Services
5th Layer UI
This way, your composition root exists in one layer (the 4th), and your UI just needs to add a service reference to the 4th layer (or the 1st if needed). However, this brings up the point from Mark Seemann's article again: layering is worth the mapping. I assume you can make the app/web service the composition root.
Moreover, this (app/web service) design has pros and cons. Pros:
Your app is encapsulated
Your app is bridged by app/web services. It is guaranteed that your UI doesn't know about the data access layer, which fulfills your requirement.
Your app is secured
Simply put, requiring the UI to go through an app service is a big gain from a security standpoint.
Access Portability
Now your app can be accessed from anywhere. It can be consumed by third-party apps (other websites) without relying on DLLs.
Cons:
Overhead cost during service calls
Authentication, network connections, etc. add overhead to every web service call. I can't quantify the performance impact from experience, but it can matter for a high-traffic app.
Inflexibility of client
Clients now need to access the BLL/services through service calls instead of plain objects.
More Services for Different Types of Client
Now you may need to provide more services than otherwise needed, such as a WebRequestRetriever and a MobileRequestRetriever, instead of exposing a mere IRequestRetriever and letting the composition root wire up the rest.
Apologies if this answer broadens the topic (I only realized it after finishing).

IMHO:
It depends on who does the injection!
It sounds like you need/expect an MVC or MVP architecture to be in place, where a Controller or Presenter does the job of translating UI calls to business objects and back:
creating concrete implementations of IDataAccess and passing them to the Address class.
That way the UI is totally unaware of who provides the data it needs, and it gives you the scalability you expect.
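A rough sketch of that arrangement (the view and presenter names are purely illustrative):

// Only the presenter knows about IDataAccess; the view never sees it.
public class AddressPresenter
{
    IAddressView _view;
    IDataAccess _dataAccess;

    public AddressPresenter(IAddressView view, IDataAccess dataAccess)
    {
        this._view = view;
        this._dataAccess = dataAccess;
    }

    public void LoadAddress()
    {
        var address = new Address(_dataAccess);  // the BO receives its dependency here
        _view.ShowAddress(address);
    }
}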
Thanks
Tarriq

Related

How to change persistence layer at runtime in C#

I am embarking on a new project and I need some guidance from veteran architects/design pattern gurus!
My new project needs to have a number of persistence layers whereby the client can decide at runtime where the data will be stored; for example, an in-house SQL database, MS Exchange, or Google storage.
The functionality will essentially be the same just the storage/implementation of each will be different.
What I'm not looking for here is how you do it, just a pointer to the best patterns to serve my purpose while still providing flexibility down the road, as there will be CHANGE. I am trying to avoid concrete implementations that will inevitably lead to some nasty code smells.
I know it will involve some kind of DI along the way but any pointers here would be greatly appreciated.
There is nothing special about your case really, so if you follow standard practices with DI and use a container such as SimpleInjector to ease your task, that will do the trick. The main point for you should be to depend not on concrete classes but on abstractions, and that's where a DI container will help you organize this.
E.g. if you plan to save a user you might have some IUserRepository with a method SaveUser. Then you will implement SqlUserRepository, GoogleStorageRepository, etc. The same goes for any other data access layer interface. If you just do that, you will need to configure your DI in a way where you can supply the required repository at runtime based on your needs. Never depend on GoogleStorageRepository etc. directly, only on the common interface. I would create a project for the interfaces (and the corresponding business data model that the DL will be aware of), and a project per implementation as well, to separate it even further.
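A minimal sketch along those lines (SaveUser is the only member named above; the runtime switch shown is assumed, and Register<TService, TImplementation>() is SimpleInjector's standard registration call):

public interface IUserRepository
{
    void SaveUser(User user);
}

public class SqlUserRepository : IUserRepository
{
    public void SaveUser(User user) { /* write to the in-house SQL database */ }
}

public class GoogleStorageRepository : IUserRepository
{
    public void SaveUser(User user) { /* write to Google storage */ }
}

// Composition root, choosing the implementation at runtime:
var container = new SimpleInjector.Container();
if (settings.UseGoogleStorage)  // assumed runtime flag
    container.Register<IUserRepository, GoogleStorageRepository>();
else
    container.Register<IUserRepository, SqlUserRepository>();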
Repository pattern is all about creating a separation between the persistence layer and the business layer.
Many examples on the web demonstrate it incorrectly by using it as a mere wrapper over the data entities. That is wrong: the design of a repository class/interface should be driven by the business requirements and not by what the first data store happens to look like.
Thus it's a perfect pattern for your use case. You define a repository interface from the business layer's perspective and then create an implementation for each data store, like MSSQL. I even put that interface in my business layer to further demonstrate that perspective.

Too much dependency injection for web app and web service layer?

We have a website application and a web services layer. The website must use the web service for data, per our requirements. Both layers are being built by me. I'm curious whether my layout for DI is justified.
Common entities project - shared by both the website and web service layer
ASP.NET MVC website
-- MVC service layer - injects concrete class to contain business rules for the website/pages
-- -- Data repository - injected by MVC service layer. In this case, various wrappers for web service calls currently, but things may change
Web services
-- Service/business layer - injected classes to handle business logic for services
-- -- Data repository - injected data classes used by business layer above. Can be ADO or EF.
So, any call can involve up to 4 layers of injection. I see benefit in this, as each part can be isolated and tested. In some cases the business layers just "pass through" to the data layer (e.g., GetClientByID(int id) has little logic), but they are there in case rules change. I feel the injections for the data layer abstract away any auto-generated entities from the web services. Luckily, in my case it currently shares a common project, but who knows if that stays that way.
Is this too much? Thanks.
Your question is highly subjective, but I will try to answer it anyway.
You are breaking your application into logical tiers that can each be unit tested. This is generally a good thing. Different abstraction points or interfaces mean that you can easily swap the logic if you need to down the road, which makes the application more extensible and maintainable.
Is it too much?
That really depends on several factors - deadlines, budget, the life expectancy of the project, etc. The more layers you need to build, the bigger the upfront investment and the more layers you need to maintain later. But the abstractions make it easier to swap business logic - which often makes it worth the upfront cost.
But is it too much for your DI container to handle? No. Keep in mind that the purpose of DI is to manage application complexity. The more layers and services you have, the more it makes sense to use DI. DI means your application layers and services don't depend directly on each other, so they can be replaced without too much effort. Also, if you use the Composition Root pattern with constructor injection, DI generally has very low overhead, so performance is almost never a factor.
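To illustrate the chain from the question (all names are illustrative), each layer receives the next one via constructor injection, and the container composes the whole graph once at the composition root:

public class ClientController : Controller
{
    IClientService _service;

    public ClientController(IClientService service)
    {
        this._service = service;
    }
}

public class ClientService : IClientService
{
    IClientRepository _repository;

    public ClientService(IClientRepository repository)
    {
        this._repository = repository;
    }

    // A "pass-through" today, but a ready-made seam for tomorrow's rules.
    public Client GetClientByID(int id)
    {
        return _repository.GetClientByID(id);
    }
}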

Can I register all IoC container components in single layer, or each layer where used?

I'm using the Unity IoC framework and have a Bootstrapper.cs class in my host MVC layer to register all components. However, in my architecture I have a 'services' layer below the MVC layer that also uses DI and has repository interfaces injected into it (the repository interfaces are not used in the MVC layer - it has the services layer's interfaces injected into its controllers).
So my question is the following: can I still register the repository interface to its concrete type in the MVC/UI layer for the entire app, or do I add another reference to Unity and create another Bootstrapper.cs class in my 'services' layer to register the interface types that that specific layer uses?
Even if the answer is that I can register the interface in the UI layer, I'd still like to know the common practice. The thing I don't like about registering that type in the MVC/UI layer is that I would have to add a reference to the repository layer just to make the registration, even though it is not used in that layer. It's used in the services layer.
Thanks!
Each application should have its own Composition Root, the place where you configure the application (see this answer for details).
It depends on the context, but generally speaking, if you split your container configuration among the layers you are going to make decisions about the configuration of your layers too close to the layers, and you're likely to lose the general view.
For example, in one of your business logic layers you register a service:
container.RegisterType<IService1, MyImplementation1>(new PerThreadLifetimeManager());
But when using that layer in a web application, you could decide that a per-session or per-request lifetime would be a better fit. Such decisions should live in only one place, not spread through the layers.
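For instance, the web application's composition root can make that call itself (PerRequestLifetimeManager comes from the Unity MVC integration package, so this assumes that integration is referenced):

// Same service, but with a lifetime that suits the hosting model,
// decided once at the web app's composition root.
container.RegisterType<IService1, MyImplementation1>(new PerRequestLifetimeManager());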
I turn your question on its head.
If you add a reference to Unity in your class libraries, you have added a dependency on the DI framework you are using. That is quite the opposite of what you are trying to achieve.
The only adaptation your classes should need is to accept their dependencies through constructors or public properties, typed as interfaces. That's it!
So your application entry point should do all the 'bootstrapping'.
Note that an entry point could be different applications, as well as different test projects. They could have different configurations and mocking scenarios.
If your Bootstrapper.cs gets large, you could split it into smaller parts for readability. But I reject the idea of classes having any knowledge of the fact that they are being bootstrapped/mocked/injected, or by what.
Consider re-use: your current libraries use Unity, but they may later be used in a project using StructureMap. Or why not Ninject.
In short, yes, it is possible to keep the configuration at the top of the process or localized to each module. However, all dependencies must be resolved for the entire object graph in the process.
Localizing the configuration by keeping it in each module (assembly) is often a good idea, because you are letting your service layer take responsibility for its own configuration. My answer to this question is, IMHO, a good practice.
Yes, an application should have one composition root at the entry point. But it can be good practice to keep the registrations of classes inside the layer where they are implemented, then pull these registrations from the layers at the composition root, registering implementations layer by layer. Here is why:
1. Registration within a layer can be redefined elsewhere, for example at the entry point. Most IoC libraries work in such a way that a registration done later overrides a registration done earlier, so a registration within a layer just defines a default behavior that can easily be overridden.
2. You don't need to reference the IoC library in all your projects/layers, even if you have registrations defined inside those layers. A very simple set of wrapper classes will let you abstract away from the IoC specifics everywhere except your entry point (see the sketch below).
3. When your application has several entry points, reusable registration greatly helps to prevent repeating the same registration, and that copy/paste is always bad. Applications have several entry points quite often: consider a cross-platform application with a separate entry point for every platform it targets, or business logic reused in a web site and in a background process.
4. With reusable registration, you can build a very effective testing system. You will be able to run a whole layer from tests and mock whole layers in an automated way, minimizing the effort spent writing tests.
See my blog article illustrating these points in more detail, with a working sample.
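A small sketch of the wrapper idea from point 2 (all names are hypothetical): each layer exposes its default registrations through a tiny abstraction, and only the entry point knows the concrete container.

// Tiny wrapper so layers never reference the IoC library directly.
public interface IRegistrar
{
    void Register<TService, TImplementation>() where TImplementation : class, TService;
}

// Each layer ships its default registrations as a reusable module.
public static class DataLayerRegistrations
{
    public static void RegisterDefaults(IRegistrar registrar)
    {
        registrar.Register<IUserRepository, SqlUserRepository>();
    }
}

// Only the entry point adapts the wrapper to a concrete container (Unity here).
public class UnityRegistrar : IRegistrar
{
    IUnityContainer _container;

    public UnityRegistrar(IUnityContainer container)
    {
        this._container = container;
    }

    public void Register<TService, TImplementation>() where TImplementation : class, TService
    {
        _container.RegisterType<TService, TImplementation>();
    }
}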

Linq To SQL, WebServices, Websites - Planning it all

Several "parts" (a WinForms app for exmaple) of my project use a DAL that I coded based on L2SQL.
I'd like to throw in several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web-service, and instead of the website connecting directly to the DAL it would go through the web-service which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capabilities" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.
You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
If your data access layer were pulling all data as IQueryable, then you would be able to query your DAL and drill down your db calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using Linq to SQL. My article is built around MVC but the concept of Repository and Service layers would work just fine with WebForms, WinForms, Web Services, etc.
Again, the key here is to have your Repository or your DAL return an object AsQueryable, whereby you wait until the last possible moment to actually commit to requesting data.
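A minimal sketch of that deferral (Client and ClientSummary are assumed POCOs; all names are illustrative): the repository exposes IQueryable, so nothing hits the database until the service has narrowed the query down.

public interface IClientRepository
{
    IQueryable<Client> Clients { get; }
}

public class ClientService
{
    IClientRepository _repository;

    public ClientService(IClientRepository repository)
    {
        this._repository = repository;
    }

    public List<ClientSummary> GetActiveClientSummaries()
    {
        // Composed here, executed only at ToList(), so Linq to SQL can
        // translate the whole expression into a single SQL statement.
        return _repository.Clients
            .Where(c => c.IsActive)
            .Select(c => new ClientSummary { Id = c.Id, Name = c.Name })
            .ToList();
    }
}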
Your structure would look something like this
Domain Layer
Repository Layer (IQueryable)
Service layer for Web App
Website
Service layer for Desktop App
Desktop App
Service layer for Web Services
Web Service
Inside your service layer is where you customize the specific calls based on the application you're developing for. This allows for greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified until you swap out your ORM (if you ever decide you need to).
There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers that should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate for the overhead.
Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs:
If the domain model changes, just the mapping to the DTO needs to change. You can isolate the consuming application from these changes.
Less data over the wire
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict what parts of the object model should be exposed.
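For instance, a hedged sketch of the DTO idea (the entity and DTO shapes are assumed): the service maps the rich domain entity to a slim DTO, so domain changes only touch the mapping, and less data crosses the wire.

// Rich domain entity (many more fields in practice).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalNotes { get; set; }  // never leaves the service
}

// Slim DTO: only what this particular client application needs.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer customer)
    {
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}

If the domain model changes, only ToDto changes; every consuming application keeps seeing the same CustomerDto.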

SOA with WCF responsibilities and dependencies

I am moving to a new team that has implemented a solution using SOA with WCF. The services are all very vertical; for example: a CustomerService, an AddressService, an AccountService, etc. To return fully populated objects, the services may call one another over a WCF endpoint.
There are a few very high level vertical areas, but underneath they can reuse a lot of the core service logic.
How valid is the following new architecture:
The web services are thin layers that handle remote calls; they are strictly for communication. The real functionality would be implemented in something let's call "business or domain services".
Domain Service responsibilities:
Reference data access / repository interfaces for working with the infrastructure
Call multiple repository methods to create fully populated objects
Process data against the complex business rules
Call other domain services (not having to call WCF)
This would give us domain services that can be tested outside of specific WCF and SQL Server implementations.
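A minimal sketch of that split (type and member names are illustrative):

// The domain service holds the real logic and depends only on repository
// interfaces, so it can be unit tested without WCF or SQL Server.
public class CustomerDomainService
{
    ICustomerRepository _customers;
    IAddressRepository _addresses;

    public CustomerDomainService(ICustomerRepository customers, IAddressRepository addresses)
    {
        this._customers = customers;
        this._addresses = addresses;
    }

    public Customer GetFullyPopulatedCustomer(int id)
    {
        var customer = _customers.GetById(id);
        customer.Addresses = _addresses.GetForCustomer(id);  // in-process, no WCF hop
        return customer;
    }
}

// The WCF service stays a thin communication layer that just delegates.
public class CustomerService : ICustomerService
{
    CustomerDomainService _domain;

    public CustomerService(CustomerDomainService domain)
    {
        this._domain = domain;
    }

    public Customer GetCustomer(int id)
    {
        return _domain.GetFullyPopulatedCustomer(id);
    }
}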
The web services reusing the different business services seems to be the biggest gain and yet the biggest potential pitfall.
On one hand the logic can be reused for multiple services, eliminating web service calling web service calling web service.
On the other hand, if someone changes one of the assemblies multiple services need to be updated, potentially breaking multiple applications.
Have people tried this and had success? Are there better approaches?
At first blush, it sounds like the design you've walked into might be an SOA antipattern identified in this article: a group of 'chatty services,' a term the authors use to describe a situation in which ...
developers realize a service by implementing a number of Web services where each communicates a tiny piece of data. Another flavor of the same antipattern is when the implementation of a service ends up in a chatty dialog communicating tiny pieces of information rather than composing the data in a comprehensive document-like form.
The authors continue:
Degradation in performance and costly development are the major consequences of this antipattern. Additionally, consumers have to expend extra effort to aggregate these too finely grained services to realize any benefit, as well as have the knowledge of how to use these services together.
That can be a valid approach. The pitfall about updating multiple services depends on how closely related the services are. Do you have a use case where, if CustomerService is updated and AddressService is not, the clients can still work? Or is it more common that all services are used by the same client and hence should be updated together? Remember that the service only changes if the WSDL changes, not the implementation. If you manage not to change the DataContracts and OperationContracts of the front-end services, there are no worries.
One approach you may investigate is using in-proc WCF services for your domain services. Alternatively, the front-end web services can use domain managers/engines in separately layered assemblies, which in turn use repositories. You can have coarse-grained web service class implementations and fine-grained managers for domain entities that are mockable and unit testable.
