I've begun to notice something of an anti-pattern in my ASP.NET development. It bothers me because it seems like the right thing to do for good design, but at the same time it smells wrong.
The problem is this: we have a multi-layered application. The bottom layer is a class handling calls to a service that provides us with data. Above that is a layer of classes that possibly transform, manipulate, and check the data. Above that are the ASP.NET pages.
In many cases, the methods from the service layer don't need any changes before going to the view, so the model is just a straight pass-through, like:
public List<IData> GetData(int id, string filter, bool check)
{
    return DataService.GetData(id, filter, check);
}
It's not wrong, nor necessarily awful to work on, but it creates an odd kind of copy/paste dependency. I'm also working on the underlying service, which replicates this pattern a lot, and there are interfaces throughout. So what happens is: "I need to add int someotherID to GetData," so I add it to the model, the service caller, the service itself, and the interfaces. It doesn't help that GetData actually stands for several methods that all use the same signature but return different information. The interfaces help a bit with that repetition, but it still crops up here and there.
Is there a name for this anti-pattern? Is there a fix, or is a major change to the architecture the only real way? It sounds like I need to flatten my object model, but sometimes the data layer is doing transformations so it has value. I also like keeping my code separated between "calls an outside service" and "supplies page data."
I would suggest you use the query object pattern to resolve this. Basically, your service could have a signature like:
IEnumerable<IData> GetData(IQuery<IData> query);
Inside the IQuery interface, you could have a method that takes a unit of work as input (for example a transaction context, or something like ISession if you are using an ORM such as NHibernate) and returns a list of IData objects.
public interface IQuery<T>
{
    IEnumerable<T> DoQuery(IUnitOfWork unitOfWork);
}
This way, you can create strongly typed query objects that match your requirements, and have a clean interface for your services. This article from Ayende makes good reading about the subject.
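To make that concrete, a query object for the original GetData call might look like this - a minimal sketch, assuming the IQuery/IUnitOfWork shapes above; the body is left unimplemented because it depends on what your unit of work actually exposes:

using System;
using System.Collections.Generic;

// Hypothetical query object: the parameters that used to travel through
// every layer as a growing method signature now live in one class.
public class FilteredDataQuery : IQuery<IData>
{
    private readonly int id;
    private readonly string filter;
    private readonly bool check;

    public FilteredDataQuery(int id, string filter, bool check)
    {
        this.id = id;
        this.filter = filter;
        this.check = check;
    }

    public IEnumerable<IData> DoQuery(IUnitOfWork unitOfWork)
    {
        // Build and run the actual query here, using whatever the unit of
        // work exposes (an NHibernate ISession, a SQL command, ...).
        throw new NotImplementedException();
    }
}

The payoff is that the next time "I need to add int someotherID to GetData" happens, only this class and the code that constructs it change; the GetData(IQuery<IData> query) signature stays put through every layer.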
Sounds to me like you need another interface, so that the method becomes something like:
public List<IData> GetData(IDataRequest request)
You're delegating to another layer, and it's not necessarily a bad thing at all.
You could add other logic here, or in another method down the line, that belongs only in this layer, or swap in a different implementation of the delegated-to layer, so it could certainly be a perfectly good use of the layers in question.
You may have too many layers, but I wouldn't say so just from seeing this - more from not seeing anything else.
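A sketch of what that request object could look like (the members of IDataRequest are hypothetical; the point is that adding a parameter later means touching the request type rather than every signature in the chain):

public interface IDataRequest
{
    int Id { get; }
    string Filter { get; }
    bool Check { get; }
    // A future "int SomeOtherId { get; }" lands here, not in every
    // layer's method signature.
}

public class DataRequest : IDataRequest
{
    public int Id { get; set; }
    public string Filter { get; set; }
    public bool Check { get; set; }
}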
From what you've described, it simply sounds like you have encountered one of the trade-offs of abstraction in your application.
Consider the case where those call chains no longer just pass the data through but require some transformation. It might not be needed now, and certainly a case can be made for YAGNI.
However, in this case it doesn't seem like too much tech debt to handle, and it has the positive side effect of letting you easily introduce changes to the data between layers.
I use this pattern as well, but for the purpose of decoupling my domain model objects from my data objects.
In my case, instead of "passing through" the object coming from the data layer as you do in your example, I "map" it to another object that lives in my domain layer. I use AutoMapper to take out the pain of manually doing it.
In most cases my domain object looks exactly the same as the data object it originated from. However, there are times when I need to flatten information coming from my data object, or I may not be interested in everything it contains, etc. Then I map the data object to a customized domain object that only holds the fields my domain layer is interested in.
This also has the side effect that when I decide to refactor or replace my data layer with something else, it does not affect my domain objects, since they are decoupled via the mapping.
Here is a description of AutoMapper, which captures what this design pattern tries to achieve, I think:
AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer
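For illustration, the mapping step looks roughly like this with AutoMapper's instance API (UserData and UserDomain are hypothetical stand-ins for your own data and domain classes, and the details vary by AutoMapper version):

using AutoMapper;

// One-time configuration: map the data-layer type to the domain type.
// Properties with matching names are mapped by convention.
var config = new MapperConfiguration(cfg => cfg.CreateMap<UserData, UserDomain>());
var mapper = config.CreateMapper();

// At the layer boundary: hand callers a domain object, not the data object.
UserData raw = dataService.GetUser(42);        // dataService is hypothetical
UserDomain user = mapper.Map<UserDomain>(raw);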
Actually, the approach you have chosen is the reason you have what you have (and I am not saying it is bad).
First, let me say your approach is quite normal.
Now, let me go through your layers:
Your service - provides a strongly-typed access model. That means it takes specific types of arguments, in specific methods, which return specific types of results.
Your service-access layer - provides the same kind of model: specific kinds of arguments, for specific kinds of methods, returning specific kinds of results.
etc...
To avoid confusion, here is what I call a specific kind:
public UserEntity GetUserByID(int userEntityID);
In this example you need to pass exactly an int, while calling exactly GetUserByID, and it will return exactly a UserEntity object.
Now another kind of approach:
Remember how SqlDataReader works? Not very strongly-typed, right?
What you are calling for here, in my opinion, is a layer that is not strongly typed.
For that to happen, you need to switch from strongly-typed to non-strongly-typed somewhere in your layers.
Example:
public Entity SelectByID(IEntityID id);
public IEnumerable<Entity> SelectAll();
So, if you had something like this instead of the service access layer, you could call it with whatever arguments you wanted.
But, that is almost creating an ORM of your own, so I would not think this is the best way to go.
It's essential to define what kind of responsibility goes to which layer, and place such logic only in the layer it belongs to.
It's absolutely normal to just pass through if you don't have to add any logic in a particular method. At some point you might need to do so, and the abstraction layer will pay off at that point.
It's even better to have parallel hierarchies, not just passing the underlying layer's objects up: each layer uses its own class hierarchy, and you can employ something like AutoMapper in case you feel there's not much difference between the hierarchies. This gives you flexibility, and you can always replace automapping with custom mapping code in particular methods/classes if the hierarchies no longer match.
If you have many methods with almost the same signature, you should think about the Query Specification pattern.
IData GetData(IQuery<IData> query)
Then, in the presentation layer, you can implement a data binder for your custom query specification objects: a single ASP.NET handler can create the specific query objects and pass them to a single service method, which passes them to a single repository method, where they are dispatched according to the specific query class, possibly with a Visitor pattern.
IQuery<IData> BindRequest(IHttpRequest request)
With automapping and the Query Specification pattern together, you can reduce duplication to a minimum.
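A rough sketch of that binder idea - IHttpRequest is the placeholder from above, and DataByFilterQuery and repository are hypothetical names:

public IQuery<IData> BindRequest(IHttpRequest request)
{
    // Bind whatever the request carries into a typed query specification.
    int id = int.Parse(request["id"]);
    string filter = request["filter"];
    return new DataByFilterQuery(id, filter);
}

public IEnumerable<IData> GetData(IQuery<IData> query)
{
    // One service method serves every query shape; the repository (or a
    // Visitor) dispatches on the concrete query type.
    return repository.Find(query);
}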
Related
Good morning, guys, I need some advice.
I'm creating the output models for my APIs, but I'm very confused about what to name the classes.
For example, I have an entity called User.
In the output model I must return a list of users, but it must not coincide with the entity; it should be another model, created by me, for output.
Well, I don’t know what to name this last class I told you about.
I cannot call it User because it conflicts with the real entity.
Tips?
There should be at least 3 layers of objects in your code: ViewModels, DTOs, and Entities.
Each layer should only be able to see the layer directly below it.
So your service layer can read Entities from your data layer, but if it exposes any objects, they should be DTOs.
Then your presentation layer (UI/API etc.) reads from the service layer (DTOs) and exposes its own objects as ViewModels.
In many cases, this means that all 3 objects (Entity, Dto & ViewModel) have the same repeated properties, but this is to be expected, especially in smaller or newer projects.
This should then solve your naming problems.
Data layer: XXXEntity
Service layer: XXXDto
Presentation layer: XXXViewModel
This explanation is very simplified, and you could solve this problem in many different ways (you could use namespaces instead of class suffixes for example).
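A minimal sketch of the three shapes, with hypothetical properties, just to show where each concern lives:

// Data layer
public class UserEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; }   // persistence detail, stays here
}

// Service layer
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }           // only what crosses the boundary
}

// Presentation layer
public class UserViewModel
{
    public string Name { get; set; }
    public string AvatarUrl { get; set; }      // presentation-only concern
}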
The guidance I try to work by is that naming should reflect intent - whereas a User just represents the concept of a 'user' in your domain, an API model is intended to be an API payload containing user information. In your position, I would consider something like UserApiModel or UserPayload.
As an added note, for my money, the thing that matters most here is consistency: no matter what you pick, what makes sense to you now may not be the most intuitive thing to you (or anyone else) maintaining the code later. As long as you apply your naming convention consistently across all your API models, don't stress too much about finding the 'right' one - just pick the first one that seems good enough, and keep rolling.
There is this generic repository implementation
http://www.itworld.com/development/409087/generic-repository-net-entity-framework-6-async-operations
By the looks of it, it seems that I can just have a single generic repository for my whole project, and for almost all of the entities in the database it will work fine. For the ones where it doesn't, I can create a more specific repository, e.g. a MembershipRepository that derives from the base repository and overrides methods as needed, such as Find.
One could also write a generic service class in the same way, and then create only a few more specific services.
That would drastically reduce the project size: no need to write redundant repositories per entity, and a much smaller number of service-layer classes.
Surely it can't be that simple. Is there a catch to this? Let's ignore for a moment that EntityFramework has the repository+UOW pattern built in and repository pattern isn't needed.
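For reference, the shape under discussion is roughly this (a sketch against EF6, not the linked article's actual code; Membership is a hypothetical entity):

using System.Data.Entity;
using System.Linq;

public class GenericRepository<TEntity> where TEntity : class
{
    protected readonly DbContext Context;

    public GenericRepository(DbContext context)
    {
        Context = context;
    }

    public virtual TEntity Find(params object[] keyValues)
    {
        return Context.Set<TEntity>().Find(keyValues);
    }

    public virtual IQueryable<TEntity> GetAll()
    {
        return Context.Set<TEntity>();
    }
}

// The occasional special case overrides only what it must:
public class MembershipRepository : GenericRepository<Membership>
{
    public MembershipRepository(DbContext context) : base(context) { }

    public override Membership Find(params object[] keyValues)
    {
        // membership-specific lookup logic would go here
        return base.Find(keyValues);
    }
}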
We do.
I am torn about it, honestly. For smaller domains it's perfectly fine and works a treat. For larger ones (like the one I am working with currently), your repository can never really be generic enough to warrant a single one.
For example, the generic repository in the code base I currently work with is now littered with all sorts of very specific methods for things like eager fetching, paging, etc. It's much more than what it started out as. Looking back at the revision history, it once had only GetAll, GetById, Create, and Update methods. Now it has things like GetAllEagerFetch with overloads for various JOIN types, GetAllPaged, GetAllPagedEagerFetch, DeleteById, ExecuteStoredProcedure, ExecuteSql (yuck), etc. There is a lot more.
One way around this is to perhaps follow the Interface Segregation Principle so that your repository can be huge and generic but consumers only care about what they need to care about. I don't particularly like that though.
That being said - we have moved away from a repository-style setup in more recent projects. We prefer a CQRS setup now, with command and query objects that each have a specific purpose. This leans more towards the Single Responsibility Principle instead (it doesn't follow it to the "Uncle Bob degree", but the classes have some well-defined responsibilities).
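To illustrate the command/query style (the names are made up; this is not any particular framework's API):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Each query is its own small class with one well-defined responsibility.
public class ActiveMembersPagedQuery
{
    private readonly DbContext context;
    private readonly int page;
    private readonly int pageSize;

    public ActiveMembersPagedQuery(DbContext context, int page, int pageSize)
    {
        this.context = context;
        this.page = page;
        this.pageSize = pageSize;
    }

    public IReadOnlyList<Member> Execute()   // Member is a hypothetical entity
    {
        return context.Set<Member>()
                      .Where(m => m.IsActive)
                      .OrderBy(m => m.Name)
                      .Skip((page - 1) * pageSize)
                      .Take(pageSize)
                      .ToList();
    }
}

Nothing accretes here the way a generic repository does: a new requirement becomes a new query class, not another overload on a shared interface.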
I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far I'm thinking that putting all data access logic into controllers is the way to go... but it kinda doesn't sound right.
Another option is creating an external repository, handling data interactions.
Here's my pros/cons:
If embedding data access to controllers, I will end up with code like this:
using (DbContext db = new DbContext())
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this, I get the full benefits of lazy loading, I close the connection right after I'm done, and I'm flexible on the where clause - all the niceties.
The con - I'm mixing a bunch of stuff in a single method - data checking, data access, UI interactions.
With a Repository, I'm externalizing data access, and in theory can just swap repositories if I decide to use ADO.NET or go with a different database.
But I don't see a good, clean way to realize lazy loading, or to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods; how would I load a list of addresses that belong to a given user? Creating methods like GetAddressListByUserId looks ugly and wrong, will force me to create a bunch of methods that are just as ugly, and makes little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution somewhere.
And one more question on the repository pattern: how do you deal with objects that are properties? E.g. a User has a list of addresses; how would you retrieve that list? Create a repository for the address? With an ORM the address object doesn't have to have a reference back to the user, nor an Id field; with a repo it will have to have all that. More code, more exposed properties.
The approach you choose depends a lot on the type of project you are going to be working with. For small projects where a Rapid Application Development (RAD) approach is required, it might almost be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the more messy it will become and you will start running into more and more problems. In case you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and Views clean. Controllers should only control the application flow and not contain data access or even business logic. Views should only be used for presentation - give it a ViewModel and it will present it as Html (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
    if (ModelState.IsValid)
    {
        Company company = SomeCompanyViewModelHelper
            .MapCompanyViewModelToDomainObject(model);
        companyService.UpdateCompany(company);
        return RedirectToRoute(/* Wherever you go after company is updated */);
    }
    // Return the same view with highlighted errors
    return View(model);
}
Due to the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM or whatever, etc.). The Repository pattern is a good choice, but here you also get a few implementation options. There's always been a lot of discussion about generic/non-generic repositories, whether or not one should return IQueryables, etc. But eventually it's for you to choose.
Btw, why do you want lazy loading? As a rule, you know exactly what data you require for a specific view, so why would you choose to fetch it in a deferred way, thus making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without children. E.g.
public interface ICompanyRepository
{
    Company Get(int id);
    Company Get(string name);
    Company GetWithEmployees(int id);
    // ...
}
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.
Personally I do it this way:
I have an abstract Domain layer, which has not just CRUD methods but specialized ones, for example UsersManager.Authenticate(), etc. Internally it uses data access logic, or a data-access layer abstraction (depending on the level of abstraction I need to have).
It is always better to have at least an abstract dependency. Here are some of its pros:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, let it have two constructors: one that takes an abstract domain-access class (e.g. a facade of the domain), and an empty constructor that chooses the default implementation. This way your controller works both at web application run time (via the empty constructor) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain implementation later, be sure to inject the domain creator instead of the domain itself. By localizing the domain layer construction in the creator, you can switch to another implementation at any time by just reconstructing the creator (by creator I mean some kind of factory).
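A sketch of the two-constructor idea, with IDomainFacade and DomainFactory as hypothetical names for your own abstractions:

using System.Web.Mvc;

public class UsersController : Controller
{
    private readonly IDomainFacade domain;

    // Called by the framework at run time: falls back to the default implementation.
    public UsersController() : this(DomainFactory.CreateDefault()) { }

    // Called by unit tests: inject a mock or stub domain layer.
    public UsersController(IDomainFacade domain)
    {
        this.domain = domain;
    }

    public ActionResult Authenticate(string userName, string password)
    {
        // domain.Users.Authenticate is a hypothetical facade member
        return Json(domain.Users.Authenticate(userName, password));
    }
}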
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare when you reach the unit-testing phase, and even more so when you need to change the implementation later.
It really comes down to where you want your code. Whether you put the data access for an object behind an IRepository object or in the controller doesn't matter: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue for getting it out of the controller. By that I mean moving it to another layer, probably using an IRepository type of pattern where you have a series of GetByXXX calls. Sure, they are ugly. Wrong? I would argue otherwise. At least they are all contained within the same logical layer, rather than being scattered throughout the controllers where they are mixed in with validation code, etc.
Something on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will be not be persistence aware and will not have any dependencies on any data access layer.
However, using this approach, how would one object in the domain model be able to call other objects without being able to load them via persistence, thus coupling everything together - which is what I'd be trying to avoid?
E.g. an Order object would need to check an Inventory object, and would obviously need to tell the Inventory object to load itself in some way, or populate it somehow.
Any thoughts?
You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.
One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.
To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.
Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data-access service can just depend on the interface via constructor arguments. Then you can leverage a dependency-injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
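A minimal sketch of that arrangement (all names here are hypothetical):

using System;

// Domain layer: owns the interface and the POCO it returns.
public class InventoryItem                      // persistence-ignorant POCO
{
    public string Sku { get; set; }
    public int QuantityOnHand { get; set; }
}

public interface IInventoryRepository
{
    InventoryItem GetBySku(string sku);
}

public class Order
{
    private readonly IInventoryRepository inventory;

    public Order(IInventoryRepository inventory)  // satisfied by your DI container
    {
        this.inventory = inventory;
    }

    public bool CanFulfil(string sku, int quantity)
    {
        return inventory.GetBySku(sku).QuantityOnHand >= quantity;
    }
}

// Data layer: implements the domain's interface and maps store rows to POCOs.
public class SqlInventoryRepository : IInventoryRepository
{
    public InventoryItem GetBySku(string sku)
    {
        // query the data store and map the result into the POCO
        throw new NotImplementedException();
    }
}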
Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the Domain Model truly separate from the DAL? And yes, I'm serious and you should think about this because it is exceedingly rare for an RDBMS to actually be swapped out in favor of a different one for an existing project. Quite frankly it is much more common for the language the app was written in to be replaced than the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely understand why they are Concerned with the Separation to begin with.
I bring these up because, more often than not, applications can benefit from a tighter coupling to the underlying data model. Never mind that most ORMs almost enforce a tight coupling due to the nature of code generation. I've seen lots of supposedly SoC projects come to a crash during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO.
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the actual database itself. At the same time there are cases where the BL needs to live in or very near your domain classes. With BL spread in such a way, can you truly separate these two items anyway? Even those who hate the idea of putting BL in a database will fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.
You should take a look at Martin Fowler's Repository and Unit of Work patterns for using interfaces in your system.
In my experience, an application can be well layered into three layers: Presentation --> Logic --> Data, plus Entities (or Business Objects). In the Logic layer you can use a pattern such as Transaction Script or Domain Model; I'm supposing you're using the latter. The Domain Model can use a Data Mapper for interacting with the data layer and creating business objects, but you could also use a Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally, I use Transaction Script because it is simpler than Domain Model.
One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, etc.
Moreover, it's often unnecessary - especially for line-of-business applications that don't have onerous performance requirements, you can consider eagerly loading all the relevant data: just bring the inventory items up with the order.
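For example, with Entity Framework the eager-loading version is a one-liner (db, Orders, and the InventoryItems navigation property are hypothetical):

using System.Data.Entity;   // for the Include() extension method
using System.Linq;

// Load the order and its inventory lines in a single round trip,
// instead of lazy-loading the children later.
Order order = db.Orders
                .Include(o => o.InventoryItems)
                .Single(o => o.Id == orderId);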
I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an interface that your Data Access layer implements; each of the DAL methods should take a list of parameters and return a DataTable.
The service layer would instantiate the DAL through the interface and pass that reference to your domain model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
using System.Data;

interface IDBModel
{
    DataTable LoadUser(Int32 userId);
}

class MyDbModel : IDBModel
{
    public DataTable LoadUser(Int32 userId)
    {
        // make the appropriate DB calls here and return a data table
        DataTable result = new DataTable();
        // ... populate from the database ...
        return result;
    }
}

class User
{
    public User(IDBModel dbModel, Int32 userId)
    {
        DataTable data = dbModel.LoadUser(userId);
        // assign properties... load any additional data as necessary
    }

    // You can do cool things like call User.Save() and have the object
    // validate and save itself to the passed-in data model.
    // Makes for simpler coding.
}

class MyServiceLayer
{
    public User GetUser(Int32 userId)
    {
        IDBModel model = new MyDbModel();
        return new User(model, userId);
    }
}
With this mechanism, you can actually swap out your db models on demand. For example, if you decide to support multiple databases, you can have code specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data and you can keep any necessary business logic within the domain model. Another point is that the Domain Model doesn't have a direct dependency on the data layer, which preserves your mocking ability for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.
I have recently joined a company that uses typed datasets as their 'DTOs'. I think they are really rubbish and want to change them to something a little more modern and user friendly. So, I am trying to update the code so that the data layer is more generic, i.e. using interfaces etc.; the other guy does not know what a DTO is, and we are having a slight disagreement about how it should be done.
Without trying to sway people to my way of thinking, I would like to get impartial answers from you people as to which layers the DTO can be present in: all layers (DAL, BL and Presentation), or only a small subset of these layers?
Also, whether IList objects should or should not be present in the DAL.
Thanks.
It really depends on your architecture.
For the most part you should try to code to interfaces; then it doesn't really matter what your implementation is. If you return ISomething, it could be your SomethingEntity or your SomethingDTO, but the consuming code couldn't care less, as long as it implements the interface.
You should return an IList/ICollection/IEnumerable rather than a concrete collection or array.
Properties should not return arrays
Do not expose generic lists
What you should try to do first is separate your code and make it loosely coupled by inserting some interfaces between your layers such as a repository for your DataAccess layer. Your repository then returns your entities encapsulated by an interface. This will make your code more testable and allow you to mock more easily. Once you have your tests in place you can then start to change the implementations with less risk.
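A sketch of that seam, reusing the placeholder names from above:

using System.Collections.Generic;

public interface ISomething
{
    int Id { get; }
    string Name { get; }
}

public class SomethingEntity : ISomething    // the DAL's implementation
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ISomethingRepository        // the boundary you mock in tests
{
    ISomething GetById(int id);
    IEnumerable<ISomething> GetAll();
}

Consumers depend only on ISomething and ISomethingRepository, so you can mock the repository in tests and later swap SomethingEntity for a SomethingDTO without touching the callers.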
If you do start to use interfaces I would suggest integrating an IoC such as Windsor sooner rather than later. If you do it from the get go it will make things easier later on.
One thing is that DataSets are poor for achieving interoperability. Even typed datasets are not very compatible when it comes to consuming them from a non-.NET client. Refer to this link. If you have to achieve interoperability, then fight hard for DTOs; otherwise, try to make your team understand DTOs over a period of time, because datasets are not so bad after all.
As for interfaces, yes, you should expose interfaces. For example, if you are returning List<T> from the DAL, you should return IList<T> instead. Some people go to the extent of returning only IEnumerable<T>, because all you need is the capability to enumerate. But while doing so, don't become an astronaut architect.
In my applications I have found that returning IList<T> instead of List<T> pollutes my code base with code like this:
// consider personCollection as IList<Person>
(personCollection as List<Person>).ForEach(p => { /* do something */ });
So I personally try to maintain a balance between returning an interface and a concrete object. If you ask me what I am doing right now, I will tell you that I return List<T>. I am influenced not to become an astronaut architect.
I always use DTOs, never DataTables. But I only use them to transfer from the business layer to the data layer and the other way around. My presentation layers are often only aware of the business and service layers, in the service-oriented case.
The benefits I can see to use DTOs rather than datatables:
easy refactoring
easy diagram production
cleaner, more readable code, especially in the DAL's unit tests
By definition a DTO is a data transfer object, used to (wait for it) transfer data from one layer to another.
DTOs can be used across all layers and I have used them well with web services.
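For what it's worth, a DTO in this sense is just dumb data with no behavior, which is what makes it safe to hand across layer or process boundaries. A hypothetical example:

using System;
using System.Collections.Generic;

[Serializable]
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IList<string> Roles { get; set; }   // no logic, just data to carry
}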