Suggest your thoughts on this design - C#

I am working on a business layer functionality.
There are two layers:
1) The Entity Data Model layer, which is the logical layer.
2) The storage layer, which the logical layer maps to.
There are different classes in each of these layers.
The storage layer has subject, test and result objects.
The logical layer has entity objects.
I am writing code that accepts an object from the logical layer and converts it to storage layer objects.
It does so by reading the schema of the logical layer.
The schema of the logical layer is stored in an XML file and has attributes which map to the physical (storage) layer.
My code will interpret this schema and convert to the appropriate physical layer objects.
The class which I have written is as follows :-
class DataModelToStorageTranslator
{
    IStorageLayerObject TranslateToStorageLayer(ObjectOfLogicalLayer source);
}
The different classes of the storage layer derive from IStorageLayerObject.
The client will check the type of the returned object at runtime.
Is there a better way to achieve this?
My implementation is in C#
Thanks,
Vivek

Unless there is a very good reason for using XML, I would avoid it. If you have a fixed number of objects that need conversion from the logical layer to the storage layer, I would suggest creating a DataModelToStorageTranslator facade like this:
class DataModelToStorageTranslator
{
    SubjectStore Translate(SubjectLogical subject);
    TestStore Translate(TestLogical test);
    ResultStore Translate(ResultLogical result);
}
That would give you more type safety as you will not need to check object types and cast. You will need to extend your interface anytime you want to add new objects (which I think is a good way of doing it for smaller projects).

To just copy properties from the business layer to the DTO you can use a simple reflection-based copy algorithm. There are ready-to-go implementations; one suitable option is AutoMapper, and there are similar tools around.
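For illustration, here is a minimal AutoMapper sketch, assuming the SubjectLogical/SubjectStore classes from the facade above and a reasonably recent AutoMapper version (the configuration API differs slightly between versions):
using AutoMapper;

// Configure once at startup: map each logical class to its storage counterpart.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<SubjectLogical, SubjectStore>();
    cfg.CreateMap<TestLogical, TestStore>();
    cfg.CreateMap<ResultLogical, ResultStore>();
});
IMapper mapper = config.CreateMapper();

// Then, given some SubjectLogical instance logicalSubject, the translation is one line:
SubjectStore store = mapper.Map<SubjectStore>(logicalSubject);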
Do not map over XML; that is just time consuming and will hurt the responsiveness of your application.
If you worry about type safety and speed in the conversion process, consider using automatically generated code via T4 templates.
Just my thoughts.

I did something similar to this but used extension methods on the objects; I just found that way a bit more fluent and easy to follow, so I'd have calls like this:
ClientEntity.ToSaveableEntity()
which would return a SaveableEntity object.
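A rough sketch of that extension-method approach; the ClientEntity/SaveableEntity shapes (Id, Name) are invented here just to make the example self-contained:
// Hypothetical business-side and storage-side types, only so the sketch compiles.
public class ClientEntity { public int Id { get; set; } public string Name { get; set; } }
public class SaveableEntity { public int Id { get; set; } public string Name { get; set; } }

public static class ClientEntityExtensions
{
    // Translates the business object into its persistable counterpart.
    public static SaveableEntity ToSaveableEntity(this ClientEntity source)
    {
        return new SaveableEntity
        {
            Id = source.Id,
            Name = source.Name
        };
    }
}

// Usage: var saveable = clientEntity.ToSaveableEntity();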

Related

Pattern Help: Passing Object from DAL to Contract. Two Classes, One Interface

I have many projects in my solution representing the different layers of the application. The Data Access Layer (DAL) has a model of the database in it and, more importantly for my issue, a Plain Old CLR Object (POCO). I want to send an instance of this POCO to an external requester via a WCF contract. As you know, I must define the Operation Contract and Data Contract at the contract layer. It is here where my problem lies: how do I declare the data contract and its data members when the POCO is situated in another layer?
I have tried defining an interface and having both classes implement it, but I come up against a problem when I get the objects from the database and then pass them through the contract: the contract does not know the object being passed to it, even though it shares an interface.
Anyway, hope that is clear (as mud!), and if anyone can advise me on a suitable solution I would be much obliged.
P.S. Using C# in VS2015
Looks to me like what you need is another class specifically built for the WCF layer that contains all the properties and attributes you need to use, and then something like AutoMapper to copy the contents across to your WCF object.
Making use of the Factory design pattern could also be of help here.
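A hedged sketch of that idea; CustomerPoco stands in for the DAL POCO and the property names are invented for illustration:
using System.Runtime.Serialization;

// Stand-in for the POCO living in the DAL project.
public class CustomerPoco { public int Id { get; set; } public string Name { get; set; } }

// Contract-layer class: the [DataContract]/[DataMember] attributes live here,
// so the DAL stays free of WCF concerns.
[DataContract]
public class CustomerData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// A simple factory (per the Factory suggestion) instead of AutoMapper,
// if you prefer the mapping to be explicit.
public static class CustomerDataFactory
{
    public static CustomerData FromPoco(CustomerPoco poco)
    {
        return new CustomerData { Id = poco.Id, Name = poco.Name };
    }
}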

Enterprise Design Pattern Question

Something has been on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will not be persistence aware and will not have any dependencies on any data access layer.
However, using this approach, how would one object in the domain model call other objects without being able to load them from persistence? That would couple everything together, which is what I'm trying to avoid.
e.g. an Order Object would need to check an Inventory object and would obviously need to tell the Inventory object to load in some way, or populate it somehow.
Any thoughts?
You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.
One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.
To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.
Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data-access service can just depend on the interface via constructor arguments. Then you can leverage a dependency-injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
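A minimal sketch of what that can look like; the Order/Inventory shapes are invented, and the point is only where the interface lives and how the dependency arrives:
// Domain model layer: owns the interface and the POCO it returns.
public class InventoryItem
{
    public int ProductId { get; set; }
    public int QuantityOnHand { get; set; }
}

public interface IInventoryReader
{
    InventoryItem GetByProduct(int productId);
}

// A domain class depends only on the interface, supplied via the constructor.
public class Order
{
    private readonly IInventoryReader _inventory;

    public Order(IInventoryReader inventory)
    {
        _inventory = inventory;
    }

    public bool CanFulfil(int productId, int quantity)
    {
        // the domain object asks for data through the interface; it never knows
        // (or cares) which data access implementation answers
        return _inventory.GetByProduct(productId).QuantityOnHand >= quantity;
    }
}

// The data access layer implements IInventoryReader against the actual data store,
// and a DI container (or the service layer) supplies that implementation to Order.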
Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the Domain Model truly separate from the DAL? And yes, I'm serious and you should think about this because it is exceedingly rare for an RDBMS to actually be swapped out in favor of a different one for an existing project. Quite frankly it is much more common for the language the app was written in to be replaced than the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely understand why they are Concerned with the Separation to begin with.
I bring these up because more often than not applications can benefit from a tighter coupling to the underlying data model. Never mind that most ORMs almost enforce a tight coupling due to the nature of code generation. I've seen lots of supposedly SoC projects come to a crash during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO.
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the actual database itself. At the same time there are cases where the BL needs to live in or very near your domain classes. With BL spread in such a way, can you truly separate these two items anyway? Even those who hate the idea of putting BL in a database will fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.
You should take a look at Martin Fowler's Repository and UnitOfWork patterns for how to use interfaces in your system.
Until now I have seen that an application can be layered well into three layers, Presentation --> Logic --> Data, plus Entities (or Business Objects). In the Logic layer you can use a pattern such as Transaction Script or Domain Model; I'm assuming you're using the latter. The Domain Model can use a Data Mapper for interacting with the data layer and creating business objects, but you can also use the Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally I use Transaction Script because it is simpler than Domain Model.
One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, etc.
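For a rough idea of the shape this takes with Castle DynamicProxy (the entity, loader and property names here are invented, and a real ORM proxy does considerably more):
using Castle.DynamicProxy;

public class Inventory { public int QuantityOnHand { get; set; } }

public interface IInventoryLoader { Inventory LoadForOrder(int orderId); }

// Persistence-ignorant domain entity; Inventory is virtual so the proxy can
// intercept it (one of the "concessions" mentioned above).
public class Order
{
    public int Id { get; set; }
    public virtual Inventory Inventory { get; set; }
}

// Interceptor the DAL injects into proxied Order instances to lazy-load secondary data.
public class LazyInventoryInterceptor : IInterceptor
{
    private readonly IInventoryLoader _loader;

    public LazyInventoryInterceptor(IInventoryLoader loader) { _loader = loader; }

    public void Intercept(IInvocation invocation)
    {
        invocation.Proceed();

        // on first read of the Inventory property, go and fetch it
        if (invocation.Method.Name == "get_Inventory" && invocation.ReturnValue == null)
        {
            var order = (Order)invocation.InvocationTarget;
            order.Inventory = _loader.LoadForOrder(order.Id);
            invocation.ReturnValue = order.Inventory;
        }
    }
}

// In the DAL, return proxied instances instead of plain entities:
// var order = new ProxyGenerator().CreateClassProxy<Order>(new LazyInventoryInterceptor(loader));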
Moreover, it's often unnecessary: especially for line-of-business applications that don't have onerous performance requirements, you can consider eagerly loading all the relevant data and just bring the inventory items up with the order.
I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an interface that your Data Access layer implements. Each of the DAL methods should take a list of parameters and return a DataTable.
The service layer would instantiate the DAL through the interface and pass that reference to your Domain Model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
using System;
using System.Data;

interface IDBModel
{
    DataTable LoadUser(Int32 userId);
}

class MyDbModel : IDBModel
{
    public DataTable LoadUser(Int32 userId)
    {
        // make the appropriate DB calls here and return the resulting data table
        return new DataTable();
    }
}

class User
{
    public User(IDBModel dbModel, Int32 userId)
    {
        DataTable data = dbModel.LoadUser(userId);
        // assign properties... load any additional data as necessary
    }

    // You can do cool things like call User.Save()
    // and have the object validate and save itself to the passed-in
    // data model. Makes for simpler coding.
}

class MyServiceLayer
{
    public User GetUser(Int32 userId)
    {
        IDBModel model = new MyDbModel();
        return new User(model, userId);
    }
}
With this mechanism, you can actually swap out your db models on demand. For example, if you decide to support multiple databases, you can have code that is specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data and you can keep any necessary business logic within the domain model. Another point is that the Domain Model doesn't have a direct dependency on the data layer, which preserves your mocking ability for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.
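To make the mocking point concrete, here is a small sketch of a fake IDBModel (reusing the interface from the example above; the columns are made up) that lets you construct a User in a unit test without a database:
class FakeDbModel : IDBModel
{
    public DataTable LoadUser(Int32 userId)
    {
        // hand back a canned row instead of hitting a real database
        var table = new DataTable();
        table.Columns.Add("UserId", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(userId, "Test User");
        return table;
    }
}

// In a test: var user = new User(new FakeDbModel(), 42);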

C# Data Layer and DTOs

I have recently joined a company that uses typed datasets as their 'DTO'. I think they are really rubbish and want to change them to something a little more modern and user friendly. So I am trying to update the code so that the data layer is more generic, i.e. using interfaces etc. The other guy does not know what a DTO is, and we are having a slight disagreement about how it should be done.
Without trying to sway people to my way of thinking, I would like to get impartial answers from you people as to what layers the DTO can be present in: all layers (DAL, BL and Presentation), or only a small subset of these layers.
Also, whether IList objects should or should not be present in the DAL.
Thanks.
It really depends on your architecture.
For the most part you should try to code to interfaces; then it doesn't really matter what your implementation is. If you return ISomething it could be your SomethingEntity or your SomethingDTO, but your consuming code couldn't care less as long as it implements the interface.
You should be returning an IList/ICollection/IEnumerable over a concrete collection or array.
Properties should not return arrays
Do not expose generic lists
What you should try to do first is separate your code and make it loosely coupled by inserting some interfaces between your layers such as a repository for your DataAccess layer. Your repository then returns your entities encapsulated by an interface. This will make your code more testable and allow you to mock more easily. Once you have your tests in place you can then start to change the implementations with less risk.
If you do start to use interfaces I would suggest integrating an IoC such as Windsor sooner rather than later. If you do it from the get go it will make things easier later on.
One thing is that DataSets are poor for interoperability. Even typed datasets are not so compatible when it comes to consuming them from a non-.NET client. Refer to this link. If you have to achieve interoperability then fight hard for DTOs; otherwise try to make your team understand DTOs over a period of time, because datasets are not so bad after all.
On the part of interfaces, yes, you should be exposing interfaces. For example, if you are returning List<T> from the DAL, you should instead return IList<T>. Some people go to the extent of returning only IEnumerable<T>, because all you need is the capability to enumerate. But while doing it, don't become an astronaut architect.
In my applications I have found that returning IList<T> instead of List<T> pollutes my code base with code like this:
// consider personCollection as IList<Person>
(personCollection as List<Person>).ForEach(p => { /* do something */ });
So I personally try to maintain a balance between returning an interface and a concrete object. If you ask me what I am doing right now, I will tell you I am returning List<T>. I am influenced not to become an astronaut architect.
I always use DTOs, never DataTables. But I only use them to transfer between the BL and the DAL. My presentation layers are often only aware of the business and service layers (in the service-oriented case).
The benefits I can see to using DTOs rather than DataTables:
easy refactoring
easy diagram production
cleaner, more readable code, especially in the DAL's unit tests
By definition a DTO is a data transfer object, used to (wait for it) transfer data from one layer to another.
DTOs can be used across all layers and I have used them well with web services.
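For what it's worth, a tiny sketch of the kind of DTO and interface-based repository signature being discussed (the Person shape is invented):
using System.Collections.Generic;

// A plain DTO: only properties, no behaviour, no persistence awareness,
// so it can safely cross layer boundaries (and web service boundaries).
public class PersonDto
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// The DAL exposes an interface type rather than a concrete List<T> or a DataSet.
public interface IPersonRepository
{
    IList<PersonDto> GetAll();
    PersonDto GetById(int id);
}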

Generic business layer design with repository pattern with C# (how)

I am using a repository design with web applications: the repository (data layer) exposes model objects to the business layer, which are then consumed by the presentation layer (UI). Objects or lists of objects are passed between the layers with this type of implementation.
I am finding my business layer is becoming a series of manager-type classes which all have common GetAll, GetById, Save and Delete methods. This is very common with a number of very small, simple objects. This is the area of concern, or opportunity for improvement: the series of smaller business manager classes. I am looking for options to avoid the whole series of smaller business manager classes that map to the smaller objects and only get/save/delete them.
The bigger objects which are closer to the functionality to the application have a number of methods in addition to the get/save/delete type methods (these manager classes are ok).
I am thinking there is a design pattern or implementation which would allow me to have one manager class in the business layer that accepts an object of a particular type, and whose get/save/delete methods know which repository object to spin up and pass the object to for the operation.
The benefit would be that I could have one generic manager class passing saves/deletes/gets for the smaller objects to the appropriate repository class, thereby reducing the many smaller manager classes.
Ideas on how to accomplish this?
thx
I would not go that way. The business layer classes can be as simple as code that forwards to the data layer, and it is true that they can be annoying to write, but they exist for a couple of reasons: validation, security, taking some actions based on business rules.
If you try to make a generic business layer, it will be hard to include all the various things that a business class could do. The generic business layer will become much more complex than the one you currently have. Testing will be much harder. Adding a new business rule will be hard, too.
Sorry, this is not what you wanted to read, but I have already gone the route of generic systems and have always had lots of regrets.
The idea behind a repository (or a DAO) is to further abstract data access concerns away from the business layer, in order to simplify that layer's focus on the "business" of a given domain.
That said, there are many common plumbing-type concerns that are reusable across different applications, some of which do lend themselves to a supertype in the business layer. Consider the cross-cutting concern of being able to retrieve a given business entity by some Id from a database, and you might come to the conclusion that it is in fact useful to have an Id property in a business layer supertype. It might even be useful if entities considered that Id when determining equality, and so on.
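A rough sketch of that supertype idea, plus the kind of generic CRUD manager the question asks about for the simple entities (all names here are illustrative, not prescriptive):
// Layer supertype: every business entity gets an Id, and equality is based on it.
public abstract class EntityBase
{
    public int Id { get; set; }

    public override bool Equals(object obj) =>
        obj is EntityBase other && other.GetType() == GetType() && other.Id == Id;

    public override int GetHashCode() => Id.GetHashCode();
}

public interface IRepository<T> where T : EntityBase
{
    T GetById(int id);
    void Save(T entity);
    void Delete(T entity);
}

// Simple entities can share one generic manager for plain CRUD, while the richer
// entities keep their dedicated manager classes with real business rules.
public class CrudManager<T> where T : EntityBase
{
    private readonly IRepository<T> _repository;

    public CrudManager(IRepository<T> repository) { _repository = repository; }

    public T GetById(int id) => _repository.GetById(id);
    public void Save(T entity) => _repository.Save(entity);
    public void Delete(T entity) => _repository.Delete(entity);
}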
Now I do believe that Timores is right in principle, and trying to write one application that fits all domains is both incredibly painful and totally fruitless, but I also believe that the art of the profession is knowing how to use a variety of tools and when to apply which one, and some core infrastructure code should be in your tool belt.
For a good idea of a framework concept for a web app that has been road tested, take a look at SharpArch.
HTH,
Berryl

.NET Entity framework project layout (architecture)

I'm trying to determine how best to architect a .NET Entity Framework project to achieve a nice layered approach. So far I've tried it out in a browser-based game where the players own and operate planets. Here's how I've got it:
Web Site
This contains all the front end.
C# Project - MLS.Game.Data
This contains the EDMX file with all my data mappings. Not much else here.
C# Project - MLS.Game.Business
This contains various classes that I call 'Managers', such as PlanetManager.cs. The planet manager has various static methods that are used to interact with the planet, such as getPlanet(int planetID), which returns a generated-code object from MLS.Game.Data.
From the website, I'll do something like this:
var planet = PlanetManager.getPlanet(1);
It returns a Planet object from the MLS.Game.Data project (generated from the EDMX). It works, but it bothers me to a degree because it means that my front end has to reference MLS.Game.Data. I've always felt that the GUI should only need to reference the Business project.
In addition, I've found that my Manager classes tend to get very heavy. I'll end up with dozens of static methods in them.
So... my question is - how does everyone else lay out their ASP EF projects?
EDIT
After some more thought, there are additional items which bother me. For example, let's say I have my Planet object, which again is generated code from the wizard. What if a time came that my Planet needed a specialized property, say "Population", which is a calculation of some sort based on other properties of the Planet object? Would I want to create a new class that inherits from Planet and then return that instead? (Hmm, I wonder if those classes are sealed by EF?)
Thanks
You could try the following to improve things:
Use EF to fetch DTOs in your Data layer, then use these DTOs to populate richer business objects in your Business layer. Your UI would only then need to reference the Business layer.
Once you have created the rich business objects, you can begin to internalise some of the logic from the manager classes, effectively cleaning up the business layer.
I personally prefer the richer model over the manager model because, as you say, you end up with a load of static methods, which you inevitably end up chaining together inside other static methods. I find this both too messy and, more importantly, harder to understand, and it makes it harder to guarantee the consistency of your objects at any given point in time.
If you encapsulate the logic within the class itself you can be more certain of the state of your object regardless of the nature of the external caller.
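To illustrate, a hedged sketch of the richer model: a business-layer Planet built from whatever the data layer returns (PlanetDto and its properties are invented here), so behaviour like the computed Population from the question lives on the object itself:
// Stand-in for whatever the data layer hands back (an EF entity or a DTO).
public class PlanetDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int BasePopulation { get; set; }
    public double GrowthRate { get; set; }
}

// Business layer: a richer Planet that owns its own behaviour, built from the DTO,
// so the web site only ever references the Business project.
public class Planet
{
    public int Id { get; }
    public string Name { get; }
    public int BasePopulation { get; }
    public double GrowthRate { get; }

    // derived value lives on the object itself rather than in a static manager
    public int Population => (int)(BasePopulation * GrowthRate);

    public Planet(PlanetDto dto)
    {
        Id = dto.Id;
        Name = dto.Name;
        BasePopulation = dto.BasePopulation;
        GrowthRate = dto.GrowthRate;
    }
}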
A good question by the way.
IMHO, your current layout is fine. It's perfectly normal for your UI to reference the 'Data' layer, as you are calling it. I think that perhaps your concern is arising due to the terminology. What you have described as 'Data' is more often referred to as a business objects layer (BOL). A common layout would then be to have a business logic layer (BLL), which is your 'Managers' layer, and a data access layer (DAL). In your scenario, LINQ to Entities (presuming you will use that) is your DAL. A normal reference pattern would then be:
UI references BLL and BOL.
BLL references BOL and DAL (LINQ to Entities).
Have a look at this series of articles for more detail.
As for your second question (after the EDIT): if you need or want to add features to your EF objects, you can use partial classes. Right-click the EDMX file and select View Code.
Or, if this isn't enough for you, you can ditch the designer and write your own EF-enabled classes.
There is a (brief) discussion of both options here -
http://msdn.microsoft.com/en-us/library/bb738612.aspx
As for your second question in the "EDIT" section:
If I'm not mistaken, the classes generated by EF are not sealed, and they are PARTIAL classes, so you could easily extend those without touching the generated files themselves.
The generated class will be:
public partial class Planet : global::System.Data.Objects.DataClasses.EntityObject
{
...
}
so you can easily create your own "PlanetAddons.cs" (or whatever you want to call it) to extend this class:
public partial class Planet
{
    public int Population { get; set; }
    ...
}
Pretty neat, eh? No need to derive and create artificial object hierarchies....
Marc
I'm no expert, but that sounds pretty good to me. It's similar to what I have in my solutions, except I just merge the EF project with the business project. My solutions aren't that big, and my objects don't require a lot of intelligence, so it's fine for me. I too have a ton of different methods for each of my static helper classes.
If you don't want the presentation layer knowing about the data access layer, then you have to create some intermediary classes, which would probably be a lot of work. So what's the problem with your current setup?
Your layout looks ok.
I would have added a Utility/Common layer
Web UI
Business Layer
Dataobjects
Utilities layer
I would add DTOs to your Business layer that are "dumb object" representations (i.e. only properties) of the Entities in your data layer. Then your "Manager" classes can return them, such as:
class PlanetManager
{
    public static PlanetDTO GetPlanet(int id) { /* ... */ }
}
and your UI can deal only with the BLL via POCOs; the Manager (what I would call a "Mapper" class) handles all the translating between the objects and the data layer. Also, if you need to extend the class, you can have a "virtual" property on the DTO object and have the Manager translate that back to its components.
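For concreteness, a small sketch of such a DTO; the properties are invented, and Population is the kind of "virtual" (computed) property mentioned above that the Manager fills in when mapping:
// Dumb object returned by the Business layer so the UI never references the EF entities.
public class PlanetDTO
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Not stored directly; the Manager/Mapper calculates it from the entity when
    // building the DTO, and translates it back to its components if needed.
    public int Population { get; set; }
}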
