Do I have to really create multiple models? - c#

MS stack developer historically.
I have committed to retooling to the following stack
Angular -> MS Web API 2 -> C# business objects -> SQL Server
Being old, I develop the database from requirements and use CodeSmith to generate the business logic layer. (Yes, I have heard of Entity Framework; I even tried it once.)
As I embrace Angular and Web API 2, I find that Angular wants me to write a model on the front end. This seems to be just a data structure; I can't even add helper methods to it.
So I also often write a class with helper methods that takes an instance of the model. Kind of ugly, but it does marry structure and logic.
I find that Web API 2 wants me to write a model. This again seems to be just a data structure. I am exploring the dynamic data type, but really this doesn't buy me much: instead of writing a class, I'm writing a mapping function.
The question is this:
Is there any way around having 3+ copies of each class spread across the stack?
CodeSmith is a very capable code generator... it can generate multiple files... but...
If it's just a couple of data members in 3 places, I can copy, paste, edit and get it done.
It just seems to me that committing to keeping a data structure in sync across 3 different environments is setting oneself up for a lot of work.
I have spent the last 15 years trying to shove as much code as I can into a framework of inheritable classes so I can keep things DRY.
Am I missing something? Are there any patterns that can be suggested?
[I know this isn't a question tailored for SO, but it is where all the smart people shop. Downvote me if you feel honor bound to do so.]

Not entirely familiar with how CodeSmith generates its classes, but if they are just plain-old-CLR-objects that serialize nicely, you can have Web API return them directly to your Angular application. There are purists who will frown upon this, but depending on the application, there may be a justification.
Then, in the world of Angular, you have a few options, again, depending on your requirements/justification, and your application - again, purists will definitely frown upon some of the options.
Create classes that match what's coming down from the server (the more correct method).
Treat everything as "any", lose type safety, and just access properties as you need them, i.e. don't create the model (the obviously less correct method).
Find a code generation tool that will explore API endpoints to determine what they return, and generate your TypeScript classes for you.
Personally, using Entity Framework, I (manually) create my POCOs for database interaction, have a "view"/DTO class that Web API then sends back to the client, and a definition of the object in TypeScript; but I am a control freak, and don't like generated code.
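To make the question's "class with helper methods that takes an instance of the model" idea concrete, here is a minimal TypeScript sketch; the Employee shape and its members are hypothetical, not taken from the question:

```typescript
// Hypothetical DTO shape mirroring a server-side C# model.
interface EmployeeDto {
  firstName: string;
  lastName: string;
  salary: number;
}

// Helper class wrapping the plain data structure and adding behavior,
// so the DTO itself stays a pure mirror of the server contract.
class Employee {
  constructor(private dto: EmployeeDto) {}

  get fullName(): string {
    return `${this.dto.firstName} ${this.dto.lastName}`;
  }

  // Returns a new DTO; the original stays untouched.
  withRaise(percent: number): EmployeeDto {
    return { ...this.dto, salary: this.dto.salary * (1 + percent / 100) };
  }
}
```

The DTO stays a dumb structure that can be regenerated at will, while the wrapper carries the logic; the cost is exactly the "ugly" extra class the question mentions.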

Related

Accessing sql-db with EF in data-layer, how to pass data to service-layer?

I need to access some data in an existing SQL database and publish it using a REST service (using Web API).
In my previous, very small project I just accessed the EF context directly from my controllers, created some DTOs from my EF entities and returned them to the caller. That was simple and I got it working really fast.
Now, the project is not much bigger, but I want to do it 'the right way' this time, as everyone is talking about a layered architecture, even for a small project, so unit testing etc. is much easier.
Being a newbie on this (yes, I need to read more books) I decided to start reading tons of blog-posts about architectural design of an application and so on.
First thing was to get in touch with the various techniques for accessing the data in the database using EF (I'm using v6.2, DB-First). Some say you need a repository for each entity, some say to create a generic repository, and others say repositories are the new evil and should be avoided at all costs.
Some blog-posts I've read:
generic-dal-using-entity-framework
is-the-repository-pattern-useful-with-entity-framework
repositories-on-top-unitofwork-are-not-a-good-idea
why-entity-framework-renders-the-repository-pattern-obsolete
favor-query-objects-over-repositories
and so on.
Some others even say that your EF-generated POCOs should be separated from the 'pure' EF stuff like the EDMX: splitting-entity-framework-model-classes-separate-projects
Some of the posts are old and may be obsolete but I'm just struggling on what is the best way to accomplish my task.
Right now, I have 4 Projects:
DGO.Core: Containing my DTOs.
DGO.Data: Containing my EF stuff and 1 repository class (see below for details).
DGO.Service: Referencing DGO.Data and accessing the methods exposed by the repository class.
DGO.Webapi: Referencing all three DLLs but using the methods from the Service DLL.
I need to reference the Data-Dll to be able to inject the data-repository.
So now my db queries reside in the Data DLL (in the so-called repository class), which creates filled DTOs from my Core DLL. These DTOs are then passed to the Service DLL, which might apply some logic here and there, and then the DTO is passed to the Webapi controller.
Is this a common approach for passing these DTOs through all layers?
Or is it better to split the POCO's from the EDMX and use these directly in my service-layer.
So the direction will be 'Data-Layer' -> 'POCOs' -> 'Service-Layer' -> 'DTOs' -> Client (Controller etc.).
And where should the queries take place? I think in the data layer, but some say it should be done in the service layer. I think the data layer is responsible for querying the data and the service layer is responsible for 'working' with the data.
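That division of labor (data layer owns the queries, service layer works with the data, controller only talks to the service) can be sketched roughly as follows; all class names, data, and the business rule are hypothetical, not taken from the DGO projects:

```typescript
// Hypothetical DTO defined in the Core layer.
interface ProductDto { id: number; name: string; price: number; }

// Data layer: the repository owns the queries and returns filled DTOs.
class ProductRepository {
  private rows: ProductDto[] = [
    { id: 1, name: "Widget", price: 10 },
    { id: 2, name: "Gadget", price: 25 },
  ];
  getAll(): ProductDto[] {
    return this.rows.map(r => ({ ...r })); // stands in for a real query
  }
}

// Service layer: applies business rules, never builds queries itself.
class ProductService {
  constructor(private repo: ProductRepository) {}
  getPremium(threshold: number): ProductDto[] {
    // Hypothetical rule: only products at or above the threshold.
    return this.repo.getAll().filter(p => p.price >= threshold);
  }
}

// Controller level: only talks to the service, never to the repository.
const service = new ProductService(new ProductRepository());
const premium = service.getPremium(20);
```

Because the controller depends only on the service, the repository (and the database behind it) can change without touching the controller, which is also what makes unit testing each layer in isolation easier.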
Hope I made my problems clear. I could provide code if necessary.
Thanks!

.net/sql server creating abstract object layer architecture

I have a broad-scoped architecture question. I have multiple .NET projects and multiple databases all interacting with each other across multiple web applications, all for internal use. We would like to create an over-arching layer, somewhere, somehow, to create objects out of the data we query from a database. These objects would have functions/methods available to them, defined somewhere. The question is: what is the most efficient and flexible way to define these objects, and what are the advantages/disadvantages of each option?
For example, let's say I have multiple HR-related tables: tbl_employees, tbl_departments. When I pull an employee into an application, I then have a whole bunch of code in that project that is associated with that employee, most predominantly the functions I can apply to that employee, such as edit_contact_info, increase_salary or add_disciplinary_note. Similarly with a department and functions such as edit_department or manage_department_employees. Some functions may include some logic in them; some may just redirect to a particular page.
My question is, how or where or what can I make to classify an employee entry or even series of employee entries as an "object", meaning whenever I pull that "object" somewhere, I also have the associated actions with it. Whether I am pulling the data as a list somewhere or even as part of a data-visualization, I would like to have the appropriate functions/methods available. Even if it is in a different project.
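One way to get that "data plus its associated actions" behavior is to wrap the raw row in a domain class wherever it is used, even in another project. A minimal sketch, with hypothetical shapes and method names:

```typescript
// Hypothetical row shape as it might come back from tbl_employees.
interface EmployeeRow { id: number; name: string; salary: number; notes: string[]; }

// Domain class layered on top of the raw data: wherever the row travels,
// wrapping it restores the associated actions.
class EmployeeObject {
  constructor(private row: EmployeeRow) {}

  increaseSalary(amount: number): void {
    this.row.salary += amount;
  }

  addDisciplinaryNote(note: string): void {
    this.row.notes.push(note);
  }

  // Expose the underlying row for lists, data visualizations, etc.
  get data(): EmployeeRow { return this.row; }
}
```

Putting these wrappers in a shared library is essentially the "over-arching layer" being asked about: any project that references the library can turn a queried row back into an object with its actions attached.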
I am looking for different possibilities, not necessarily one answer, and I am not entirely sure of the best way to go about it, although I have thought about maybe creating another layer within the database that holds all the "object" definition data, or perhaps something directly within the .NET framework, but I lack the expertise to know exactly what I am talking about. From my limited knowledge, I believe I am looking for some sort of ORM (maybe in-memory) implementation, but I am not sure how to get started exactly.
Does anyone have any ideas or a direction to point me in perhaps?
I appreciate any and all help!
Edit
To be clear, what I am trying to find is something I can apply on top of projects and applications I already have up and running and in use. I would like a way to implement this over-arching object functionality on top of pre-existing MVC projects.

What's the pros and cons of using classes generated from WCF vs Creating your own model dll?

As of now, my project relies heavily on WCF which is linked to a database.
We use the classes generated from the database, which act as an ORM if you will, to do processing in our system.
I know that using DataSvcUtil, we can easily extract all the classes and compile them as a DLL to be shared across our other systems.
But in our current project, we create another DLL which mirrors the WCF generated table class rather than using those classes directly.
So my question is: is there a best practice for this sort of thing?
And what are the pros and cons of these two methods?
Are there other methods?
thanks
Updates:
It seems like the consensus is on creating your own custom classes rather than relying on those that are created by WCF.
I am currently following this method, and as of now am just using extension methods: one to convert to the model and another to convert back to the generated type.
And having your own simpler class is good for extensibility and other stuff :)
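The convert/convert-back pair described above can be sketched as two plain mapping functions; the shapes and property names below are hypothetical stand-ins for a generated WCF class and a hand-written model:

```typescript
// Hypothetical generated (wire) shape vs. the hand-written model.
interface GeneratedCustomer { CustomerId: number; CustomerName: string; }
interface CustomerModel { id: number; name: string; }

// One function per direction, mirroring the extension-method pair.
function toModel(g: GeneratedCustomer): CustomerModel {
  return { id: g.CustomerId, name: g.CustomerName };
}

function toGenerated(m: CustomerModel): GeneratedCustomer {
  return { CustomerId: m.id, CustomerName: m.name };
}
```

The mapping code is boilerplate, but it is the only place that knows about both shapes, so a change to the generated class touches exactly these two functions and nothing else.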
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference. This way you can keep your interface consistent, even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference.
You will be safe from people having the wrong service reference. When generating a service reference, some properties can be changed, so users can generate a potentially dead service reference.
You will be protected from other IDEs generating slightly different references.
It's a bit easier to stay backwards compatible and to pinpoint problems, as you will be 100% sure that the way the client is used is the same across users.
Cons of using a DLL:
You will have an additional reference.
I'm not that familiar with WCF, but I use LINQ to SQL, which I'm assuming generates the same types of classes (as does any ORM tool). I always create my own POCO classes which describe my domain model. I know there is a bit more work involved, and you are then tasked with mapping your POCO classes to your generated classes, but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns which will be used to populate them. I like the generated classes because they make it easier for me to interact with the database, but I always like the separation of having the simple domain classes; it also gives me the flexibility to swap out database implementations.
It is better to have a separate DLL as you do in your current project; decoupling is a best practice. Generating the WCF DataContracts from the database is almost certainly not a good idea, however; it can be used for the first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer, if you were to distribute a dll compiled in the way you describe you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM/database to be tweaked as necessary without all your clients having to re-compile.
On the con side, decoupling like this is a bit slower to implement up front, so if you have a very small project it can be overkill; but if you are working cross-team or in any way distributed, then it is essential.

Best solution: Extension Methods or a New Class?

So I'll try and play devil's advocate on this one...
Hypothetically there is a Framework which services 2 or 3 different web sites. One basic function of the Framework is to deal with all calls to a certain DB. When making a DB call, the websites call a Framework DataSource object and get a generic Framework data object back.
Now, for the websites to retrieve properties/methods that are specific to their needs, we've got 2 solution options.
1. Create a new class, extending or wrapping the generic data object, exposing more domain-friendly properties and keeping any domain-specific functionality inside this new class.
2. Instead of creating a new class, create extension methods inside the Framework to service each of these websites, so everything is contained inside the Framework and can be shared between web sites if one day needed.
Just to clarify here would be examples:
Properties:
NewObject.GetSiteSpecificProperty
GenericObject.GetProperty("GetSiteSpecificProperty") or GenericObject.GetSiteSpecificProperty()
Methods
NewObject.DoSomethingSpecificToThisWebsite()
GenericObject.DoSomethingSpecificToThisWebsite()
So what solution would you opt for? 1 or 2?
Cheers.
In my opinion, when designing a framework you want to keep as many solution-specific aspects out of the framework as possible and have the calling entities handle them.
Now, I'm not sure quite how your framework will be used or by how many different websites/projects, but going with option (2) means that whenever a new website is added, the framework needs work done to it in order to support that site's functionality. The work of using the framework in a custom way should be handled by the websites, not by the framework. If this framework ever grows to serve 10 or even 100 websites, this becomes an absolute nightmare to maintain, and your framework ends up looking much less like a framework and more like a solution-specific product. Going with (1) is a much cleaner solution. Basically, keep your framework as reusable and solution-agnostic as possible.
If you are designing a framework that will be used by many different projects and is designed to be around for a while I'd recommend reading Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
Generally if you control the source of the class you're extending, I would extend it by inheritance instead of extension methods. Extension methods are great for adding custom functionality to classes you don't control, like .NET built-ins (String, Enum, IEnumerable) and other third-party objects. However, they can be hard to organize, and they're technically static methods, which you usually want to minimize in the interest of startup performance and memory footprint.
You may also find yourself in namespace and method resolution trouble by going with extensions; let's say you put the extension methods into site-specific libraries. If one site ever has to do the same site-specific thing as another, you must either include one site's library containing the extension method in the other site (exposing other things you may not want your code to know about, possibly containing duplicates of objects or extensions), or clone the code (violating DRY).
In my opinion, it's a better design to create a base class and use overrides for your site specific code. Although they could do it, it just doesn't seem like extension methods were meant for this type of operation.
Now, if you're looking for a way to get different values using a shared framework on different websites, it seems like the web.config would suit that need. Each site will have its own Web.config; you can populate the specific property values you need in there and have a single function to retrieve that value.
I would go for 1 because it keeps the framework general (and reusable) and specific functionality where it's used and where I would look if I were a maintenance programmer.
To share functionality I'd create a base wrapper class that the specific wrappers derive from.
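A rough sketch of that base-wrapper approach, with all class and property names hypothetical:

```typescript
// Hypothetical generic framework data object.
class FrameworkDataObject {
  constructor(private values: Record<string, string>) {}
  getProperty(key: string): string | undefined { return this.values[key]; }
}

// Shared wrapper base: conveniences common to every site live here.
class DataWrapper {
  constructor(protected data: FrameworkDataObject) {}
  protected require(key: string): string {
    const v = this.data.getProperty(key);
    if (v === undefined) throw new Error(`missing property: ${key}`);
    return v;
  }
}

// Site-specific wrapper: domain-friendly members stay out of the Framework.
class SiteAWrapper extends DataWrapper {
  get siteSpecificProperty(): string {
    return this.require("SiteSpecificProperty");
  }
}
```

The Framework ships only the generic object; each site owns its wrapper, and shared plumbing (like the missing-property check) lives once in the base class.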

Designing an API: Use the Data Layer objects or copy/duplicate?

Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do: everything they can do through our provided web interface, they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way lies potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change, you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I first started with shared entities across all layers. Then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation), and that's not because I'm paranoid or was following some rules; I just couldn't make everything work with a single set of classes across layers... Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
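What those tools automate is essentially copying the properties two shapes share. A minimal hand-rolled sketch of that idea (not the AutoMapper API; the entity and model shapes are hypothetical):

```typescript
// Copy every property whose name appears on the blank target shape;
// everything else (e.g. internal columns) stays behind.
function inject(
  source: { [k: string]: unknown },
  blank: { [k: string]: unknown }
): { [k: string]: unknown } {
  const out: { [k: string]: unknown } = {};
  for (const key of Object.keys(blank)) {
    out[key] = key in source ? source[key] : blank[key];
  }
  return out;
}

// Hypothetical EF-style entity and the API model derived from it.
const entity = { id: 7, name: "Widget", internalCost: 3.5 };
const apiModel = inject(entity, { id: 0, name: "" }); // internalCost never leaves
```

A real mapper adds conventions, nested mapping, and configuration on top, but the payoff is the same: the per-layer classes stay separate without hand-writing every property copy.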
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
