So I'll try and play devil's advocate on this one...
Hypothetically, there is a Framework that services two or three different websites. One basic function of the Framework is to handle all calls to a certain DB. When making a DB call, the websites call a Framework DataSource object and get a generic Framework data object back.
Now, for the websites to retrieve properties/methods that are specific to their needs, we've got two solution options:
1. Create a new class, extending or wrapping the generic data object, exposing more domain-friendly properties and keeping any domain-specific functionality inside this new class.
2. Instead of creating a new class, create extension methods inside the Framework to service each of these websites, so everything is contained inside the Framework and can be shared between websites if it's ever needed one day.
Just to clarify, here are some examples:
Properties:
NewObject.GetSiteSpecificProperty
GenericObject.GetProperty("GetSiteSpecificProperty") or GenericObject.GetSiteSpecificProperty()
Methods:
NewObject.DoSomethingSpecificToThisWebsite()
GenericObject.DoSomethingSpecificToThisWebsite()
So what solution would you opt for? 1 or 2?
Cheers.
In my opinion, when designing a Framework you want to keep as many solution-specific aspects out of the Framework as possible and have the calling entities handle them.
Now, I'm not sure quite how your framework will be used or by how many different websites/projects, but going with option (2) means that whenever a new website is added, the Framework itself has to be changed to provide that site's functionality. The work of using the Framework in a custom way should be handled by the websites, not by the Framework. If this Framework ever grows to serve 10 or even 100 websites, that becomes an absolute nightmare to maintain, and your Framework ends up looking much less like a framework and more like a solution-specific product. Going with (1) is a much cleaner solution: keep your Framework as reusable and solution-agnostic as possible.
If you are designing a framework that will be used by many different projects and is meant to be around for a while, I'd recommend reading Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition).
Generally, if you control the source of the class you're extending, I would extend it by inheritance instead of extension methods. Extension methods are great for adding custom functionality to classes you don't control, like .NET built-ins (String, Enum, IEnumerable) and other third-party objects. However, they can be hard to organize, and they're technically static methods, which some developers prefer to keep to a minimum.
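For illustration, here's a minimal extension method on a type you don't control; the method itself is invented for the example:

// Extension methods must live in a static class; the "this" modifier on
// the first parameter makes the method appear on the extended type.
public static class StringExtensions
{
    public static string ToTitleCase(this string value)
    {
        if (string.IsNullOrEmpty(value))
            return value;

        return System.Globalization.CultureInfo.CurrentCulture
            .TextInfo.ToTitleCase(value);
    }
}

// Usage: "hello world".ToTitleCase() returns "Hello World".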
You may also run into namespace and method-resolution trouble by going with extensions. Say you put the extension methods into site-specific libraries: if one site ever has to do the same site-specific thing as another, you must either reference one site's library containing the extension method from the other site (exposing other things you may not want your code to know about, and possibly creating duplicate objects or extensions), or clone the code (violating DRY).
In my opinion, it's a better design to create a base class and use overrides for your site-specific code. Although extension methods could do it, they just don't seem meant for this type of operation.
Now, if you're looking for a way to get different values from a shared framework on different websites, it seems like the Web.config would suit that need. Each site has its own Web.config; could you populate the site-specific property values you need in there and have a single function to retrieve them?
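As a rough sketch of that idea (the key name is made up): each site's Web.config carries its own value, and one shared helper reads it.

// In each site's Web.config:
//   <appSettings>
//     <add key="SiteSpecificProperty" value="whatever this site needs" />
//   </appSettings>

using System.Configuration; // needs a reference to System.Configuration

public static class SiteSettings
{
    // The same call works in every site; only the Web.config value differs.
    public static string GetSiteSpecificProperty()
    {
        return ConfigurationManager.AppSettings["SiteSpecificProperty"];
    }
}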
I would go for (1) because it keeps the framework general (and reusable) and keeps specific functionality where it's used, which is where I would look for it if I were a maintenance programmer.
To share functionality, I'd create a base wrapper class that the specific wrappers derive from.
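A minimal sketch of that shape (the type and member names are hypothetical; GenericDataObject and GetProperty stand in for the generic Framework data object described above):

// Shared behaviour lives in the base wrapper; each site derives its own.
public abstract class DataObjectWrapper
{
    protected DataObjectWrapper(GenericDataObject inner)
    {
        Inner = inner;
    }

    protected GenericDataObject Inner { get; private set; }
}

public class SiteADataObject : DataObjectWrapper
{
    public SiteADataObject(GenericDataObject inner) : base(inner) { }

    // Domain-friendly property specific to site A.
    public string SiteSpecificProperty
    {
        get { return (string)Inner.GetProperty("SiteSpecificProperty"); }
    }
}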
For my specific context I control the target classes. They were auto-generated based on XSDs and have huge overlaps because they represent different versions of the same class.
Each version is a huge C# class of over 5,000 lines.
Support can't be dropped for old versions. This means we always need to be able to map the domain class to several different versions and back again. There are always small but breaking changes from version to version. More than 90% of the target class is always the same, even if the code is duplicated for each version.
Currently there is one big mapping for each format, which is a horror. There is so. much. duplicated. code. Furthermore, developers tend to make updates only where they need them and skip everything else, so individual versions drift out of sync: one version gets updated to do something that the other versions don't. This is also not ideal.
So my question to you is: What strategy can you use for this kind of mapping?
Given the size of your classes, and having to maintain multiple versions, I'd suggest serializing and deserializing. Assuming the versions otherwise approximate one another, Newtonsoft.Json doing JsonConvert.DeserializeObject<TargetClass>(JsonConvert.SerializeObject(sourceClass)) should solve it, though I've not worked with such large models, so I have no idea how performant it is.
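A rough sketch of that round trip with Newtonsoft.Json (the class names are placeholders, and I can't vouch for performance on classes this large):

using Newtonsoft.Json;

public static class VersionMapper
{
    // Copies every property that matches by name and type between the two
    // versions; anything that exists only in the target keeps its default.
    public static ClassV2 MapV1ToV2(ClassV1 source)
    {
        string json = JsonConvert.SerializeObject(source);
        return JsonConvert.DeserializeObject<ClassV2>(json);
    }
}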
Alternatively, you could use a T4 template (if you're not on .NET Core, anyway) to generate the mapping using reflection into a common method or whatever.
As for preventing the developer problem: interfaces and base classes that define as much of this centrally as possible, and code reviews to ensure that developers make their changes at the lowest layer they possibly can.
You can do some tricky things with inheritance and using alias directives, I'm pretty sure.
Something dumb like
using OldVersion = path.to.the.class.CantRenameThis;

class CantRenameThis : OldVersion { }
We ended up with a solution that achieved the main targets:
Decent compile-time safety to spot mapping errors
De-duplication of code
No messing with the auto-generated code
We did this by exploiting the fact that the auto-generated classes are generated as partial, which means we can extend them.
We ended up creating hierarchies of interfaces/classes looking like this:
ClassV1 implements IClassVerySpecificV1
ClassV2 implements IClassVerySpecificV2
IClassVerySpecificV1 implements SpecificA, SpecificB, SpecificC and IClassBasic
IClassVerySpecificV2 implements SpecificB, SpecificC, SpecificD and IClassBasic
A mapper would then look like:
ClassV1Mapper requires a SpecificAMapper, SpecificBMapper, SpecificCMapper and ClassBasicMapper
ClassV2Mapper requires a SpecificBMapper, SpecificCMapper, SpecificDMapper and ClassBasicMapper
This way we could map 90% of everything by just throwing everything that belongs to IClassBasic into a ClassBasicMapper.
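A condensed sketch of that layout (the members are invented and only the shape matters; DomainClass stands in for our domain model, and the generated halves of the partial classes supply the actual properties):

// The generated classes are partial, so the version interfaces can be
// attached in a separate file without touching the generated code.
// (The generated half of each partial class already contains the members.)
public partial class ClassV1 : IClassVerySpecificV1 { }
public partial class ClassV2 : IClassVerySpecificV2 { }

public class DomainClass { public string SharedField { get; set; } }

public interface IClassBasic { string SharedField { get; set; } }
public interface ISpecificA { int FieldInSomeVersionsOnly { get; set; } }   // "SpecificA" above

public interface IClassVerySpecificV1 : ISpecificA, IClassBasic { }
public interface IClassVerySpecificV2 : IClassBasic { }

// The shared mapper sees only IClassBasic, so it covers the ~90% overlap;
// each per-version mapper composes it with the version-specific mappers.
public class ClassBasicMapper
{
    public void Map(IClassBasic source, DomainClass target)
    {
        target.SharedField = source.SharedField;
    }
}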
We did run into some issues however:
As you can already guess, we end up with a LOT of interfaces. More than you want.
Sometimes a field exists across versions but has different (enum) values. Our domain model would have the superset, with an attribute specifying which values were valid for which versions (see the sketch below).
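For the enum case, such an attribute might look roughly like this (names are purely illustrative):

using System;

[AttributeUsage(AttributeTargets.Field)]
public class ValidInVersionsAttribute : Attribute
{
    public string[] Versions { get; private set; }

    public ValidInVersionsAttribute(params string[] versions)
    {
        Versions = versions;
    }
}

public enum PaymentKind
{
    [ValidInVersions("V1", "V2")] Invoice,
    [ValidInVersions("V2")] DirectDebit   // only valid from V2 onwards
}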
MS stack developer historically.
I have committed to retooling to the following stack
angular -> ms web.api2 -> C# business objects -> sql server
Being old, I develop the database from requirements and use CodeSmith to generate the business logic layer. (Yes, I have heard of Entity Framework; I even tried it once.)
As I embrace Angular and Web API 2,
I find that Angular wants me to write a model on the front end. This seems to be just a data structure; I can't even add helper methods to it.
So I also often write a class with helper methods that takes an instance of the model. Kind of ugly, but it does marry structure and logic.
I find that Web API 2 wants me to write a model. This again seems to be just a data structure. I am exploring the dynamic data type, but really it doesn't buy me much: instead of writing a class, I'm writing a mapping function.
The question is this:
Is there any way around having 3+ copies of each class spread across the stack?
CodeSmith is a very capable code generator... it can gen multiple files... but...
If it's just a couple of data members and three places, I can copy, paste, edit and get it done.
Just seems to me that committing to keeping a data structure in sync in three different environments is setting oneself up for a lot of work.
I have spent the last 15 years trying to shove as much code as I can into a framework of inheritable classes so I can keep things DRY.
Am I missing something? Are there any patterns that can be suggested?
[I know this isn't a question tailored for SO, but it is where all the smart people shop. Downvote me if you feel honor bound to do so.]
Not entirely familiar with how CodeSmith generates its classes, but if they are just plain-old-CLR-objects that serialize nicely, you can have Web API return them directly to your Angular application. There are purists that will frown upon this, but depending on the application, there may be a justification.
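For example (controller, repository and type names are assumed here), the generated objects can go straight out of a Web API 2 controller:

using System.Collections.Generic;
using System.Web.Http;

public class CustomersController : ApiController
{
    // Returns the CodeSmith-generated objects directly; Web API serializes
    // them to JSON (or XML) for the Angular client.
    public IEnumerable<Customer> Get()
    {
        return CustomerRepository.GetAll();
    }
}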
Then, in the world of Angular, you have a few options depending on your requirements/justification and your application; again, purists will definitely frown upon some of them.
Create classes that match what's coming down from the server (the more correct method).
Treat everything as "any", lose type safety, and just access properties as you need them, i.e. don't create the model (the obviously less correct method).
Find a code-generation tool that will explore the API endpoints to determine what they return and generate your TypeScript classes for you.
Personally, using Entity Framework, I (manually) create my POCOs for database interaction, have a "view"/DTO class that Web API then sends back to the client, and a definition of the object in TypeScript, but I am a control freak and don't like generated code.
As of now, my project relies heavily on WCF, which is linked to a database.
We use the classes generated from the database, which are an ORM of sorts, to do processing in our system.
I know that using DataSvcUtil we can easily extract all the classes and compile them into a DLL to be shared across our other systems.
But in our current project, we create another DLL that mirrors the WCF-generated table classes rather than using those classes directly.
So my question is: is there a best practice on this sort of thing?
And what are the pros and cons of these two methods? Are there other methods?
Thanks
Updates:
It seems like the consensus is to create your own custom classes rather than rely on those generated by WCF.
I am currently following this method, and as of now I'm just using extension methods: one to convert to the model and another to convert back to the generated type (see the sketch below).
And having your own simpler class is good for extensibility and other stuff :)
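A minimal sketch of that conversion pair (type and property names are made up):

public static class CustomerConversions
{
    // WCF-generated type -> our own model
    public static CustomerModel ToModel(this CustomerEntity entity)
    {
        return new CustomerModel { Id = entity.Id, Name = entity.Name };
    }

    // Our own model -> WCF-generated type
    public static CustomerEntity ToEntity(this CustomerModel model)
    {
        return new CustomerEntity { Id = model.Id, Name = model.Name };
    }
}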
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference. This way you can keep your interface consistent, even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference.
You will be safe from people having the wrong service reference. When generating a service reference, some properties can be changed, so users can end up with a potentially broken service reference.
You will be protected from other IDEs generating slightly different references
It's a bit easier to be backwards compatible and to pinpoint problems, as you will be 100% sure that the client is used the same way across users.
Cons of using a DLL:
You will have an additional reference.
I'm not that familiar with WCF, but I use LINQ to SQL, which I'm assuming generates the same types of classes (as does any ORM tool). I always create my own POCO classes that describe my domain model. I know there is a bit more work involved, and you are then tasked with mapping your POCO classes to your generated classes, but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns that will be used to populate them. I like the generated classes because they make it easier for me to interact with the database, but I also like the separation of having the simple domain classes; it gives me the flexibility to swap out database implementations.
It is better to have a separate DLL, as you do in your current project; decoupling is a best practice. Generating the WCF DataContracts from the database, however, is almost certainly not a good idea. It can be used for the first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer. If you were to distribute a DLL compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM/database to be tweaked as necessary without all your clients having to recompile.
On the con side, decoupling like this is a bit slower to implement up front, so for a very small project it can be overkill; but if you are working cross-team or in any way distributed, then it is essential.
I am working with the Braintree API for .NET to take care of processing payments. Their business does a fine job of processing payments and the API wrapper works for straightforward use. However, the provided API wrapper begins to fail quickly upon closer investigation or more strenuous use; for example, it contains hand-rolled enums. My problem comes with unit testing my code that uses this wrapper.
In order to do this, I essentially need to mock up my own 'fake' Braintree gateway that will have some known values in it, generate errors when requested, etc. My plan of attack was to override the functionality of the Braintree API wrapper and reroute the requests to a local in-memory endpoint. Then I could use dependency injection to link up the proper gateway/wrapper at runtime.
Initially, it seemed to be going swimmingly: despite the sins against software engineering that had been committed in the API wrapper, every method that I would need to override was miraculously marked virtual. However, that came to a screeching halt: almost every constructor in the API wrapper is marked internal. As such, I can neither inherit from these classes nor create them at whim to store for testing.
An aside: I grok internal constructors, and the reasons that one would legitimately want to use them. However, I have looked at the source code for this, and every internal constructor performs only trivial property assignments. As such, I am comfortable in claiming that a different coding practice should have been followed.
So, I'm essentially left with three options:
Write my own API wrapper from scratch. This is obviously doable, and holds the advantage that it would yield a well-engineered infrastructure. The disadvantages, however, are too numerous to list briefly.
Pull the source code from the API down and include it in my solution. I could change all of the internal constructors to be whatever I need to make them work. The disadvantage is that I would have to re-update all of these changes upon every subsequent API wrapper release.
Write wrapper classes for every single object that I need to use in the whole API wrapper. This holds the advantage of not altering the provided source code; the disadvantages are large, though: essentially rewriting every class in the wrapper three times (an interface, a Braintree API wrapper adapter, and a testable version).
Unfortunately, all of those suck. I feel like option 2 may be the least bad of the options, but it makes me feel dirty. Has anyone solved this problem already/written a better, more testable wrapper? If not, have I missed a possible course of action? If not, which of those three options seems least distasteful?
Perhaps this Stack Overflow entry could help.
Also, a random blog entry on the subject.
Since you're not testing their API, I would use a Facade pattern. You don't need to wrap everything they provide, just encapsulate the functionality that you're using. This also gives you an advantage: If you decide to ditch that API in the future, you just need to reimplement your wrapper.
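A sketch of what such a facade might look like (the interface, methods and types are invented; the actual Braintree call is only indicated by a comment):

using System;

// The rest of the codebase depends on this interface, never on Braintree.
public interface IPaymentGateway
{
    PaymentResult Charge(decimal amount, string paymentToken);
}

public class PaymentResult
{
    public bool Succeeded { get; set; }
    public string Error { get; set; }
}

public class BraintreePaymentGateway : IPaymentGateway
{
    public PaymentResult Charge(decimal amount, string paymentToken)
    {
        // ...call the Braintree wrapper here and translate its response
        // into our own PaymentResult...
        throw new NotImplementedException();
    }
}

// Unit tests inject this instead of the real gateway.
public class FakePaymentGateway : IPaymentGateway
{
    public PaymentResult Charge(decimal amount, string paymentToken)
    {
        return new PaymentResult { Succeeded = true };
    }
}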
I'm in the process of starting a new project and creating the business objects and data access etc. I'm just using plain old CLR objects rather than any ORM. I've created two class libraries:
1) Business Objects - holds all my business objects; these objects are lightweight, with only properties and business rules.
2) Repository - this is for all my data access.
The majority of my objects will have child lists in them, and my question is: what is the best way to lazy load these values, as I don't want to bring back unnecessary information if I don't need it?
I've thought about having the "get" on the child property check whether it's null and, if it is, call my repository to get the child information. This has two problems from what I can see:
1) The object "knows" how to get itself I would rather no data access logic be held in the object.
2) This requires both class libraries to reference each other, which in Visual Studio causes a circular dependency error.
Does anyone have any suggestions on how to overcome this issue or any recommendations on my projects layout and where it can be improved?
Thanks
To do this requires that you program to interfaces (abstractions over implementations) and/or declare your properties virtual. Then your repository returns a proxy object for those properties that are to be loaded lazily. The class that calls the repository is none the wiser, but when it tries to access one of those properties, the proxy calls the database and loads up the values.
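Conceptually, what such a proxy does is roughly this (a hand-rolled, heavily simplified sketch with hypothetical types, purely to illustrate the mechanism):

using System.Collections.Generic;

public class OrderLine { }

public interface IOrderLineRepository
{
    IList<OrderLine> GetByOrderId(int orderId);
}

public class Order
{
    // Must be virtual so the proxy subclass can intercept access to it.
    public virtual IList<OrderLine> Lines { get; set; }
}

// The repository hands back this proxy instead of a plain Order.
public class OrderProxy : Order
{
    private readonly IOrderLineRepository _lineRepository;
    private readonly int _orderId;
    private IList<OrderLine> _lines;

    public OrderProxy(int orderId, IOrderLineRepository lineRepository)
    {
        _orderId = orderId;
        _lineRepository = lineRepository;
    }

    public override IList<OrderLine> Lines
    {
        get
        {
            // Hit the database only on first access.
            if (_lines == null)
                _lines = _lineRepository.GetByOrderId(_orderId);
            return _lines;
        }
        set { _lines = value; }
    }
}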
Frankly, I think it is madness to try to implement this oneself. There are great, time-tested solutions to this problem out there, that have been developed and refined by the greatest minds in .NET.
To do the proxying, you can use Castle DynamicProxy, or you can use NHibernate and let it handle all of the proxying and lazy loading for you (it uses DynamicProxy). You'll get better performance than any hand-rolled implementation, guaranteed.
NHibernate won't mess with your POCOs -- no attributes, no base classes; you only need to mark members virtual to allow proxy generation.
Simply put, I'd reconsider using an ORM, especially if you want that lazy loading; you don't have to give up your POCOs.
After looking into the answers provided and doing further research, I found an article that uses delegates for the lazy loading. This provided a simpler solution than using proxies or implementing NHibernate.
Here's the link to the article.
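The gist of the delegate approach is roughly this (my own sketch, not the article's code; type names are invented):

using System;
using System.Collections.Generic;

public class Order { public int Id { get; set; } }

public class Customer
{
    private IList<Order> _orders;

    // The repository assigns this delegate when it builds the Customer, so the
    // business-object assembly never needs a reference to the repository.
    public Func<IList<Order>> OrdersLoader { get; set; }

    public IList<Order> Orders
    {
        get
        {
            if (_orders == null && OrdersLoader != null)
                _orders = OrdersLoader();   // loaded only on first access
            return _orders;
        }
    }
}

// In the repository layer, after mapping the customer row:
//   customer.OrdersLoader = () => GetOrdersForCustomer(customer.Id);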
If you are using Entity Framework 4.0, you will have support for POCOs with deferred loading, and it will allow you to write a generic repository to do data access.
There are tons of articles online on the generic repository pattern with EF 4.0.
HTH.
You can get around the circular dependency issue if your lazy-loading code loads the repository at runtime (Activator.CreateInstance or something similar) and then calls the appropriate method via reflection. Of course there are performance penalties associated with reflection, but they often turn out to be insignificant in most solutions.
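A rough sketch of the reflection route (the assembly, type, and method names are placeholders):

using System;
using System.Collections.Generic;

public class Order { }   // placeholder business object

public static class LazyRepositoryLoader
{
    // Creates the repository at runtime, so the business-object assembly
    // needs no compile-time reference to the repository assembly.
    public static IList<Order> LoadOrders(int customerId)
    {
        var repositoryType = Type.GetType("Repositories.OrderRepository, Repositories");
        object repository = Activator.CreateInstance(repositoryType);

        var method = repositoryType.GetMethod("GetOrdersForCustomer");
        return (IList<Order>)method.Invoke(repository, new object[] { customerId });
    }
}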
Another way to solve this problem is to simply compile to a single DLL; you can still logically separate your layers using different namespaces, and still organise your classes by using different directories.