C# Web API Project Sharing Strategy

Currently working with a client that has a Web API project/framework that they use for multiple clients. 98% of the code is reused, but they copy and paste the repository for each new client. After the copy and paste, the only things that really change are the Web.configs and, every now and then, a couple of extensions to the out-of-the-box API. E.g. maybe they stand up a custom module on the API (api/rockets/), or they extend an existing API and add some new methods and actions.
I can't find any way to pull this off with .NET. Currently I'm thinking I could solve this via git with forks, but I was wondering whether there is a way to solve it with .NET itself. Is there a way to extend an existing web project?

The git approach is one way of doing it, but I'd probably go for NuGet packages.
Extract everything that will be common to all solutions, even resources, and make a package.
Take advantage of package versioning and so on. If you get a bug, fix it in the package and simply run a NuGet update in each project, or even set up your continuous integration to rebuild and update whenever a dependency changes.
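To make that concrete, here's a rough sketch of what a client project might add once the common API lives in a package it references (assuming ASP.NET Web API 2; the controller and route names are made up for illustration):

```csharp
// Hypothetical client-specific extension in ClientX's web project.
// The shared NuGet package supplies the out-of-the-box controllers; this
// project only adds what differs for this client.
using System.Web.Http;

namespace ClientX.Api.Controllers
{
    // A brand new client-specific module, reachable at GET api/rockets
    // through the default "api/{controller}/{id}" route.
    public class RocketsController : ApiController
    {
        [HttpGet]
        public IHttpActionResult Get()
        {
            return Ok(new[] { "Falcon 9", "Saturn V" });
        }
    }
}
```

Extending an existing API from the package works the same way: derive from the packaged controller in the client project and add the extra actions there.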

One option would be to have a single web project for multiple clients that uses "Areas". That way you could turn each area on or off individually.
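For illustration, each client's customisations could sit in their own area, using MVC's standard AreaRegistration mechanism (the names below are made up):

```csharp
// Area registration for a hypothetical "ClientA" area. Registration happens
// via AreaRegistration.RegisterAllAreas() in Application_Start, so whether an
// area even ships in a client's build can be decided per client.
using System.Web.Mvc;

namespace MultiClientSite.Areas.ClientA
{
    public class ClientAAreaRegistration : AreaRegistration
    {
        public override string AreaName
        {
            get { return "ClientA"; }
        }

        public override void RegisterArea(AreaRegistrationContext context)
        {
            context.MapRoute(
                "ClientA_default",
                "ClientA/{controller}/{action}/{id}",
                new { action = "Index", id = UrlParameter.Optional });
        }
    }
}
```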
You could also put your common business logic into a NuGet package and import it for each customer. But it would be a really bad idea to fork the business logic every time. What would happen if you found a defect? You'd be forced to fix the same problem in N projects.

The approach we ended up with was really simple. We extracted everything into a common C# library and converted all of the shared stuff to git submodules. We then used Autofac Multitenant to register some client-specific overrides. It was actually really easy.
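Roughly, the Autofac side looked something like the sketch below (service names are invented, and it assumes the Autofac.Multitenant package): the defaults come from the common library, and each client only overrides what it needs.

```csharp
using Autofac;
using Autofac.Multitenant;

public interface IRocketService { string Name { get; } }
public class DefaultRocketService : IRocketService { public string Name => "OOTB"; }
public class ClientARocketService : IRocketService { public string Name => "Client A override"; }

// In a real web app this would identify the tenant from the request/host name;
// it is hard-coded here to keep the sketch short.
public class StaticTenantStrategy : ITenantIdentificationStrategy
{
    public bool TryIdentifyTenant(out object tenantId)
    {
        tenantId = "client-a";
        return true;
    }
}

public static class CompositionRoot
{
    public static MultitenantContainer Build()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<DefaultRocketService>().As<IRocketService>();

        var container = new MultitenantContainer(new StaticTenantStrategy(), builder.Build());

        // Client-specific override: only tenant "client-a" gets the custom implementation.
        container.ConfigureTenant("client-a",
            b => b.RegisterType<ClientARocketService>().As<IRocketService>());

        return container;
    }
}
```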

Related

Separating existing web project

I have inherited an existing .NET/AngularJS project. We have a need moving forward to allow customization per client, while still maintaining synchronization through version control (in this case, git).
I'm not a .NET developer; my .NET experience is limited to writing a service a couple of years ago, starting the BrowserStack tests for the project, and the occasional foray for code-review-type activities. I'm primarily a Flash/Flex developer with a fair amount of ASP Classic and some PHP experience.
My preliminary research seems to indicate that I can do what I need to do with a git subtree, but I need to find where the seams should be to separate out the custom stuff from the shared code. Right now, the HTML and JS are in the same directory as the web services. My first order of business will be to separate those out, but I don't completely understand
Why everything's all in one place to begin with
What the implications are of moving things (project settings, paths, etc.)
When I wrote the service way back, I do remember that we had to scrap it because the server the site was on didn't support that version of .NET, and cross-domain restrictions meant I couldn't host the service on a different server where it would have worked. I know that things have changed and there's now a way to allow that, but I figure that's the sort of problem I should be looking to avoid as I do this.
I figure I can't be the first person needing to make this kind of separation in a project that I think started from the monolithic web project template, but because of a short deadline and a lack of knowledge of .NET, I'd feel better if someone could point me in the right direction or at least alert me to some of the gotchas I should plan to encounter.
Are you trying to decouple the projects? If so, then this might be a good help:
http://www.codeproject.com/Articles/439688/Creating-ASP-NET-application-with-n-tier-architect
One of my recent projects was almost the same as what you describe above, so I ended up scrapping the old version, creating a brand new project, and decoupling the related pieces in the solution.
The best way of understanding the structure is to make sure you separate the client side (JavaScript/HTML/CSS) and the server side (EF/stored procedure calls/DTOs, etc.) by creating separate projects in the same solution.
Hope this helps.
So I kept digging, and I finally found a pair of tutorials that address exactly this issue.
Creating an empty ASP.Net Project Powered by Angular JS Using Visual Studio
AngularJS Consuming ASP.NET Web API RESTful Services
In a nutshell, you copy the client project's URL from its properties panel into the service project's properties panel, add '/api' to the end of the URL, and allow VS to create a virtual directory for you.
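For context, the '/api' part isn't magic; it comes from the Web API route template in the service project, which (assuming ASP.NET Web API 2) typically looks like this:

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Attribute routing plus the conventional default route, which is
        // where the "api/" prefix in the URL comes from.
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```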
Now for my next trick, figuring out how to publish it...

Is it possible to set up a base project for use across multiple ASP.NET MVC projects?

My team lead handed this one to me, and I'm a bit stumped. We have just started using ASP.NET MVC for web development in our shop, and there are common design and functionality that we would like to be able to use across multiple sites.
So far, I have looked at creating a custom template with the common elements, but the downside is that updates to the template (as far as I can tell) do not automatically get pushed to projects created from it. Since having changes automatically propagate to the consuming projects is a requirement, custom templates won't work for me.
My question is, is it possible to set up a base project for use across multiple ASP.NET MVC projects, where updates to the base get propagated to the consuming projects? If you have any experience in this field, I would certainly appreciate some direction. My apologies if this question seems elementary to you, but this is my first real foray into ASP.NET MVC.
I've found that the best method for sharing resources between disparate projects is to create your own NuGet packages. These can contain anything from a class library with reusable classes, enums, extension methods, etc. to entire web applications complete with controllers, views, JavaScript, CSS, etc. The scope is entirely up to how much commonality you can abstract from your projects. You can then set up your own private NuGet repository to hold these so you don't have to publish them to the whole world. (Although, if you do create something that would benefit others as well, by all means do share it on the official NuGet repo.)
Setting everything up is pretty trivial. I learned how to create NuGet packages and set up a private repo in a day. Here are some resources to get you started:
Official NuGet documentation for creating and deploying packages
Using the Package Explorer application to create packages via a GUI
Official nuspec (the package manifest file) reference
Hosting your own NuGet feeds
Alternate method for creating your own repository with SymbolSource integration
SymbolSource also offers private repos, remotely hosted on their servers, gratis. Some enterprise environments may not like having their code "in the cloud", but if you can get by with it, this is by far the easiest way to get going.
From experience, the company I work for has found that whilst there are common design and functionality elements across our projects, the uncommon elements can be too broad, which outweighs the need to have some form of base project. Using custom project templates also becomes a maintenance nightmare, so avoid those.
Instead we've opted to document how a project should be set up for particular designs, and it's up to the team lead to decide which bits are needed for the particular project they are working on.
Where there are functional overlaps we've considered (but not actually yet done) creating a common library or libraries with their own development lifecycle, and then setting up our own NuGet server to distribute the common library to the other projects. We haven't done this yet, mainly because, again, the differences between the projects we have worked on tend to be large enough for this not to be warranted.
But from the sound of what you're describing, NuGet packages or something similar could be the way to go in your case.
While I don't think there's a way to set up a base project that everything else inherits from, you could quite easily set up a common library project that all others reference. It could include base classes for all the common things you'll be using (e.g. ControllerBase).
This way, updating the library project will allow new functionality to be added to all other projects. You could configure templates so that the common base classes are used by default when adding new elements.
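As a rough sketch (the names here are made up), the shared library could hold a base controller with the common behaviour, and each site's controllers just inherit from it:

```csharp
// In the shared class library:
using System.Web.Mvc;

namespace Company.Web.Common
{
    public abstract class SiteControllerBase : Controller
    {
        // Common behaviour every site gets for free, e.g. centralised error handling.
        protected override void OnException(ExceptionContext filterContext)
        {
            // log the exception, pick a friendly error view, etc.
            base.OnException(filterContext);
        }
    }
}

// In an individual site project that references the library:
namespace SiteA.Controllers
{
    public class HomeController : Company.Web.Common.SiteControllerBase
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}
```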
Depending on how you reference the common library (compiled dll/linked project reference) you either get a stable link to a specific version or instant updates across all projects. I'd personally prefer to reference the common dll rather than the project, since this allows project A to be using an older version than project B. Updating A to the new version is trivial, but it gives you a level of separation so that if B requires breaking changes, you don't have to waste resources to keep A working.
Another added bonus is that checking out an old version from source control would still be guaranteed to work as it would be tied to the version of the library in use at the time it was created.

Package up a WCF client into an assembly for consumers

I'm hosting a WCF service within an organisation, and I was hoping to build a client into an assembly DLL to package up and give to anyone who wants to consume the service.
I could create a class library and simply add a service reference, build that, and distribute that. Any recommendations on an alternative approach?
I did something similar in my previous organization. I also had the additional requirement that the library should be COM visible so that a legacy C++ application could consume the API.
I went so far as to not require the client to provide any WCF configuration, besides passing a bunch of parameters through the API (service URL, timeouts, etc.). The WCF channel was configured programmatically. I was in a very tightly controlled environment, where I knew exactly who the clients of the library were and could influence their design. This approach worked for me, but as they say, your mileage may vary.
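The shape of it was roughly the following (the contract and names here are invented): the consumer passes the URL and timeout in, and the binding is built in code rather than read from a config file.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IRocketService
{
    [OperationContract]
    string GetStatus(int rocketId);
}

public sealed class RocketServiceClient : IDisposable
{
    private readonly ChannelFactory<IRocketService> _factory;

    public RocketServiceClient(Uri serviceUrl, TimeSpan timeout)
    {
        // Binding and endpoint built entirely from the parameters the caller supplies.
        var binding = new BasicHttpBinding
        {
            SendTimeout = timeout,
            ReceiveTimeout = timeout
        };
        _factory = new ChannelFactory<IRocketService>(binding, new EndpointAddress(serviceUrl));
    }

    public string GetStatus(int rocketId)
    {
        var channel = _factory.CreateChannel();
        try
        {
            return channel.GetStatus(rocketId);
        }
        finally
        {
            ((IClientChannel)channel).Close();
        }
    }

    public void Dispose()
    {
        _factory.Close();
    }
}
```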
At my prior job we always did this. We'd have a library project that contained nothing but an svcutil-generated proxy and the config file to go with it.
This way our other projects could utilize this library and we'd only ever have one proxy generation. Great in a SOA model.
In your case you could then distribute this assembly if you wanted. Personally, I find the benefit greater for internal cases you control, but I suppose if you really felt charitable, distributing a .NET version for clients to use would be beneficial.
What's the service host going to be? If it's going to be an HTTP based one, putting it into an ASP.NET app makes a lot of sense. Otherwise, yeah, fire up the class library.
UPDATE based on comment
The client packaging really depends on what the receiver is going to do with it. If you're targeting developers, or existing in-house apps, then the library is a great option (though I'd probably wrap it in an .msi to make the experience familiar for users). If there needs to be UI then obviously you'll want to think about an appropriate UI framework (WPF, Silverlight, WinForms, etc)
I would simply provide a library that contains all the required contracts. That's it - they can write their own client proxy.
Do your users know how to use WCF? If not, include a proxy class that instantiates a channel and calls the service.
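For example, a small ClientBase<T>-derived proxy shipped alongside the contract keeps the channel plumbing out of sight (the contract name here is hypothetical):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IRocketService
{
    [OperationContract]
    string GetStatus(int rocketId);
}

public class RocketServiceProxy : ClientBase<IRocketService>, IRocketService
{
    public RocketServiceProxy(Binding binding, EndpointAddress address)
        : base(binding, address)
    {
    }

    public string GetStatus(int rocketId)
    {
        // Consumers just call this; they never see the underlying channel.
        return Channel.GetStatus(rocketId);
    }
}
```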
I don't really see any point in providing an assembly that just includes code generated by svcutil. Why not just give your users a WSDL and then they can generate that code themselves? Distributing boilerplate doesn't seem like a great idea.

Easiest way to refactor package in C# or Java?

I'm very annoyed by namespace/package refactoring in C# and Java. If you reference, in many classes, a class from a common package that is used in many independent projects, and you decide to move that package so it becomes a child of its current parent package, you have to modify all clients, just because you cannot use generic imports like this
import mypackage.*
which would allow refactoring without impacting clients.
So how do you manage to do refactoring when the impact can be so big for such a small change?
What if it's client code that's not under my control? Am I stuck?
Use an IDE with support for refactoring. If you move a java file in Eclipse, all references are updated. Same for rename, package name changes, etc. Very handy.
It sounds like you're asking about packages that are compiled and deployed to other projects as, for instance, a jar file. This is one reason why getting your API as correct as possible is so important.
How to Design a Good API and Why it Matters
I think you could deprecate the existing structure and modify each class to be a wrapper or facade over the new refactored class. This might give you the flexibility to continue improving the new structure while slowly migrating projects that use the old code.
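In C# that could look roughly like this (namespaces and members invented for the example): the old type stays where clients expect it, marked obsolete, and simply delegates to the relocated class.

```csharp
using System;

namespace Company.Common.Util          // new home after the refactoring
{
    public class PriceCalculator
    {
        public decimal Total(decimal net, decimal taxRate)
        {
            return net * (1 + taxRate);
        }
    }
}

namespace Company.Util                 // old namespace kept alive for existing clients
{
    [Obsolete("Moved to Company.Common.Util.PriceCalculator.")]
    public class PriceCalculator
    {
        private readonly Company.Common.Util.PriceCalculator _inner =
            new Company.Common.Util.PriceCalculator();

        public decimal Total(decimal net, decimal taxRate)
        {
            return _inner.Total(net, taxRate);
        }
    }
}
```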
Imagine someone doing an import like import com.*: if it worked the way you want it to, it would pull in anything and everything in the com package, which means zillions of classes would be imported, and then you would complain about why it is so slow and why it requires so much memory...
In your case, if you use an IDE, it will take care of most of the work and be very easy, but you will still need to deploy new executables to your clients as well, if your application architecture requires it.

Proper API Design for Version Independence?

I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API.
The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps.
I have a way to solve this problem, but I'm not sure it's the ideal way, so I was looking for other ideas.
My plan would be to essentially have three dlls.
APIServer_1_0.dll - this would be the dll with all of the dependencies.
APIClient_1_0.dll - this would be the dll our developers would actually reference. No references to any of the mess in our solution.
APISupport_1_0.dll - this would contain the interfaces which would allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to.
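Concretely, the loading half of that plan might look something like this sketch (assembly, namespace and type names are placeholders):

```csharp
using System;
using System.Reflection;

// Lives in APISupport_1_0.dll - the only assembly third parties compile against.
public interface IApiServer
{
    string Execute(string command);
}

// Lives in APIClient_1_0.dll - no references to the main solution; it loads the
// "server" piece by name at runtime and hands back the interface.
public static class ApiServerFactory
{
    public static IApiServer Create()
    {
        Assembly server = Assembly.Load("APIServer_1_0");
        Type implementation = server.GetType("APIServer.ApiServer", throwOnError: true);
        return (IApiServer)Activator.CreateInstance(implementation);
    }
}
```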
I initially arrived at this design because the way in which we do inter-process communication between Windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls).
While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
You may wish to take a look at the Microsoft Managed Add-in Framework [MAF] and the Managed Extensibility Framework [MEF] (links courtesy of Kent Boogaart). As Kent states, the former is concerned with isolation of components, while the latter is primarily concerned with extensibility.
In the end, even if you do not leverage either, some of the concepts regarding API versioning are very useful, i.e. versioning interfaces and then providing inter-version support through adapters.
Perhaps a little overkill, but definitely worth a look!
Hope this helps! :)
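To make the interface-versioning-plus-adapters idea a little more concrete, here's a hedged sketch (the interfaces and members are invented):

```csharp
public interface IReportService_1_0
{
    string RunReport(string name);
}

public interface IReportService_1_1
{
    // v1.1 added a format parameter.
    string RunReport(string name, string format);
}

// Adapter that lets a host which only knows v1.1 keep serving old v1.0 consumers.
public class ReportService10Adapter : IReportService_1_0
{
    private readonly IReportService_1_1 _newer;

    public ReportService10Adapter(IReportService_1_1 newer)
    {
        _newer = newer;
    }

    public string RunReport(string name)
    {
        return _newer.RunReport(name, "pdf");   // default chosen by the adapter
    }
}
```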
Why not just use the Assembly versioning built into .NET?
When you add a reference to an assembly, just be sure to check the 'Require specific version' checkbox on the reference. That way you know exactly which version of the assembly you are using at any given time.
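The version that check pins against is the one declared in the API assembly's AssemblyInfo.cs, for example (the numbers here are just placeholders):

```csharp
using System.Reflection;

// The version a 'specific version' reference is checked against.
[assembly: AssemblyVersion("1.0.0.0")]
// Informational file version; not used for binding.
[assembly: AssemblyFileVersion("1.0.3.0")]
```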
