SAML SSO authentication with only RESTful calls and no libraries - C#

We have a well-established .NET Framework based SaaS web application which supports OAuth2 with multiple providers, and we want to implement the same with SAML/SAML2. The current OAuth2 implementation does all the necessary steps using RESTful HTTP GET and POST calls plus the one redirect.
Looking at existing libraries, they all seem to want changes in "startup.cs" (or wherever your startup code is) and they add code to web.config, so they are building themselves into the core of your project. We don't like libraries that do that: we don't want to run code that isn't needed, and our code has to pass some pretty deep auditing for defense apps. So we'd prefer not to use external libraries from NuGet, although we would be prepared to include some libraries from MS if they do not interfere with the app infrastructure (startup and web.config).
Only a small minority of our customers will use SAML, since most will use direct logins or OAuth2 providers.
So how can we add SAML with pure RESTful calls/redirects and without third-party libraries? I hoped to find examples where you build up a piece of XML and send it via HTTP POST or via a query string, but after several hours I am not finding this.
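To illustrate the kind of example I was hoping to find, here is my rough understanding of the HTTP-Redirect binding for the AuthnRequest. Entity IDs and URLs are placeholders, and this deliberately ignores request signing (the SigAlg/Signature query parameters) and all of the response validation:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class SamlRedirect
{
    // Per the HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
    public static string BuildRedirectUrl(string idpSsoUrl)
    {
        string authnRequest =
            "<samlp:AuthnRequest xmlns:samlp=\"urn:oasis:names:tc:SAML:2.0:protocol\"" +
            " xmlns:saml=\"urn:oasis:names:tc:SAML:2.0:assertion\"" +
            $" ID=\"_{Guid.NewGuid():N}\" Version=\"2.0\"" +
            $" IssueInstant=\"{DateTime.UtcNow:yyyy-MM-ddTHH:mm:ssZ}\"" +
            " AssertionConsumerServiceURL=\"https://sp.example.com/saml/acs\">" +
            "<saml:Issuer>https://sp.example.com</saml:Issuer>" +
            "</samlp:AuthnRequest>";

        byte[] xml = Encoding.UTF8.GetBytes(authnRequest);
        using (var buffer = new MemoryStream())
        {
            // DeflateStream emits raw DEFLATE (no zlib header), which is what SAML expects.
            using (var deflate = new DeflateStream(buffer, CompressionMode.Compress, leaveOpen: true))
                deflate.Write(xml, 0, xml.Length);

            string encoded = Uri.EscapeDataString(Convert.ToBase64String(buffer.ToArray()));
            return idpSsoUrl + "?SAMLRequest=" + encoded;
        }
    }
}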
TIA

SAML can actually be quite complicated to implement properly: you WILL want library support for this. Moreover, I see this:
we don't want to run code that isn't needed [because] our code has to pass some pretty deep auditing for defense apps
Mostly, this is the right attitude from a security standpoint. There are too many examples now of a threat actor getting malicious code into upstream libraries... thankfully not as many in the .NET ecosystem, but enough generally to be a legitimate security concern. But for authentication specifically, avoiding libraries is exactly the wrong approach. It's actually significantly worse for security to try to do this on your own.
In fact, using a library for authentication should make security auditors happier, because it's just so easy to implement authentication code that seems to work correctly — passes all your unit and integration tests and looks like it's working — where in fact you have subtle bugs that result in finding out a year later you were hacked six months ago. Using a purpose-built, battle-tested library helps you avoid that scenario.
I've been saying this since at least 2010. The idea is that when a flaw is discovered (because flaws always exist), it's found in someone else's product first. But because they're using the same library, a patch is created, distributed, and applied before your own product ever has a chance to be breached. Unfortunately, the reality is that these audits are often as much about ticking the box as cheaply and painlessly as possible as they are about actually improving things, so I understand why it can be hard to get this message through.
Personally, I have successfully implemented SAML authentication using the ITfoxtec library for two different apps. However, it's not something I work on often enough to have broad experience to comment on the relative quality of this option vs alternatives other than to say I was able to make it work.
The main thing going for this library is that the source code is available and fairly easy to browse and understand, which should further make auditors happy. Moreover, because the source is available under a very permissive license, you can fork the project and include the code directly as part of your main build. This should make it easier still to audit.

Just to add to what Joel said: I think you'll find it more cost-effective to use a third-party library (either open source or our commercial library) than to implement something yourself. Also, security is important, obviously, so it's much safer to use a battle-hardened, proven implementation that's been used in production by many others for many years.
I'm not sure about other libraries, but our SAML library for ASP.NET doesn't require any code in the start-up class or changes to web.config. Perhaps you're referring to .NET 6 etc., where it's common practice to hook into the dependency injection system in the start-up code. In our library, the SAML code is executed only when required as part of a SAML SSO flow, so there's no additional overhead for non-SAML authentication.
The SAML protocol is more involved than OAuth2. I suggest taking a look at the SAML v2.0 specification documents to get a feel for the effort involved.

Related

Is it good practice to create an API to handle calls to multiple APIs for integration

Brief overview: I am working with Visual Studio 2017 and .NET Core 2.1. I am about to begin development on a website which will handle integrating three existing pieces of software which our company uses.
I have created WCF services already for use by some of the applications I have developed, but for this project, there are multiple APIs which I will be utilizing. It's quite possible that I may need to use these APIs in other projects down the road.
I apologize if this is an opinionated question, but here it goes: do you think it is good design to develop one central API which wraps all the calls to the integrated systems' APIs? My thinking was that this way I only have to write the code for making the desired API calls once, and I can then add to this API as I see fit moving forward, e.g. when another system's API is needed.
Please feel free to give advice; I am still learning and appreciate constructive input. I am using this to get started on building my API with .NET Core 2.1: https://learn.microsoft.com/en-us/aspnet/core/tutorials/first-web-api?view=aspnetcore-2.1
If the APIs are related, then yes, it does make sense to create a single assembly that calls them for you and deals with the responses. You'd then consume that assembly in all your other apps.
However, if the APIs are completely different and require different set-ups, then it may make more sense to create an assembly wrapper for each, to keep the concerns separate.
You don't want to confuse the APIs. If anyone were to look at your code or assembly, they should be able to tell that it relates to what its name describes, without having to guess.
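As a rough sketch of the per-API wrapper idea (all type, endpoint, and member names here are hypothetical):

// Hypothetical wrapper assembly: each integrated system gets its own small
// interface, hiding that system's setup and response handling from consumers.
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Invoice
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}

public interface IInvoicingClient
{
    Task<Invoice> GetInvoiceAsync(string invoiceId);
}

public sealed class InvoicingClient : IInvoicingClient
{
    private readonly HttpClient _http;

    // The wrapper owns the HttpClient configuration (base address, auth headers).
    public InvoicingClient(HttpClient http) => _http = http;

    public async Task<Invoice> GetInvoiceAsync(string invoiceId)
    {
        // Consuming apps never deal with the third-party API directly.
        var response = await _http.GetAsync($"invoices/{invoiceId}");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<Invoice>(json);
    }
}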

How would I generate code based on the Google API?

Summary and Question
I'm looking to generate code in C# to avoid significant repetition, wrapping the Google APIs the way they do themselves, as stated on their .NET client library page. Edit: their generator is written in Python, apparently. I will continue to investigate other .NET options.
Where should I focus my attention: CodeDOM, Roslyn, or something else? Should I not be considering code generation at all - and if so, what alternative track should I take to properly handle this situation?
Details
I am working on writing a wrapper for the Google .NET APIs to make a Google API library for PowerShell (for any and all Google APIs). I already have it working for three of the APIs, but since my project handles all of the authentication (and storage thereof) and other things like pagination, I basically have to wrap each API method call to work with my own authentication so that the user doesn't have to worry about it. This leads to a lot of repetitious code encapsulating methods that already exist in the .NET libraries:
public Data.Asp Get(string userKey, int codeId)
{
    // I have to wrap their Get method with my own using GetService(), for example
    return GetService().Asps.Get(userKey, codeId).Execute();
}
Since this is all patterned on information that exists either through the Google Discovery API or through the underlying client libraries, I feel like there should be some way to generate the code and save my hands some trouble.
Some Background and Related Info
On the main page for the Google API .NET Client Libraries it is stated:
The source code for the individual Google APIs is programmatically generated using the Discovery API.
I would like to do something similar, though I have no idea where to focus my time and research. I've looked at CodeDOM (and its inherent limitations) and Roslyn, as well as some differences between the two. I've also checked out T4 Text Templates for Visual Studio.
To be clear, I am not looking to generate code at runtime as I would with something like Reflection; I am looking to generate bits of a library - though I'm not sure if I am looking for active or passive generation yet.
I work at Google on the .NET client libraries (among other things). Your question is pretty far reaching, but here is the general idea:
The metadata describing "most" Google APIs is available through a discovery document, which describes the methods and types the API has.
Client libraries for accessing Google's APIs are then generated, as you point out, by a Python library. (Using Django as a templating language, specifically.)
Once the code is generated for each Google API, we invoke MSBuild, package the binaries, and deploy them to NuGet.
As for your specific question about how to generate code, I would recommend you build two separate components: the first reads and parses the discovery document, and the second emits the code.
For the actual code gen, here are some personal opinions:
The simplest thing to do would be to use a text-based templating language. (e.g. Django or just write your own.)
CodeDOM is an interesting choice, but probably much more difficult to use than you want. It is how Visual Studio does some of its codegen, e.g. you describe the code and CodeDOM will emit C#, VB, MC++ to match your desires. However, since you are only focusing on C#, the benefit of CodeDOM supporting multiple languages isn't useful.
Roslyn certainly is a cool, new technology, but that probably won't be of much use. I believe Roslyn has the ability to dynamically model code and round-trip the AST to disk. But that is probably overkill, since you aren't trying to build a general-purpose C# codegen solution, and instead just target generating code that matches the API discovery document.
So I would suggest a basic text-based solution for now, and see how far that can get you. If you have any other questions feel free to message me or log an issue on the GitHub issue tracker.
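As a bare-bones illustration of the text-based approach (the DiscoveryMethod shape here is heavily simplified from a real discovery document, and all names are illustrative):

// Sketch: emit one wrapper method per API method from a simplified
// description, using nothing more than string templating.
using System.Text;

public sealed class DiscoveryMethod
{
    public string Name { get; set; }        // e.g. "Get"
    public string ReturnType { get; set; }  // e.g. "Data.Asp"
    public string Parameters { get; set; }  // e.g. "string userKey, int codeId"
    public string Arguments { get; set; }   // e.g. "userKey, codeId"
    public string Resource { get; set; }    // e.g. "Asps"
}

public static class WrapperEmitter
{
    public static string Emit(DiscoveryMethod m)
    {
        // Produces a wrapper in the same shape as the hand-written example above.
        var sb = new StringBuilder();
        sb.AppendLine($"public {m.ReturnType} {m.Name}({m.Parameters})");
        sb.AppendLine("{");
        sb.AppendLine($"    return GetService().{m.Resource}.{m.Name}({m.Arguments}).Execute();");
        sb.AppendLine("}");
        return sb.ToString();
    }
}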

Is WSSF wise to use today on a new WCF service layer?

I'm at a customer where I successfully developed and deployed a WCF service layer (compiled against .NET 4.5). It works perfectly and everything is dandy.
However, we just got an additional requirement: I'm supposed to rebuild (or at least redesign) the layer to incorporate WSSF. There's no old functionality that we'd need to integrate with, and all operations in the services are based on executing SPs in a DB.
Should I do that, or is it wiser to argue against it? I'm not certain, because I've never worked with WSSF, and I got virtually no explanation as to why we should use it at this particular workplace (which could mean they don't want us to know, or that they simply don't know themselves).
My worries are based on, but not limited to, the following.
1. The latest release is from August 2010.
2. There's nothing listed in the documentation section.
3. The license seems to be in conflict with commercial activities.
4. WSSF isn't widely used as a technology today (or is it?!).
5. The purpose of WSSF is only to WCF-fy an old service layer (or isn't it?!).
Especially #4 and #5 are not the strongest statements in my arsenal at the moment, so I'll gladly stand corrected should anybody have a few wise words to contribute on the subject.
Short story is that it doesn't look good. From MSDN: Web Service Software Factory 2010:
The Web Service Software Factory is now maintained by the community and can be found on the Service Factory site. This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. Retired: November 2011
1) So, it looks like it's totally being run by the community. However, looking at the discussion forum there aren't many postings and quite a few have no responses.
2) I find it's fairly common for the documentation tab to be empty at CodePlex, but there frequently is documentation elsewhere, just not on the documentation tab.
3) In terms of licensing, Ms-PL is quite permissive, so I wouldn't imagine there would be any issues.
4) Not to belittle it but I don't think it was/is very popular. Definitely not a standard.
5) The intent of the service factory was to provide guidance -- both written and code based. See Web Service Software Factory for a discussion.
WSSF was a tool that incorporated best practices for building WCF services. It's been years since I've used it, but I basically recall a wizard that asked several (actually, lots of) questions about the service (contract), data (model), etc. What it produced was a nicely organized solution with several projects following proper naming conventions, with verbose declarations like adding IsOneWay=true/false to [OperationContract]s, or IsRequired=true/false, Order=n, etc. to [DataMember]s. In other words, it generated the very verbose code that most of us blow off until we need it.
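To give a flavor of that verbosity, the generated contracts looked roughly like this (illustrative names only, not actual WSSF output):

// Illustrative only: the style of verbose, explicit contracts WSSF generated.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract(Namespace = "http://example.com/orders/2010/01")]
public class OrderRequest
{
    [DataMember(IsRequired = true, Order = 1)]
    public string OrderId { get; set; }

    [DataMember(IsRequired = false, Order = 2)]
    public string Notes { get; set; }
}

[ServiceContract(Namespace = "http://example.com/orders/2010/01")]
public interface IOrderService
{
    [OperationContract(IsOneWay = false)]
    string SubmitOrder(OrderRequest request);

    [OperationContract(IsOneWay = true)]
    void LogOrderEvent(string message);
}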
It did more though, such as structuring your solution so that service contracts were in one project, data contracts in another, and the implementation in yet another. It created test projects (I believe). So: a very granular layout of the solution. I remember the simplest of services would result in about 6-7 projects in the solution. It was a little intimidating at first, until you poked through the code it generated.
Another cool feature it had (which at the time many were asking for) was a way to do contract-first development: given existing web service metadata, you could construct a new service solution.
Anyway, once it completed, you essentially just had to provide implementations for the methods. Personally, I never really embraced it for services development, but at the time I appreciated it, and I often referred customers who were new to services development to it, because I knew it would get them off to a proper start.
To comment on your worries though...
1. That's correct, and it is not getting any resources to update it.
2. Actually, there is quite a bit of documentation. Just move over to the Home tab and you will see links to it.
3. Not sure about this one. The code it generates is yours; you still have to compile it, and it's yours to maintain going forward. No different from any other code-generation tool (as far as I know).
4. Nope, it is not. Also, consider the time when this was developed: .NET Framework 2 - 3.x. A lot has been added to WCF since then, and there's also been some new guidance on service development. If you're using some of the newer features added in .NET Framework 3.5 SP1 and beyond (which you probably are), then this definitely is not something I would recommend using.
5. Again, that was one of the nice features (contract-first development), but that really wasn't the main idea. It was a tool to build out the framework for new services too; in fact, new service development was the original motivation of the tool, as I recall. Once you took the time to go through the dialogs, you had a really nice solution to start building on.

What is the best way to expose key classes/methods of my core API to 3rd-party developers?

I have an application that I have designed, and this app has a pretty decent core dll that contains an API that my main view's exe uses. I would like to allow other developers to access this core dll as well but I don't want them to have as much access as me since it would be a security risk. What is the standard way of exposing my core dll? Are there any particular design patterns I should be looking at?
I'm using C#
Edit: my question was a little vague, so here is some clarification.
My program is deployed as a Windows exe which references the core.dll. I want other people to create extensions which get dynamically loaded into my program at startup, by loading DLLs in the /extensions directory. The third-party DLLs will inherit/implement certain classes/interfaces in my core.dll. I only want to give third parties limited access to my core, but I want to give my exe additional access to it.
I should mention that this is the first time I have written a program that imports DLLs. Perhaps this whole method of allowing users to add extensions is wrong.
How do I modify/expose my API for other developers?
Deliberately allowing other developers to work with an API you've built touches on many things, which can be broken into two areas:
Resources (documentation, samples, etc.) that make it easier for them to understand it (yes, basically an SDK).
Architecting, constructing and deploying your solution so that it's easy to actually work with.
Examples include:
By packaging it in a way that suits re-use.
By using naming conventions and member names that others can easily follow.
Documentation, samples.
Providing the source code (as open source) if you're happy for them to modify it.
I would like to allow other developers to access this core dll as well but I don't want them to have as much access as me since it would be a security risk.
Ok, so this gets us right into the second area - the actual solution.
The problem you have is not a trivial one - but it's also quite do-able; I'd suggest:
Looking into existing material on plugins (https://stackoverflow.com/questions/tagged/plugins+.net)
Personally, I've found using attributes and Dependency Inversion to be a great approach (see the sketch after this list).
There's also stuff like the Managed Extensibility Framework which you should consider.
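As a bare-bones sketch of the plugin pattern, assuming an IExtension interface defined in your core assembly (all names are examples):

// Minimal plugin-loading sketch: core.dll defines the contract, the host
// scans the /extensions folder and instantiates anything implementing it.
using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IExtension
{
    string Name { get; }
    void Initialize();
}

public static class ExtensionLoader
{
    public static IExtension[] LoadAll(string folder)
    {
        return Directory.GetFiles(folder, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .Where(t => typeof(IExtension).IsAssignableFrom(t)
                        && !t.IsAbstract
                        && t.GetConstructor(Type.EmptyTypes) != null)
            .Select(t => (IExtension)Activator.CreateInstance(t))
            .ToArray();
    }
}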
The big issue you face is that you're into serious architecture territory - the decisions you make now will have a profound impact on all aspects of the solution over time. So you might not be able to make an informed decision quickly. Still - you have to start somewhere :)
The "design patterns" in terms of an API are more related to things like REST.
I don't want them to have as much access as me since it would be a security risk
Then I would (for the sake of maintenance) layer extra logic on top of the core DLL to prevent this.
The thing is, the "clients" call the API, not the Core DLL.
"How" the API accesses the Core DLL is under your full control. Just only expose operation contracts that you wish.
Since you're using C#, I would look at Microsoft's Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries and use FxCop to enforce many of them (latest version here). This won't be all you'll likely need, but it will help put you in the right direction.
Also, take a look at the freely available distillation of Framework Design Guidelines by the same author.

What am I missing about WCF?

I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head and with each iteration and version I thought their technologies were getting stronger and stronger and looked forward to each release.
However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML.
It just does not work out of the box, and I think it should. We found all of the above issues either while testing ourselves or when our products were out on site.
I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.
I suppose what I'm asking is:
Am I looking at WCF the wrong way?
What strengths does it have over the alternatives?
Under what circumstances should I choose to use WCF?
OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)
Some clarifications
My main pain point with WCF falls into the following areas:
While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
The size of string that can be passed can't be over 8K.
The number of objects that can be passed in a single message is restricted.
Proxies don't automatically recover from failures.
The amount of configuration available is a good thing, but understanding it all, and what to use under which circumstances, can be difficult, especially when deploying software on site with different security requirements etc. Speaking of configuration, we've had to hide lots of ours in a back-end database because security and network people on site were trying to change things in configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
I know the world moves on. I've moved on a number of times over the last (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading.
I just think there should be simpler configuration/deployment options available, easier set-up, and better management for configuration (an SQL config provider maybe, rather than just the web.config/app.config files).
I use WCF all the time now and I share your pain. It seems like it was grossly over-engineered, but we are going to be stuck with it for a long, long time so I'm trying to learn it.
One thing I am certain about: XML sucks. I've had nothing but problems using XML to control it, and have since switched to handling everything via code.
The concerns you listed were:
1. The size of string that can be passed can't be over 8K.
2. The number of objects that can be passed in a single message is restricted.
3. Proxies don't automatically recover from failures.
4. The amount of configuration available is a good thing, but understanding it all, and what to use under which circumstances, can be difficult, especially when deploying software on site with different security requirements etc. Speaking of configuration, we've had to hide lots of ours in a back-end database because security and network people on site were trying to change things in configuration files without understanding them.
5. Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Here's my take:
(1) addressed a valid concern that customers had with ASMX: it was too wide open, with no way to easily control it. The 8K limit is easily lifted if you know where to look. I guess you can count that as a surprise, but it's more of a one-time thing: once you know about it, you can lift it and be done with it forever, if you choose.
(2) is also configurable.
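For example, both limits can be lifted in code rather than config; the values here are illustrative:

using System.ServiceModel;

public static class BindingFactory
{
    public static BasicHttpBinding CreateLargeMessageBinding()
    {
        // Lift the defaults (64 KB max message size, 8192-char strings).
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 10 * 1024 * 1024
        };
        binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;
        return binding;
    }
}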
(3) is known, but there are boilerplate ways to work around it. The StockTrader code, for example, demonstrates a proven pattern you can re-use in your own app. Not sure if this is fixed in WCF for .NET 4.0; I know it was an open request.
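The workaround pattern generally looks something like this (a simplified sketch, not the actual StockTrader code):

// Sketch: don't cache a channel across calls; close it, or abort it if it
// has faulted (Close throws on a faulted channel).
using System;
using System.ServiceModel;

public static class SafeProxy
{
    public static TResult Call<TChannel, TResult>(
        ChannelFactory<TChannel> factory, Func<TChannel, TResult> operation)
    {
        TChannel channel = factory.CreateChannel();
        var comm = (ICommunicationObject)channel;
        try
        {
            return operation(channel);
        }
        finally
        {
            if (comm.State == CommunicationState.Faulted)
                comm.Abort();
            else
                comm.Close();
        }
    }
}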
(4) The config is a beast. This is a concern for a lot of people. The problem here is that WCF is so flexible, and the config for all of that flexibility is exposed through XML files. It can be overwhelming. An approach that seems to work is to take it in small bites, as you need it.
(5) I don't understand.
I vastly prefer ASP.NET MVC and Web API over WCF. If I had to summarize WCF to a developer who was just being introduced to it, I would say, "WCF is a well-meaning attempt to replace over-engineered, Java EE style RPC development." Unfortunately, many of the decisions made require you to become an expert in configuring low-level, unimportant items (message sizes, timeouts, uninteresting protocol elements, etc.) while abstracting away absolutely critical pieces (URL design, parameter serialization, response serialization, etc.). The difference in productivity and aggravation between teams I know using WCF vs. Web API is night and day.
To come clean a little: I have always hated the core concept of .NET Remoting. I feel that developers need a thorough understanding of the resource structure of their application and how these resources are serialized. Furthermore, the use of the "POST" verb for simple data retrieval is worrisome in a read-heavy application that needs to scale.
I'll address the rest of your issues after clarification. In the meantime, I can address your question on when you should choose to use WCF: always.
WCF is the replacement for the old ASMX technologies, including WSE. It is also the replacement for .NET Remoting. It is the only technology upon which high-level communications features in .NET will be based for the foreseeable future.
For example, consider Windows Azure. It was not inevitable that the new concept of "cloud computing" would have its communications aspects covered by WCF. Yet, WCF was flexible enough to be extended to cover those cases, with very little change in code.
If you're having trouble with WCF, then you'd do well to make sure Microsoft knows about it. WCF is the present and future of web service and other service-oriented development in .NET, so they've got a very strong incentive to listen to you and resolve your pain points. Either contact them directly through Connect, or ask questions here on SO (tag with WCF, please), and a lot of people will help you.
Biggest advantage of using WCF from a programmer's point of view: it separates the definition of exposed services (operations, contracts, etc.) from the protocol-specific details, unlike ASMX, where you expose a class as a web service directly in the code using attributes. Using a real example of mine: we were able to easily switch the transport protocol between web services and named pipes, whichever better suited the deployment and performance needs, without changing a line of code.
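In code, the idea looks roughly like this (IStatusService/StatusService are placeholder types):

// Sketch: one service, two transports; the contract and implementation
// are untouched when you add or swap endpoints.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStatusService
{
    [OperationContract]
    string GetStatus(string orderId);
}

public class StatusService : IStatusService
{
    public string GetStatus(string orderId) { return "OK"; }
}

public static class Host
{
    public static void Run()
    {
        var host = new ServiceHost(typeof(StatusService),
            new Uri("http://localhost:8080/status"),
            new Uri("net.pipe://localhost/status"));

        // Each endpoint binds the same contract to a different transport.
        host.AddServiceEndpoint(typeof(IStatusService), new BasicHttpBinding(), "");
        host.AddServiceEndpoint(typeof(IStatusService), new NetNamedPipeBinding(), "");
        host.Open();
    }
}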
WCF is intended for SOA methodologies. Working with it professionally is a nightmare. I delivered a SOA solution using WCF as the tool and, hell, hundreds of configuration settings and hidden tips! My past distributed solutions using old-style web services and Remoting were more stable. I've spent days working out the solution for the error "The underlying connection was closed: An unexpected error occurred", which made no sense to get for one method among four in the same contract. I'm very disappointed. It took me back in time to when .NET was first introduced with lots of promises, and then, when we got hands-on, hell, lots of problems came up!
To address the problem of the maintenance nightmare of application config, standards like UDDI and WS-Discovery exist; WS-Discovery will be supported by WCF in .NET 4.0.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Can you be more explicit? I think you are talking about service behavior configured in code.
You can easily code behavior extensions to configure what you are talking about in a config file instead of in code, BUT I think that if Microsoft didn't do that, there is a good reason.
For example, a service with this behavior:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall, ConcurrencyMode=ConcurrencyMode.Single)]
The implementation knows that the instance is not shared between multiple threads, so it's developed differently than:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
In this case the service implementation has to take care of concurrency problems.
The implementation is coupled with the ServiceBehavior attribute, so moving this behavior into an XML file is not a good idea.
What if you could change an InstanceContextMode.PerCall service to an InstanceContextMode.Single service inside the config file? You'd break the application!
Looking at how you mention XML and SQL, it sounds like you are using WCF to build a web application or an actual web service (a service on the Web, not just SOAP exchange).
It helps to think of WCF as a replacement for .NET Remoting (or DCOM, CORBA, etc.) which also happens to support web services as one of its transports. Interfaces declared in assemblies, the behavior of proxies, certain configuration options and other aspects of the framework that look unnatural and complicated from the perspective of web apps actually do work out of the box for DCOM-style systems of distributed objects.
To answer the question: no, you are not missing anything, and using WCF for web applications is complicated, because WCF is not a framework for building web applications. Probably such a framework could be built on top of it, but I would hate to see WCF itself changed to move into the web realm.
