C# OAuth 2.0 Library - How to Implement the Domain Model

The OAuth 2.0 spec is getting more and more stable (http://tools.ietf.org/html/draft-ietf-oauth-v2), and I will be implementing a C# OAuth 2.0 library for an internal project. I would like to hear opinions on how to design a clear domain model for the library. My major points of concern are:
How should the classes be named? Should every keyword in the spec that describes a concrete concept (or most of them) become a separate class?
How should the namespaces be named? Should every major topic discussed in the specification become a separate namespace (authentication, server, client, security, etc.)?
How should the server and client resources be modeled (as properties inside classes, or as inner classes)?
And many more I'll be listing as they come up...
Anybody with real experience in creating libraries out of specifications (like the numerous IETF specs) would be of monumental help. Pointers to libraries with excellent spec implementations, which could act as a guide, would also be very helpful.
Edit: I checked out the DotNetOAuth CTP, but it obviously doesn't provide a clean model to draw inspiration from.

You are probably on the right track. In general, names for classes and attributes should largely follow the spec, and you should include links to the specification within the XML documentation. By matching the names, a person familiar with the standard can more easily understand what the code is doing.
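For example, a minimal sketch of a spec-named class with a documentation link (the class and member names here are illustrative, not taken from any existing library):

    /// <summary>
    /// Represents an access token issued by the authorization server.
    /// See "Access Token" in the OAuth 2.0 draft:
    /// http://tools.ietf.org/html/draft-ietf-oauth-v2
    /// </summary>
    public class AccessToken
    {
        /// <summary>The token string issued by the authorization server.</summary>
        public string Token { get; set; }

        /// <summary>Token lifetime in seconds, if the server provided one.</summary>
        public int? ExpiresIn { get; set; }
    }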
I would heavily recommend including unit tests for the complete project. Not only will they help you maintain the integrity of each build, they will also expose areas that are not as usable as they should be. For example, if you find yourself having to use a convoluted mess of classes and methods simply to request authentication for something, then you need to refactor it to be easier on the consumer of the library.
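For example, a usability-oriented test can pin down how little code a consumer should need. This is only a sketch: OAuthClient and its members are hypothetical names, and NUnit is just one possible test framework:

    using NUnit.Framework;

    [TestFixture]
    public class AuthorizationRequestTests
    {
        [Test]
        public void CreatingAnAuthorizationRequest_TakesOnlyAFewLines()
        {
            // If this test ever needs a convoluted setup, the public
            // API has become too hard to use and should be refactored.
            var client = new OAuthClient("client-id", "client-secret");
            var request = client.CreateAuthorizationRequest("https://example.com/callback");

            Assert.AreEqual("https://example.com/callback", request.RedirectUri);
        }
    }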
Basically, your priorities should be in this order:
Working code
Ease of use
Documentation
Matching the specification
Other than this, you have freedom to implement it to your personal preference. You may notice that some domains have tons of different libraries that accomplish the same thing in different ways. This is a good thing, because different people like different things. Some people will want a library that mirrors the specification, while others will prioritize good documentation even if the library is harder to use. Others just want something that works with a few lines of code and stays out of their way. It largely depends on your beliefs on the matter. You can't please them all, so just choose a path and run with it.
That said, I would recommend against excessive namespacing. It's much easier for people to write a single using MyOpenAuth directive than to include three different namespaces. Use namespaces where it seems logical, but in general, the concept of open authentication can be considered its own subject domain (under the umbrella of a single namespace). But, it's up to you.

Related

Is there a way to jointly create a RESTful API and one or several SDKs

I have an online service for which I provide a RESTful API. This API is pretty neat and complete, but my clients would like to access it through an SDK. Now, my clients all have different needs in terms of languages: Go, Python, C#, you name it.
However, being lazy, I notice that the abstraction stays the same, and I have the same functions everywhere. Is there a way to automate code generation for all of these SDKs, provided the design model is nice and clean? Would UML be useful, for example? Or would I only need to create a C library matching the API calls and then use some SWIG magic to generate the bindings?
Technologically speaking, I use the Django Rest Framework for the API side, but that should not influence the question.
Of course you can use UML to document your REST API. Since REST is all about resources and their CRUD methods, I would suggest a restricted class diagram as the base of this documentation.
Here is an example with some ideas: [class diagram omitted]
From here it is also easy to build an exporter and generate client APIs in any technology, with some UML parsing and selective generation. It's probably somewhat time-consuming, especially for newcomers, but relatively straightforward.
However, this neat visual API spec is already a great input for API-client developers.
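To illustrate, the generated output on the C# side might look roughly like the sketch below. BookClient and Book are hypothetical names; a real generator would emit one such class per resource in the model:

    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    // One generated client class per resource in the UML model.
    public class BookClient
    {
        private readonly HttpClient _http;

        // Assumes the HttpClient's BaseAddress points at the service.
        public BookClient(HttpClient http) { _http = http; }

        // GET /books/{id} - the CRUD "read" operation from the diagram.
        public async Task<Book> GetAsync(int id)
        {
            var response = await _http.GetAsync($"/books/{id}");
            response.EnsureSuccessStatusCode();
            var json = await response.Content.ReadAsStringAsync();
            return JsonSerializer.Deserialize<Book>(json);
        }
    }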
UPDATE (after comments)
There are a lot of ways to do this in UML, depending on the concrete requirements.
My first idea is to create another package of classes (with a stereotype like REST-client) that would be connected (via dependencies) to the corresponding methods they can execute. Class attributes can be used to store additional info.
Alternatively, you can use a more illustrative approach and show REST clients as UML actors. Here is how it looks: [diagram omitted]
Note that these special elements (actors and REST-client classes) should be clearly separated into another package in the model, and not necessarily displayed on the same diagram as the resources. A traceability matrix (supported by some UML tools) is probably a much better choice for specifying this kind of supplementary information.
If you need more info, please tell me how exactly you would like to handle authentication and permissions.

How to share business concepts across different programming languages?

We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) that communicate with one another across a network.
All the components in the system operate with the same business concepts and communicate with one another in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution to this problem would be simply to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on frameworks like Protocol Buffers or Thrift. These frameworks have an interface definition language in which the business concepts are defined, and the representation of these concepts in C++, C# and Python (together with the serialization logic) is then auto-generated by the framework.
While this solution doesn't have the above problems, it has another drawback: the code generated by these frameworks couples the data structures representing the underlying business concepts together with the code needed to serialize/deserialize those data structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around this by wrapping the auto-generated code in our own classes/interfaces, but that brings us back to the drawbacks of the naive solution.
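(For reference, the wrapping we have in mind looks roughly like this sketch. OrderProto stands in for whatever message class the generator would emit; WriteTo is the Google.Protobuf serialization call:)

    using System.IO;
    using Google.Protobuf;

    // Plain domain type - knows nothing about serialization.
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // All serialization knowledge is confined to this one class.
    public static class OrderSerializer
    {
        public static byte[] Serialize(Order order)
        {
            // OrderProto is the hypothetical generated message type.
            var proto = new OrderProto { Id = order.Id, Customer = order.Customer };
            using (var stream = new MemoryStream())
            {
                proto.WriteTo(stream);
                return stream.ToArray();
            }
        }
    }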
Can anyone recommend a solution that gets around the described problems?
Lev, you may want to look at ICE. It provides an object-oriented IDL with mappings to all the languages you use (C++, Python, and .NET; all .NET languages, not just C#, as far as I understand). Although ICE is a middleware framework, you don't have to follow all its policies.
Specifically, in your situation you may want to define the interfaces of your components in ICE IDL and maintain them as part of the code. You can then generate code as part of your build routine and work from there. Or you can use more of the power that ICE gives you.
ICE supports C++ STL data structures and inheritance, so it should give you a sufficiently powerful formalism to build your system gradually over time with a good degree of maintainability.
Well, once upon a time MS tried to solve this with IDL. Actually, it tried to solve a bit more than defining data structures, but anyway, that's all in the past, because no one in their right mind would go the COM route these days.
One option to look at is SWIG, which is supposed to be able to port data structures as well as actual invocations across languages. I haven't done this myself, but there's a chance it won't couple the serialization and data structures as tightly as protobufs.
However, you should really consider whether the aforementioned coupling is such a bad thing after all. What would be the ideal solution for you? Presumably it's something that does two things: it generates compatible data structures across multiple languages based on one definition, and it also provides the serialization code to stitch them together, but in a separate abstraction layer. The idea being that if one day you decide to use a different serialization method, you could just switch out that layer without having to redefine all your data structures.

So consider that: how realistic is it really to expect to someday switch out only the serialization code without touching the interfaces at all? In most cases the serialization format is the most permanent design choice, since you usually have issues with backwards compatibility, etc. So how much are you willing to pay right now in development cost in order to be able to theoretically pull that off in the future?
Now let's assume for a second that such a tool exists, one which separates data structure generation from serialization. And let's say that after two years you decide you need a completely different serialization method. Unless this tool also supports pluggable serialization formats, you would need to develop that layer anyway in order to stitch your existing structures to the new serialization solution, and that's about as much work as just choosing a new package altogether. So the only real viable solution that would answer your requirements is something that not only supports data type definition and code generation across all your languages, and is not only serialization agnostic, but also has a ready-made implementation of that future serialization format you would want to switch to. Because if it's only agnostic to the serialization format, you'd still have the task of implementing it on your own, in all languages, which isn't really less work than redefining some data structures.
So my point is that there's a reason serialization and data type definition so often go together: it's simply the most common use case. I would take a long look at what exactly you wish to achieve with the abstraction level you require, think about how much work developing such a solution would entail, and decide whether it's worth it. I'm certain there are tools that do this, btw; probably the expensive proprietary kind that cost $10k per license. The same argument applies there, in my opinion; it's probably just over-engineering.
All the components in the system operate with the same business concepts and communicate with one another in terms of these concepts.
If I understand you correctly, you have split up your system into different parts that communicate through well-defined interfaces. But your interfaces share data structures you call "business concepts" (hard to understand without seeing an example), and since those interfaces have to be built for all three of your languages, you have problems keeping them in sync.
When keeping interfaces in sync becomes a problem, it seems obvious that your interfaces are too broad. There are different possible reasons for that, with different solutions.
Possible reason 1: you overgeneralized your interface concept. If that's the case, redesign here: throw the generalization overboard and create interfaces that are only as broad as they have to be.
Possible reason 2: the parts written in different languages are not dealing with separate business cases; you may have a "horizontal" partition between them, but not a vertical one. If that's the case, you cannot avoid the broadness of your interfaces.
Code generation may be the right approach here if reason 2 is your problem. If existing code generators don't suit your needs, why not write your own? Define the interfaces, for example, as classes in C#, introduce some meta attributes, and use reflection in your code generator to extract the information again when generating the corresponding C++, Python, and also the "real-to-be-used" C# code. If you need different variants with or without serialization, generate them too. A working generator should not be more effort than a couple of days (YMMV depending on your requirements).
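A minimal sketch of that attribute-plus-reflection idea (BusinessConceptAttribute and the console output are illustrative; a real generator would emit C++/Python source instead of printing):

    using System;
    using System.Linq;
    using System.Reflection;

    // Hypothetical marker for types the generator should export.
    [AttributeUsage(AttributeTargets.Class)]
    public class BusinessConceptAttribute : Attribute { }

    [BusinessConcept]
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    public static class GeneratorStub
    {
        // Walks an assembly and describes each marked type; a real
        // generator would write target-language source files here.
        public static void Describe(Assembly assembly)
        {
            var concepts = assembly.GetTypes()
                .Where(t => t.GetCustomAttribute<BusinessConceptAttribute>() != null);

            foreach (var type in concepts)
            {
                Console.WriteLine(type.Name);
                foreach (var prop in type.GetProperties())
                    Console.WriteLine($"  {prop.PropertyType.Name} {prop.Name}");
            }
        }
    }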
I agree with Tristan Reid (wrapping the business logic).
Actually, some months ago I faced the same problem, and then I incidentally discovered the book "The Art of Unix Programming" (freely available online). What grabbed my attention was the philosophy of separating policy from mechanism (i.e. interfaces from engines). Modern programming environments such as the .NET platform try to integrate everything under a single domain. In those days I was asked to develop a web application that had to satisfy the following requirements:
It had to be easily adaptable to future trends in user interfaces without having to change the core algorithms.
It had to be accessible through different interfaces: web, command line, and desktop GUI.
It had to run on Windows and Linux.
I opted to develop the mechanism (the engines) completely in C/C++, using native OS libraries (POSIX or WinAPI) and good open-source libraries (PostgreSQL, XML, etc.). I developed the engine modules as command-line programs and eventually implemented two interfaces: web (with PHP and a jQuery framework) and desktop (.NET Framework). Both interfaces had nothing to do with the mechanisms: they simply launched the core module executables by calling functions such as CreateProcess() on Windows or fork() on UNIX, and used pipes to monitor their processes.
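In C#, the launching side of that split could look roughly like this (the engine executable name and arguments are placeholders):

    using System.Diagnostics;

    // The UI layer knows nothing about the engine's internals: it just
    // starts the executable and reads its output over a pipe.
    var psi = new ProcessStartInfo
    {
        FileName = "core-engine",          // placeholder engine executable
        Arguments = "--process-order 42",  // placeholder arguments
        RedirectStandardOutput = true,
        UseShellExecute = false
    };

    using (var engine = Process.Start(psi))
    {
        string result = engine.StandardOutput.ReadToEnd();
        engine.WaitForExit();
    }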
I'm not saying the Unix programming philosophy is good for all purposes, but I have been applying it since then with good results, and maybe it will work for you too. Choose a language for implementing the mechanism, and then use another that makes interface design easy.
You can wrap your business logic as a web service and call it from all three languages - just a single implementation.
You could model these data structures using a UML modeling tool (Enterprise Architect comes to mind, as it can generate code for all three languages) and then generate code for each language directly from the model.
Though I would look closely at a previous comment about using XSD.
I would accomplish that by using some kind of meta-information about your domain entities (either XML or a DSL, depending on complexity) and then going for code generation for each language. That would reduce (manual) code duplication.

What is the best way to expose key classes/methods of my core API to 3rd-party developers?

I have an application that I have designed, and this app has a pretty decent core DLL containing an API that my main view's exe uses. I would like to allow other developers to access this core DLL as well, but I don't want them to have as much access as I do, since that would be a security risk. What is the standard way of exposing my core DLL? Are there any particular design patterns I should be looking at?
I'm using C#
Edit: my question was a little vague so here is some clarification
My program is deployed as a Windows exe which references core.dll. I want other people to create extensions that are dynamically loaded into my program at startup, by loading DLLs found in the /extensions directory. The 3rd-party DLLs will inherit from/implement certain classes/interfaces in core.dll. I only want to give 3rd parties limited access to my core, but I want to give my exe additional access to it.
I should mention that this is the first time I have written a program that imports DLLs. Perhaps this whole method of allowing users to add extensions is wrong.
How do I modify/expose my API for other developers?
Deliberately allowing other developers to work with an API you've built touches on many things, which can be broken into two areas:
Resources (documentation, samples, etc.) that make it easier for them to understand (yes - basically an SDK).
Architecting, constructing and deploying your solution so that it's easy to actually work with.
Examples include:
By packaging it in a way that suits re-use.
By using naming conventions and member names that others can easily follow.
Documentation, samples.
Providing the source code (as open source) if you're happy for them to modify it.
I would like to allow other developers to access this core DLL as well, but I don't want them to have as much access as I do, since that would be a security risk.
OK, so this gets us right into the second area: the actual solution.
The problem you have is not a trivial one, but it's also quite doable; I'd suggest:
Looking into existing material on plugins (https://stackoverflow.com/questions/tagged/plugins+.net); a minimal loading sketch follows this list.
Personally, I've found using attributes and Dependency Inversion to be a great approach.
There's also the Managed Extensibility Framework, which you should consider.
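To make the plugin idea concrete, here is a rough sketch of the /extensions scheme described in the question; IPlugin is a placeholder for whatever interface your core.dll would expose to 3rd parties:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    // Placeholder contract that extension authors would implement.
    public interface IPlugin
    {
        void Initialize();
    }

    public static class PluginLoader
    {
        // Loads every DLL in the extensions directory and instantiates
        // each concrete type implementing IPlugin (assumes a
        // parameterless constructor).
        public static IEnumerable<IPlugin> LoadAll(string extensionsDir)
        {
            foreach (var dll in Directory.GetFiles(extensionsDir, "*.dll"))
            {
                var assembly = Assembly.LoadFrom(dll);
                var pluginTypes = assembly.GetTypes()
                    .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

                foreach (var type in pluginTypes)
                    yield return (IPlugin)Activator.CreateInstance(type);
            }
        }
    }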
The big issue you face is that you're into serious architecture territory - the decisions you make now will have a profound impact on all aspects of the solution over time. So you might not be able to make an informed decision quickly. Still - you have to start somewhere :)
The "design patterns" in terms of an API are more related to things like REST.
I don't want them to have as much access as I do, since that would be a security risk.
Then I would (for the sake of maintenance) layer extra logic on top of the core DLL to prevent this.
The thing is, the "clients" call the API, not the core DLL.
"How" the API accesses the core DLL is under your full control: just expose only the operation contracts that you wish.
Since you're using C#, I would look at Microsoft's Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, and use FxCop to enforce many of them (latest version here). This won't be everything you'll likely need, but it will help put you in the right direction.
Also, take a look at the freely available distillation of Framework Design Guidelines by the same author.

.NET security mechanism to restrict access between two Types in the same project?

Question
Is there a mechanism in the .NET Framework to hide one custom Type from another without using separate projects/assemblies? I'm not talking about access modifiers that hide members of a Type from another Type - I mean hiding the Type itself.
Background
I'm working in an ASP.NET Website project, and the team has decided not to use separate project assemblies for different software layers. I'm therefore looking for a way to have, for example, a DataAccess/ folder whose classes I disallow from accessing other Types in the same ASP.NET Website project. In other words, I want to fake the layers and have some kind of security mechanism around each layer to prevent it from accessing another.
More Info and Details ...
Obviously there's no way to enforce this restriction using language-specific OO keywords, so I am looking for something else, for example: maybe a permission framework or code-access mechanism, maybe something that uses metadata like Attributes, even something that restricts one namespace from accessing another. I'm unsure of the final form it might take.
If this were C++, I'd likely be using friend to craft a solution, which doesn't translate to C#'s internal in this case, although the two are often compared.
I don't really care whether the solution actually hides Types from each other or just makes them inaccessible; however, I don't want to lock down one Type from all others, which is another reason access modifiers are not a solution. A runtime or design-time answer will suffice. I'm looking for something easy to implement, otherwise it's not worth the effort ...
You could use NDepend to do this:
http://www.ndepend.com/
NDepend can allow you to enforce "layering" rules by specifying that certain namespaces should not reference each other. You then plug NDepend and the rule set into your automated build, and it will fail the build (with a full report) if there are any misdemeanours.
In this way you can enforce logical software layering concepts within an assembly without having to use project structures to do it physically.
Update
I answered the question late last night, and rather literally, i.e. how you can directly solve the stated problem. Although a tool can be used to solve the issue, developing in one project across the whole team is more than likely going to be a pretty miserable experience as the project grows:
Unless people are incredibly disciplined, the build will keep breaking on layering violations.
There will be source control merge thrashing on the VS project file - not pleasant.
Your unit of re-use is very large and undefined if you want to share assemblies with other applications/projects you are developing. This could lead to very undesirable coupling.
Although I do not advocate having lots of tiny assemblies, a sensible number defined around core concepts is very workable and desirable e.g. "UI", "data access", "business logic", "common library" and "shared types".
Nothing out of the box; there may be some 3rd-party tools that you can use to kludge some rules together, based perhaps on namespaces etc. Something like a custom FxCop rule...

Writing an API in C# for My Application

I'm going to write an application, but I've never designed an application programming interface for other people to use before. I mean, what kind of design should I create to let people use my methods from the outside world, like an API?
Please, someone show me the way. I'm kind of new to this.
Expose as little as you can. Every bit you publish will come back to you a hundredfold in the next version; keeping compatibility is very hard.
Create abstractions for everything you publish. You will definitely change your internals, but your existing users should stay compatible (a sketch follows at the end of this answer).
Mark everything as internal. Even the main method of your application. Every single method that could be used, will be used.
Test your public API the same way you would test your interfaces: integration tests and so on. Note that your API will be used in unpredictable ways.
Maximize convention over configuration. This is required: even if your API is a single method, you will still need to support it. Conventions just make your life easier.
Sign and strong-name your assemblies; this is good practice.
Resolve as many FxCop and StyleCop errors as possible.
Check that your API complies with the naming guidelines of your platform.
Provide as many examples as you can, and remember that most of the usage of your API will be Ctrl+C and Ctrl+V from these examples.
Try to provide documentation, and check that you do not have GhostDoc-style auto-generated documentation. Everybody hates that.
Include information on how to find you.
Do not bother with obfuscation; leaving it out will help both you and your users.
ADDED
Your API should have as few dependencies as you can manage. For example, dependencies on IoC containers should be prohibited; if your code uses one internally, just ILMerge it into your assemblies.
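Here is a minimal sketch of the "abstractions plus internal by default" advice above (all names are illustrative):

    // Published abstraction: the only thing consumers program against.
    public interface IReportGenerator
    {
        string Generate(string templateName);
    }

    // internal implementation: free to change between versions without
    // breaking existing users.
    internal class ReportGenerator : IReportGenerator
    {
        public string Generate(string templateName) =>
            "rendered: " + templateName;
    }

    // Single public entry point (a tiny factory).
    public static class ReportApi
    {
        public static IReportGenerator Create() => new ReportGenerator();
    }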
It may not be the most entertaining reading, and certainly not the only reading to do on the subject, but while designing your class library (your API), do check in with the Design Guidelines for Developing Class Libraries every now and then; it's a good idea to have a design that corresponds a bit with the .NET Framework itself.
Make the methods you want to expose to the outside world public.
I found this presentation to be particularly insightful:
How to Design a Good API and Why it Matters
http://lcsd05.cs.tamu.edu/slides/keynote.pdf
One way to do it is to create a DLL with your main functionality that others will use, and an EXE that calls the methods in the DLL. If you want your application to support plug-ins, have a look at the System.AddIn namespace.
If you want to see what's new in this area, check out the Managed Extensibility Framework. It's a newer, unified approach to exposing features for add-ins and other extensibility/modularity needs.
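A small sketch of MEF's attributed model (the IExtension contract and the plugin directory are illustrative; Export, ImportMany, DirectoryCatalog and CompositionContainer are the actual System.ComponentModel.Composition types):

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // Contract shared between the host and extension authors.
    public interface IExtension
    {
        string Name { get; }
    }

    // An extension assembly exports its implementation...
    [Export(typeof(IExtension))]
    public class SampleExtension : IExtension
    {
        public string Name { get { return "sample"; } }
    }

    // ...and the host discovers and imports every matching export.
    public class Host
    {
        [ImportMany]
        public IEnumerable<IExtension> Extensions { get; set; }

        public void Compose(string pluginDir)
        {
            var catalog = new DirectoryCatalog(pluginDir);
            using (var container = new CompositionContainer(catalog))
            {
                container.ComposeParts(this);
            }
        }
    }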
