I am creating a Xamarin Android application and I want to use DocumentDB, but unfortunately it's impossible to use the Microsoft.Azure.DocumentDB NuGet package there, and it seems the only way is to use REST.
How can I use REST to read, update, and add data to DocumentDB with C#?
There is a giant knowledge base about this. You should read it:
https://msdn.microsoft.com/en-us/library/azure/dn781481.aspx
They also reference a GitHub repository that contains examples of how to interact with that REST interface. It's really easy - just start out by copying chunks of the code and adjusting them to your needs:
https://github.com/Azure/azure-documentdb-dotnet/tree/master/samples/rest-from-.net
The code is well commented and explains all the steps in detail.
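For a feel of what those samples boil down to, here is a minimal sketch of listing documents with HttpClient. The account name, database/collection ids, master key and API version are placeholders, and the auth-header construction follows the hashing scheme described in the linked documentation - treat it as a starting point, not a drop-in client.

    // Minimal sketch: list the documents in a collection via the DocumentDB REST API.
    // Account name, database/collection ids and the master key below are placeholders.
    using System;
    using System.Net.Http;
    using System.Security.Cryptography;
    using System.Text;
    using System.Threading.Tasks;

    public static class DocumentDbRest
    {
        public static async Task<string> ListDocumentsAsync()
        {
            const string account = "your-account";                // placeholder
            const string masterKey = "your-base64-master-key";    // placeholder
            const string resourceLink = "dbs/mydb/colls/mycoll";  // placeholder ids

            string utcDate = DateTime.UtcNow.ToString("r");
            string auth = MakeAuthHeader("GET", "docs", resourceLink, utcDate, masterKey);

            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("x-ms-date", utcDate);
                client.DefaultRequestHeaders.Add("x-ms-version", "2015-08-06");
                client.DefaultRequestHeaders.Add("Authorization", auth);

                var response = await client.GetAsync(
                    "https://" + account + ".documents.azure.com/" + resourceLink + "/docs");
                return await response.Content.ReadAsStringAsync();
            }
        }

        // The hashed token scheme from the REST auth docs: HMAC-SHA256 over
        // "{verb}\n{resourceType}\n{resourceLink}\n{date}\n\n", lower-cased where noted.
        static string MakeAuthHeader(string verb, string resourceType, string resourceLink,
                                     string date, string base64Key)
        {
            string payload = verb.ToLowerInvariant() + "\n"
                           + resourceType.ToLowerInvariant() + "\n"
                           + resourceLink + "\n"
                           + date.ToLowerInvariant() + "\n\n";
            using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))
            {
                string sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
                return Uri.EscapeDataString("type=master&ver=1.0&sig=" + sig);
            }
        }
    }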
As part of moving from WCF to gRPC, I am dealing with NetDataContractSerializer, which is used for serializing objects on the client side and deserializing them on the server side. Both client and server share the same DLL with the types used in communication.
As part of the client app update process, the current version of the shared DLL, with new/changed/deleted definitions of the communication objects, is downloaded from the server. The basic communication objects used for the update process never change, so serialization/deserialization during the update works.
I would like to rewrite the existing code as little as possible. I found out that I could replace NetDataContractSerializer with Newtonsoft's Json.NET serialization, as described here:
How to deserialize JSON to objects of the correct type, without having to define the type beforehand? and here: https://www.newtonsoft.com/json/help/html/SerializeTypeNameHandling.htm
But I wonder:
1. Is there a better solution in general?
2. Is there a solution based on what is part of .NET Framework 4.8 that will also work in .NET 5.0, without needing to reference a third-party DLL?
3. Is there a binary-serialization alternative that would be more message-size friendly / faster? It is not mandatory for me to have the sent messages in readable form.
On "3", gRPC is actually very open to you swapping out the serializer; you are not bound to protobuf, but gRPC is usually used with protobuf. In fact, you could actually use NetDataContractSerializer, although for reasons I'll come onto: I wouldn't recommend it.
The "how" for this is hard to explain, because often with gRPC people use protoc to generate all the bindings, which hides all the details (and ties you to protobuf).
You might be interested in protobuf-net.Grpc here, which is an alternative way of binding to gRPC (using the Google or Microsoft transports - it is just the bindings that are different), and which is much more comparable to WCF. In fact, it even allows you to borrow WCF's interface/attribute approach, although it doesn't give you like-for-like feature parity with WCF (it is still fundamentally gRPC!).
As for the "how", a getting-started guide is here. The opening line sets the context:
What is it?
Simple gRPC access in .NET Core 3+ and .NET Framework 4.6.1+ - think WCF, but over gRPC
It defaults to protobuf-net, which is an alternative protobuf serializer designed for code-first scenarios, but you can replace the serializer (globally, or for individual types). An example of implementing a custom serializer binding is provided here - note that most of that file is a large comment (the actual serializer code is 8-ish lines at the end). Please read those comments: they're notionally about BinaryFormatter, but every word of them applies equally to NetDataContractSerializer.
I realise you said "without need to reference third-party DLL" - in which case, I mean sure: you could effectively spend a few weeks replicating the most immediately obvious things that protobuf-net.Grpc is doing for you, but ... that doesn't sound like a great use of your time if the NuGet package is simply sitting there ready to use. The relevant APIs are readily available to use with the Google/Microsoft packages, but there is quite a lot of plumbing involved in making everything work together.
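To give a flavour of that code-first style (a rough sketch, not a definitive implementation - the service and type names are invented, and you'd need the protobuf-net.Grpc and Grpc.Net.Client packages):

    // Rough sketch of protobuf-net.Grpc's WCF-like, code-first contract style.
    // IOrderService and the DTOs are invented for illustration.
    using System.Runtime.Serialization;
    using System.ServiceModel;   // the familiar WCF contract attributes
    using System.Threading.Tasks;
    using Grpc.Net.Client;       // Microsoft transport
    using ProtoBuf.Grpc.Client;

    [DataContract]
    public class OrderRequest
    {
        [DataMember(Order = 1)] public int OrderId { get; set; }
    }

    [DataContract]
    public class OrderStatus
    {
        [DataMember(Order = 1)] public string Status { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        Task<OrderStatus> GetStatusAsync(OrderRequest request);
    }

    public static class ClientSketch
    {
        public static async Task CallAsync()
        {
            using (var channel = GrpcChannel.ForAddress("https://localhost:5001"))
            {
                // CreateGrpcService builds a proxy from the interface - no protoc step.
                var service = channel.CreateGrpcService<IOrderService>();
                var status = await service.GetStatusAsync(new OrderRequest { OrderId = 42 });
            }
        }
    }

Note that the [DataMember(Order = n)] values become the protobuf field numbers, so keep them stable between versions.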
I'm currently working with a client that has a Web API project/framework that they use for multiple clients. 98% of the code is reused, but they copy and paste the repository for each new client. After the copy and paste, the only things that really change are the Web.configs, plus every now and then a couple of extensions to the out-of-the-box API. E.g. maybe they stand up a custom module in the API (api/rockets/), or they extend an existing API and add some new methods and actions.
I can't find any way to pull this off with .NET. Currently I'm thinking I could solve this via git with forks, but I was wondering if there was any way to solve it with .NET. Is there a way to extend an existing web project?
The git approach is one way of doing it, but I'd probably go for NuGet packages.
Extract everything that will be common to all solutions, even resources, and make a package.
Take advantage of package versioning and so on. If you get a bug, fix it in the package and simply run a NuGet update in the project, or even set up your continuous integration to rebuild and update whenever a dependency changes.
One option would be to have a single web project for multiple clients that uses "Areas". That way you could turn each area on or off per client, along the lines of the sketch below.
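A hypothetical sketch of that in ASP.NET MVC terms - the ClientFeatures switch is invented here to stand in for whatever per-client configuration you use:

    // Hypothetical sketch: one registration class per optional "area".
    using System.Web.Mvc;

    public class RocketsAreaRegistration : AreaRegistration
    {
        public override string AreaName
        {
            get { return "Rockets"; }
        }

        public override void RegisterArea(AreaRegistrationContext context)
        {
            // Only wire up the routes when this client enables the feature.
            // ClientFeatures is a made-up stand-in for your per-client config.
            if (!ClientFeatures.IsEnabled("Rockets")) return;

            context.MapRoute(
                "Rockets_default",
                "api/rockets/{action}/{id}",
                new { controller = "Rockets", action = "Index", id = UrlParameter.Optional });
        }
    }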
You could also put your common business logic into a NuGet package and import it for each customer. But it would be a really bad idea to fork the business logic every time: what would happen if you found a defect? You'd be forced to fix the same problem in N projects.
The approach we took for this one was really simple. We extracted everything into a common C# library and converted all of the shared stuff to git submodules. We then used Autofac Multitenant to register some client-specific overrides; a sketch of that is below. It was actually really easy.
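For reference, here is roughly what those client-specific overrides look like with Autofac Multitenant - the IPricing service, tenant ids and identification strategy are illustrative:

    // Sketch of client-specific overrides with Autofac Multitenant.
    using Autofac;
    using Autofac.Multitenant;

    public interface IPricing { decimal Quote(); }
    public class DefaultPricing : IPricing { public decimal Quote() { return 100m; } }
    public class ClientAPricing : IPricing { public decimal Quote() { return 80m; } }

    // Decides which tenant the current request belongs to
    // (in reality: read a host name, header, or config value).
    public class StubTenantStrategy : ITenantIdentificationStrategy
    {
        public bool TryIdentifyTenant(out object tenantId)
        {
            tenantId = "client-a"; // illustrative
            return true;
        }
    }

    public static class ContainerSetup
    {
        public static MultitenantContainer Build()
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<DefaultPricing>().As<IPricing>(); // shared default

            var mtc = new MultitenantContainer(new StubTenantStrategy(), builder.Build());
            // Only "client-a" gets the custom implementation; everyone else
            // keeps resolving the shared default.
            mtc.ConfigureTenant("client-a", b => b.RegisterType<ClientAPricing>().As<IPricing>());
            return mtc;
        }
    }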
Summary and Question
I'm looking to generate code in C# to avoid significant repetition, and to wrap the Google APIs in the way they do themselves, as stated on their .NET client library page. Edit: their generator is written in Python, apparently, so I will continue to investigate other .NET options.
Where should I focus my attention, CodeDOM, Roslyn or something else? Should I not be considering Code Generation at all - and if so, what alternative track should I take to properly handle this situation?
Details
I am working on writing a wrapper around the Google .NET APIs to make a Google API library for PowerShell (for any and all Google APIs). I already have it working on three of the APIs, but since my project handles all of the authentication (and storage thereof) and other things like pagination, I have to wrap each API method call to work with my own authentication so that the user doesn't have to worry about it. This leads to a lot of repetitious code encapsulating methods that already exist in the .NET libraries:
    public Data.Asp Get(string userKey, int codeId)
    {
        // I have to wrap their Get method with my own using GetService(), for example
        return GetService().Asps.Get(userKey, codeId).Execute();
    }
Since this is all patterned on information that exists either through the Google Discovery API or through the underlying client libraries, I feel like there should be some way to generate the code and save my hands some trouble.
Some Background and Related Info
On the main page for the Google API .Net Client libraries it is stated:
The source code for the individual Google APIs is programmatically generated using the Discovery API.
I would like to do something similar, though I have no idea where to focus my time and research. I've looked up CodeDOM (and its inherent limitations) and Roslyn, as well as some differences between the two. I've also checked out the T4 text templates for Visual Studio.
To be clear, I am not looking to generate code at runtime as I would with something like Reflection, I am looking to generate bits of a library - though I'm not sure if I am looking for active or passive generation yet.
I work at Google on the .NET client libraries (among other things). Your question is pretty far reaching, but here is the general idea:
The metadata describing "most" Google APIs is exposed through a discovery document, which describes the methods and types the API has.
Client libraries for accessing Google's APIs are then generated, as you point out, by a Python library. (Specifically, using Django as the templating language.)
Once the code is generated for each Google API, we invoke MSBuild, package the binaries, and deploy them to NuGet.
As for your specific question about how to generate code, I would recommend you build two separate components: the first reads and parses the discovery document; the second emits the code.
For the actual code gen, here are some personal opinions:
The simplest thing to do would be to use a text-based templating language. (e.g. Django or just write your own.)
CodeDOM is an interesting choice, but probably much more difficult to use than you want. It is how Visual Studio does some of its codegen, e.g. you describe the code and CodeDOM will emit C#, VB, MC++ to match your desires. However, since you are only focusing on C#, the benefit of CodeDOM supporting multiple languages isn't useful.
Roslyn certainly is a cool, new technology, but that probably won't be of much use. I believe Roslyn has the ability to dynamically model code and round-trip the AST to disk. But that is probably overkill, since you aren't trying to build a general-purpose C# codegen solution, and instead just target generating code that matches the API discovery document.
So I would suggest a basic text-based solution for now, and see how far that can get you. If you have any other questions feel free to message me or log an issue on the GitHub issue tracker.
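To make that two-component split concrete, here is a minimal, heavily simplified sketch: a parser that reads a cut-down discovery document with Json.NET, and an emitter that writes the repetitive wrapper from a plain text template. Real discovery documents have far more structure, and the mapping of JSON-schema types to C# types is skipped entirely:

    // Component 1: parse a (heavily simplified) discovery document.
    // Component 2: emit the repetitive wrapper code from a text template.
    using System.Collections.Generic;
    using System.Linq;
    using Newtonsoft.Json.Linq;

    public class ApiMethod
    {
        public string Resource;  // e.g. "asps"
        public string Name;      // e.g. "get"
        public List<KeyValuePair<string, string>> Parameters =
            new List<KeyValuePair<string, string>>(); // name -> declared type
    }

    public static class DiscoveryParser
    {
        public static IEnumerable<ApiMethod> Parse(string discoveryJson)
        {
            var doc = JObject.Parse(discoveryJson);
            foreach (var resource in (JObject)doc["resources"])
            foreach (var method in (JObject)resource.Value["methods"])
            {
                var m = new ApiMethod { Resource = resource.Key, Name = method.Key };
                var parameters = method.Value["parameters"] as JObject ?? new JObject();
                foreach (var p in parameters)
                    m.Parameters.Add(new KeyValuePair<string, string>(p.Key, (string)p.Value["type"]));
                yield return m;
            }
        }
    }

    public static class WrapperEmitter
    {
        public static string Emit(ApiMethod m)
        {
            string args = string.Join(", ", m.Parameters.Select(p => p.Value + " " + p.Key));
            string names = string.Join(", ", m.Parameters.Select(p => p.Key));
            return "public object " + m.Name + "(" + args + ")\n" +
                   "{\n" +
                   "    return GetService()." + m.Resource + "." + m.Name + "(" + names + ").Execute();\n" +
                   "}";
        }
    }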
I have an online service for which I provide a RESTful API. This API is pretty neat and complete, but my clients would like to access it through an SDK. Now, my clients all have different needs in terms of languages: Go, Python, C#, you name it.
However, being lazy, I notice that the abstraction stays the same and I have the same functions everywhere. Is there a way to automate code generation for all of these SDKs, provided the design model is nice and clean? Would UML be useful, for example? Or would I only need to create a C library matching the API calls and then use some SWIG magic to generate the bindings?
Technologically speaking, I use the Django Rest Framework for the API side, but that should not influence the question.
Of course you can use UML to document your REST API. Since in REST everything revolves around resources and their CRUD methods, I would suggest a restrictive class diagram as the base of this documentation.
Here is an example with some ideas: a restrictive class diagram modelling each resource as a class with its CRUD operations.
From here it is also easy to build an exporter and generate client APIs in any technology, with some UML parsing and selective generation; a rough sketch follows below. It's probably somewhat time-consuming, especially for newbies, but relatively straightforward.
However, this neat visual API spec is already a great input for API-client developers.
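A very rough sketch of that exporter idea - XMI output differs a lot between UML tools, so the element names here ("packagedElement", "ownedOperation") are only indicative of typical UML 2.x exports; check what your tool actually writes:

    // Very rough sketch: read classes/operations from an XMI export and print stubs.
    using System;
    using System.Linq;
    using System.Xml.Linq;

    public static class UmlClientExporter
    {
        public static void Emit(string xmiPath)
        {
            var doc = XDocument.Load(xmiPath);
            // Match by local name to sidestep the tool-specific XML namespaces.
            var classes = doc.Descendants().Where(e => e.Name.LocalName == "packagedElement");
            foreach (var cls in classes)
            {
                string resource = (string)cls.Attribute("name");
                if (resource == null) continue;
                foreach (var op in cls.Elements().Where(e => e.Name.LocalName == "ownedOperation"))
                {
                    string method = (string)op.Attribute("name");
                    Console.WriteLine(
                        "public void " + resource + "_" + method + "() { /* call the " + resource + " resource */ }");
                }
            }
        }
    }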
UPDATE (after comments)
There are a lot of ways you can do this in UML, depending on the concrete requirements.
My first idea is to create another package of classes (with a stereotype like "REST client") that would be connected (via dependencies) to the corresponding methods they can execute. Class attributes can be used to store additional information.
Alternatively, you can use a more illustrative approach and show REST clients as UML actors.
Note that these special elements (actors and REST-client classes) should be clearly separated into another package in the model, and not necessarily displayed on the same diagram as the resources. A traceability matrix (supported by some UML tools) is probably a much better choice for specifying this kind of supplementary information.
If you need more info, please tell me how exactly you would like to handle authentication and permissions.
I'm trying to use MongoDB with my POCOs. Using the mongodb-csharp library (http://github.com/samus/mongodb-csharp), I've got everything working, but I have to set my Ids to Oids, which requires me to reference the mongodb-csharp library from within my entities assembly. This doesn't seem right. I've searched online, but I can't find anyone abstracting out the Oid so that it can be easily replaced. Does anyone have some guidance on this?
Thanks,
Dan
I agree that this isn't right, but if you want to use Oids, then this is what has to happen. I'm one of the devs on the mongodb-csharp driver, and I personally never use Oids; I always use Guids. While a Guid takes more space, I believe it is worth it for portability between different data stores (I use MSSQL for transactional processing) as well as for keeping my dependencies transparent.
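To sketch what keeping the driver out of the entities assembly looks like with Guids: the POCO below carries a plain Guid id, and only the data-access layer would reference mongodb-csharp. The repository bodies are deliberately left as pseudocode comments rather than exact driver calls:

    // Entities assembly: no driver reference at all.
    using System;

    public class Customer
    {
        public Guid Id { get; set; }   // a Guid instead of the driver's Oid type
        public string Name { get; set; }
    }

    // Data-access assembly: the only place the driver is referenced.
    public interface IRepository<T>
    {
        T GetById(Guid id);
        void Save(T entity);
    }

    public class CustomerRepository : IRepository<Customer>
    {
        public Customer GetById(Guid id)
        {
            // e.g. query the collection for { _id: id } using the driver
            throw new NotImplementedException();
        }

        public void Save(Customer entity)
        {
            if (entity.Id == Guid.Empty)
                entity.Id = Guid.NewGuid(); // Guids can be generated client-side
            // e.g. save the mapped document using the driver
            throw new NotImplementedException();
        }
    }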