As part of moving from WCF to gRPC, I am dealing with NetDataContractSerializer, which is used for serializing objects on the client side and deserializing them on the server side. Both client and server share the same DLL containing the types used in communication.
As part of the client app update process, the current version of the shared DLL, with new/changed/deleted definitions of the communication objects, is downloaded from the server. The basic communication objects used for the update process never change, so serialization/deserialization during the update works.
I would like to rewrite the existing code as little as possible. I found out that I could replace NetDataContractSerializer with Newtonsoft's Json.NET serialization, as described in How to deserialize JSON to objects of the correct type, without having to define the type beforehand? and at https://www.newtonsoft.com/json/help/html/SerializeTypeNameHandling.htm.
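For reference, that Json.NET approach boils down to one setting (a minimal sketch; note Json.NET's documented warning about using TypeNameHandling with untrusted input):

    using System.Collections.Generic;
    using Newtonsoft.Json;

    object payload = new List<int> { 1, 2, 3 };

    var settings = new JsonSerializerSettings
    {
        // Embeds .NET type names in the JSON, so the receiving side can
        // reconstruct the original types without declaring them up front.
        // Only safe with trusted data, per the Json.NET documentation.
        TypeNameHandling = TypeNameHandling.All
    };

    string json = JsonConvert.SerializeObject(payload, settings);
    object roundTripped = JsonConvert.DeserializeObject(json, settings);
    // roundTripped is a List<int> again, thanks to the embedded "$type" metadata.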
But I wonder:
1. Is there a better solution in general?
2. Is there a solution based on what is part of .NET Framework 4.8 that will also work in .NET 5.0, without needing to reference a third-party DLL?
3. Is there a binary-serialization alternative that would be more message-size friendly / faster? It is not mandatory for me to have the sent messages in a readable form.
On "3", gRPC is actually very open to you swapping out the serializer; you are not bound to protobuf, but gRPC is usually used with protobuf. In fact, you could actually use NetDataContractSerializer, although for reasons I'll come onto: I wouldn't recommend it.
The "how" for this is hard to explain, because often with gRPC people use protoc to generate all the bindings, which hides all the details (and ties you to protobuf).
You might be interested in protobuf-net.Grpc here, which is an alternative way of binding to gRPC (using the Google or Microsoft transports - it is just the bindings that are different), and which is much more comparable to WCF. In fact, it even allows you to borrow WCF's interface/attribute approach, although it doesn't give you like-for-like feature parity with WCF (it is still fundamentally gRPC!).
For the "how", a getting-started guide is here. The opening line sets the context:
What is it?
Simple gRPC access in .NET Core 3+ and .NET Framework 4.6.1+ - think WCF, but over gRPC
It defaults to protobuf-net, which is an alternative protobuf serializer designed for code-first scenarios, but you can replace the serializer (globally, or for individual types). An example of implementing a custom serializer binding is provided here - note that most of that file is a large comment (the actual serializer code is 8-ish lines at the end). Please read those comments: they're notionally about BinaryFormatter, but every word of them applies equally to NetDataContractSerializer.
I realise you said "without needing to reference a third-party DLL" - in which case, I mean, sure: you could effectively spend a few weeks replicating the most immediately obvious things that protobuf-net.Grpc is doing for you, but that doesn't sound like a great use of your time when the NuGet package is simply sitting there ready to use. The relevant APIs are readily available with the Google/Microsoft packages, but there is quite a lot of plumbing involved in making everything work together.
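To make the "think WCF, but over gRPC" idea concrete: a code-first contract with protobuf-net.Grpc looks roughly like this (a sketch following the getting-started pattern; the names are illustrative):

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.Threading.Tasks;

    // Ordinary DTOs with WCF-style attributes; protobuf-net needs explicit
    // member ordering to define the wire format.
    [DataContract]
    public class MultiplyRequest
    {
        [DataMember(Order = 1)] public int X { get; set; }
        [DataMember(Order = 2)] public int Y { get; set; }
    }

    [DataContract]
    public class MultiplyResult
    {
        [DataMember(Order = 1)] public int Result { get; set; }
    }

    // The service contract is an interface, not a .proto file.
    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        Task<MultiplyResult> MultiplyAsync(MultiplyRequest request);
    }

On the client side, the CreateGrpcService<ICalculator>() extension from ProtoBuf.Grpc.Client then hands you a proxy implementing that interface over a gRPC channel.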
Related
I am considering replacing a .NET WCF duplex endpoint with gRPC. Like most frameworks, WCF allows the data to just be simple contract objects, so what you use over the wire is what you can use in your processing code (if you are OK with that coupling). But with gRPC and GPB (Google Protocol Buffers), it looks like I can't do that, and I have two options. One is to translate my existing .NET objects on both ends of the communication, which adds extra labor/complexity. The other is to use the protocol buffer messages verbatim in business code, which couples business code to the transport technology.
So my question is: what is the best way to use gRPC while avoiding translation or the direct use of buffers in business code?
Both can be valid options: copy or use directly.
In larger/deeper systems it's good to translate to some "internal" objects that can have more fields and evolve with the system without breaking clients. Those "internal" objects could even be protobuf messages. In this case, the duplication is a feature.
In smaller/shallow systems it's easy to directly use the protocol buffers without copying. You should realize that one day you may need to convert to some other version of the protos or make some sort of POJO or similar. But it's also possible that day never comes.
So the question isn't really "Is it okay to use protocol buffers in business code?", as that has few issues either way. The real question is "Is it worthwhile to allow the system internals to evolve separately from the API?"
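To illustrate the first option, the translation layer can be as small as this (a sketch; PersonMessage stands in for a generated proto type):

    using System;

    // Hypothetical generated message type (normally produced by protoc).
    public class PersonMessage
    {
        public string Name { get; set; }
        public string Phone { get; set; }
    }

    // Internal domain object: free to grow fields and change shape
    // without breaking wire clients. The duplication is the point.
    public class Person
    {
        public string Name { get; set; }
        public string Phone { get; set; }
        public DateTime LoadedAtUtc { get; set; } // internal-only field
    }

    public static class PersonMapper
    {
        public static Person FromMessage(PersonMessage msg) => new Person
        {
            Name = msg.Name,
            Phone = msg.Phone,
            LoadedAtUtc = DateTime.UtcNow
        };
    }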
I need to serialize an object using the BinaryFormatter in .NET 4.0 and send it across the wire (via SOAP, as a byte array) to a web service running under .NET 3.5, and vice versa. I've tested this scenario, and it seems to work fine.
There is one old question on SO regarding this scenario, which talks about .NET 1.x to 2.0; it did not leave me with a lot of confidence in the approach.
So it works in my test harness, but I can't test every possible variation of the object, so I need some theoretical underpinnings.
As a rule, can objects serialize/deserialize across different framework versions? Is this an accepted scenario or a hack that worked in my case?
If by "binary" you mean BinaryFormatter, then it is already hugely intolerant between versions, since it is strictly tied to type metadata (unless you work really hard with custom bindings). As such, it is only strictly reliable when both ends are using exactly the same implementations. Even changing a property to/from an automatically implemented property is a breaking change.
This isn't a failing of "binary", but a feature of BinaryFormatter. Other binary serializers don't have this issue. For example, protobuf-net works between OS, between frameworks, etc - since the format a: doesn't care about your specific types, and b: is fixed to a published spec.
If you are using BinaryFormatter for this currently, then IMO yes: you should explicitly test every API, since any type could change an implementation detail. And unfortunately, since BF has a habit of pulling in unexpected data (via events, etc.), even this isn't necessarily enough to validate the real usage.
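The "custom bindings" escape hatch mentioned above is a SerializationBinder; a sketch of the idea (the resolution policy here is illustrative only, and needs the usual security care):

    using System;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Formatters.Binary;

    // Illustrative policy only: resolve the serialized type name against
    // whatever assembly version is loaded locally, instead of demanding
    // the exact version recorded in the payload.
    sealed class LocalTypeBinder : SerializationBinder
    {
        public override Type BindToType(string assemblyName, string typeName)
        {
            string shortName = assemblyName.Split(',')[0]; // drop Version=, etc.
            return Type.GetType(typeName + ", " + shortName)
                   ?? Type.GetType(typeName);
        }
    }

    // Usage: var formatter = new BinaryFormatter { Binder = new LocalTypeBinder() };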
If the serialization format is XML (SOAP) or JSON, it should absolutely work, no problem. I am unsure how a binary-serialized object would react.
The biggest issue with serialization is when you have primitives that do not exist on the other side. The same problem exists when going to certain types in native code, so it is not a problem unique to services.
As a "rule", you can serialize across framework versions and even to clients written in Java, Delphi and COBOL (provided a version with web service ability - and provided you have exposed the serialized objects appropriately through a service endpoint).
I am trying to think whether there are any primitives in .NET that were not present in 1.x, as they would be problematic, as would any new framework objects you might try to serialize. You have a lot less danger with 2.0 (perhaps none at all?).
The more "open" your serialization is (ie, standards like JSON, SOAP, etc - simplified: JSON or XML, at least in most cases), the less likely you are to have issues. And, if you have issues, you can code around the automagic proxies, etc. As you move towards binary, you can have some incompatibility between an object serialized in 4.0 with WCF and a Remoting client.
I'm working with some .NET services that have the potential to process significantly large XML documents, and I need to ensure that all processing is done in a streaming / pipelining fashion. I'm already using the XmlReader and XmlWriter classes. My question is, what is the best way to programmatically provide a filter into the reader and writer (either, depending upon the flow)?
(I am not looking for XSLT. I already do a lot with XSLT, and many of the things I'm looking to do are outside the scope of XSLT - or at least, implementing within XSLT would not be ideal.)
In Java and SAX, this would best be handled through an XMLFilterImpl. I do not see that .NET provides anything similar for working with an XmlReader. I did find this blog post, "On creating custom XmlReaders/XmlWriters in .NET 2.0, Part 2", which includes the following (I've fixed the first link, which was broken in the original post):
Here is the idea - have a utility wrapper class, which wraps XmlReader/XmlWriter and does nothing else. Then derive from this class and override the methods you are interested in. These utility wrappers are called XmlWrapingReader and XmlWrapingWriter. They are part of the System.Xml namespace, but unfortunately they are internal - the Microsoft XML team considered making them public, but in the Whidbey release rush decided to postpone the issue. Happily, these classes, being pure wrappers, have no logic whatsoever, so anybody who needs them can indeed create them in 10 minutes. But to save you those 10 minutes I post these wrappers here. I will include XmlWrapingReader and XmlWrapingWriter in the next Mvp.Xml library release.
These two classes (XmlWrappingReader and XmlWrappingWriter) from the Mvp.Xml library are currently meeting my needs nicely. (As an added bonus, it is a free and open-source library, BSD-licensed.) However, due to the stale status of the project, I do have some concerns about including these classes in a contracted, commercial development project that will be handed off. The last release of Mvp.Xml was 4.5 years ago, in July 2007. Additionally, there is this comment from a "project coordinator" in response to a project discussion:
Anyway, this is not really a supported project anymore. All devs moved out. But it's open source; you are on your own.
I've also found SAX equivalent in .Net, but SAXDotNet doesn't seem to be in any better shape, with its last release being in 2006.
I'm well aware that a stale project doesn't necessarily mean it is any less usable, and I will be moving forward with the two wrapper classes from the Mvp.Xml library - at least for now.
Are there any better alternatives that I should be considering? (Again, any solution must not require the entire XML to exist in memory at any one time - whether as a DOM, a string, or otherwise.) Are there any other libraries available (preferably from a more active project), or maybe something within the LINQ features that would meet these requirements?
Personally I find that writing a pipeline of filters works much better with a push model than a pull model, although both are possible. With a pull model, a filter that needs to generate multiple output events in response to a single input event is quite tricky to program, though of course it can be done by keeping track of the state. So I think that looking for a SAX-like approach makes sense.
I would look again at SaxDotNet or equivalents. Be prepared to look at the source code and bend it to your needs; consider contributing back your improvements. Intrinsically the job it is doing is very simple: a loop that reads events from the (pull) input and writes events to the (push) output. In fact, it's so simple that perhaps the reason it hasn't changed since 2006 is that it doesn't need to.
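If you do end up rolling your own, the wrapping really is mostly mechanical forwarding. A sketch (modern C# syntax; it forwards the abstract surface plus Close and HasValue, in the spirit of Mvp.Xml's XmlWrappingReader):

    using System.Xml;

    // A delegating reader: every member forwards to a wrapped reader,
    // and subclasses override only what they need.
    public class WrappingXmlReader : XmlReader
    {
        protected readonly XmlReader Inner;
        public WrappingXmlReader(XmlReader inner) { Inner = inner; }

        public override int AttributeCount => Inner.AttributeCount;
        public override string BaseURI => Inner.BaseURI;
        public override int Depth => Inner.Depth;
        public override bool EOF => Inner.EOF;
        public override bool HasValue => Inner.HasValue;
        public override bool IsEmptyElement => Inner.IsEmptyElement;
        public override string LocalName => Inner.LocalName;
        public override string NamespaceURI => Inner.NamespaceURI;
        public override XmlNameTable NameTable => Inner.NameTable;
        public override XmlNodeType NodeType => Inner.NodeType;
        public override string Prefix => Inner.Prefix;
        public override ReadState ReadState => Inner.ReadState;
        public override string Value => Inner.Value;

        public override void Close() => Inner.Close();
        public override string GetAttribute(int i) => Inner.GetAttribute(i);
        public override string GetAttribute(string name) => Inner.GetAttribute(name);
        public override string GetAttribute(string name, string ns) => Inner.GetAttribute(name, ns);
        public override string LookupNamespace(string prefix) => Inner.LookupNamespace(prefix);
        public override bool MoveToAttribute(string name) => Inner.MoveToAttribute(name);
        public override bool MoveToAttribute(string name, string ns) => Inner.MoveToAttribute(name, ns);
        public override bool MoveToElement() => Inner.MoveToElement();
        public override bool MoveToFirstAttribute() => Inner.MoveToFirstAttribute();
        public override bool MoveToNextAttribute() => Inner.MoveToNextAttribute();
        public override bool Read() => Inner.Read();
        public override bool ReadAttributeValue() => Inner.ReadAttributeValue();
        public override void ResolveEntity() => Inner.ResolveEntity();
    }

    // Example filter: silently drop comment nodes while streaming.
    public sealed class SkipCommentsReader : WrappingXmlReader
    {
        public SkipCommentsReader(XmlReader inner) : base(inner) { }
        public override bool Read()
        {
            bool ok = base.Read();
            while (ok && NodeType == XmlNodeType.Comment) ok = base.Read();
            return ok;
        }
    }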
I want to separate the modules of my program so they can communicate with each other. They could be on the same computer, but possibly on different ones.
I was considering two methods:
Create a class with all the details and send it off to the communication layer. This layer serializes it and sends it; the other side deserializes it back into the class and then handles it further.
Create a hashtable (a key/value thing), put all the data in it, and send it off to the communication layer, etc.
So it boils down to hashtable vs class.
If I think 'loosely coupled', I favor the hashtable. It's easy to have one module updated and include new extra params in the hashtable without updating the other side.
Then again, with a class I get compile-time type checking instead of runtime checking.
Has anyone tackled this before, and do you have suggestions?
Thanks!
Edit: I've awarded points to the answer that was most relevant to my original question, although it isn't the one that was upvoted the most.
It sounds like you simply want to incorporate some IPC (Inter-Process Communication) into your system.
The best way of accomplishing this in .NET (3.0 onwards) is with the Windows Communication Foundation (WCF) - a generic framework developed by Microsoft for communication between programs in various different manners (transports) on a common basis.
Although I suspect you will probably want to use named pipes for efficiency and robustness, there are a number of other transports available, such as TCP and HTTP (see this MSDN article), not to mention a variety of serialisation formats, from binary to XML to JSON.
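For a flavour of what the named-pipe route looks like, here is a minimal self-hosted sketch (the IGreeter contract is invented for illustration):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IGreeter
    {
        [OperationContract]
        string Greet(string name);
    }

    public class Greeter : IGreeter
    {
        public string Greet(string name) => $"Hello, {name}";
    }

    class Program
    {
        static void Main()
        {
            // Host the service over a named pipe...
            using (var host = new ServiceHost(typeof(Greeter), new Uri("net.pipe://localhost")))
            {
                host.AddServiceEndpoint(typeof(IGreeter), new NetNamedPipeBinding(), "greeter");
                host.Open();

                // ...and call it through a typed proxy.
                var factory = new ChannelFactory<IGreeter>(
                    new NetNamedPipeBinding(),
                    new EndpointAddress("net.pipe://localhost/greeter"));
                IGreeter proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Greet("world")); // Hello, world
            }
        }
    }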
One tends to hit this kind of problem in distributed systems design. It surfaces in web services (where the WSDL defines the parameter and return types) and in messaging systems where the message formats might be XML or some other well-defined format. The problem of controlling the coupling of client and server remains in all cases.
What happens with your hash table? Suppose your request contains "NAME" and "PHONE-NUMBER", and suddenly you realise that you need to differentiate "LANDLINE-NUMBER" and "CELL-NUMBER". If you just change the hash table entries to use new values, then your server needs changing at the same time. Suppose at this point you don't just have one client and one server, but are perhaps dealing with some kind of exchange or broker system: many clients implemented by many teams, many servers implemented by many teams. Asking all of them to upgrade to a new message format at the same time is quite an undertaking.
Hence we tend to seek backward-compatible solutions such as additive change: we preserve "PHONE-NUMBER" and add the new fields. The server now tolerates messages containing either the old or the new format.
Different distribution technologies have different built-in degrees of tolerance for backward compatibility. When dealing with serialized classes, can you deal with old and new versions? When dealing with WSDL, will the message parsers tolerate additive change?
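To make "additive change" concrete in .NET terms, a sketch using WCF data contracts (type names invented):

    using System.Runtime.Serialization;

    // v1 contract.
    [DataContract]
    public class ContactV1
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string PhoneNumber { get; set; }
    }

    // v2: additive change. Old messages (without the new fields) still
    // deserialize, because the added members are optional (IsRequired
    // defaults to false), and old readers skip elements they don't know.
    [DataContract]
    public class ContactV2
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string PhoneNumber { get; set; } // preserved
        [DataMember(IsRequired = false)] public string LandlineNumber { get; set; }
        [DataMember(IsRequired = false)] public string CellNumber { get; set; }
    }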
I would follow this thought process:
1) Will you have a simple relationship between client and server? For example, do you code and control both, and are you free to dictate their release cycles? If "no", then favour flexibility and use hashtables or XML.
2) Even if you are in control, look at how easily your serialization framework supports versioning. It's likely that a strongly typed, serialized class interface will be easier to work with, provided you have a clear picture of what it will take to make a change to the interface.
You can use sockets, Remoting, or WCF; each has pros and cons.
If performance is not crucial, you can use WCF and serialize and deserialize your classes; for maximum performance, I recommend sockets.
Whatever happened to the built-in support for Remoting?
http://msdn.microsoft.com/en-us/library/aa185916.aspx
It works over TCP/IP or IPC if you want. It's quicker than WCF, and it's pretty transparent to your code.
In our experience of using WCF extensively over the last few years with various bindings, we found WCF not to be worth the hassle.
It is just too complicated to use WCF correctly, including handling errors on channels properly while retaining good performance (we gave up on high performance with WCF early on).
For authenticated client scenarios we switched to HTTP REST (without WCF) with JSON/protobuf payloads.
For high-speed non-authenticated scenarios (or at least non-Kerberos-authenticated scenarios) we are now using ZeroMQ and protobuf.
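For context, the protobuf payloading looks like this with protobuf-net (a minimal sketch; the Quote type is invented):

    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class Quote
    {
        [ProtoMember(1)] public string Symbol { get; set; }
        [ProtoMember(2)] public double Price { get; set; }
    }

    static class Wire
    {
        // protobuf-net writes a compact, spec-defined binary format.
        public static byte[] ToBytes(Quote q)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, q);
                return ms.ToArray();
            }
        }

        public static Quote FromBytes(byte[] data)
        {
            using (var ms = new MemoryStream(data))
            {
                return Serializer.Deserialize<Quote>(ms);
            }
        }
    }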
I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene, I thought they had hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger, and I looked forward to each release.
However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and for the number of objects contained in a message, the complexity of the security model, disposing of proxies when they fault, and finally moving back to defining interfaces in code rather than in XML.
It just does not work out of the box, and I think it should. We found all of the above issues either while testing ourselves or when our products were out on site.
I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.
I suppose what I'm asking is:
Am I looking at WCF the wrong way?
What strengths does it have over the alternatives?
Under what circumstances should I choose to use WCF?
OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)
Some clarifications
My main pain points with WCF fall into the following areas:
While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until the defaults are overridden:
The size of string that can be passed can't be over 8 KB
The number of objects that can be passed in a single message is restricted
Proxies do not automatically recover from failures
The amount of configuration available is a good thing, but understanding it all, and what to use where and under which circumstances, can be difficult, especially when deploying software on site with different security requirements, etc. Speaking of configuration, we've had to hide lots of ours in a back-end database, because security and network people on site were trying to change things in configuration files without understanding them.
The configuration of the interfaces is kept in code rather than moving to explicitly defined interfaces in XML, which could be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish, and certain code generators choke on it.
I know the world moves on. I've moved on a number of times over the (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading.
I just think there should be simpler configuration/deployment options available, easier set-up, and better configuration management (an SQL config provider, maybe, rather than just the web.config/app.config files).
I use WCF all the time now and I share your pain. It seems like it was grossly over-engineered, but we are going to be stuck with it for a long, long time, so I'm trying to learn it.
One thing I am certain about, XML sucks. I've had nothing but problems using XML to control it and have since switched to handling everything via code.
The concerns you listed were:
The size of string that can be passed can't be over 8 KB
The number of objects that can be passed in a single message is restricted
Proxies do not automatically recover from failures
The amount of configuration available is a good thing, but understanding it all, and what to use where and under which circumstances, can be difficult, especially when deploying software on site with different security requirements, etc. Speaking of configuration, we've had to hide lots of ours in a back-end database, because security and network people on site were trying to change things in configuration files without understanding them.
The configuration of the interfaces is kept in code rather than moving to explicitly defined interfaces in XML, which could be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish, and certain code generators choke on it.
Here's my take:
(1) addressed a valid concern that customers had with ASMX: it was too wide open, with no way to easily control it. The 8 KB limit is easily lifted if you know where to look. I guess you can count that as a surprise, but it's more of a one-time thing; once you know about it, you can lift it and be done with it forever, if you choose.
(2) is also configurable (see the sketch after this list).
(3) is known, but there are boilerplate ways to work around it. The StockTrader code, for example, demonstrates a proven pattern; you can re-use that code in your own app. I'm not sure if this is fixed in WCF for .NET 4.0; I know it was an open request.
(4) The config is a beast. This is a concern for a lot of people. The problem is that WCF is so flexible, and the config for all of that flexibility is exposed through XML files, which can be overwhelming. An approach that seems to work is to take it in small bites, as you need it.
(5) I don't understand.
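On (1) and (2), lifting the defaults in code looks roughly like this (illustrative numbers; the same knobs exist in XML config):

    using System.ServiceModel;

    class BindingSetup
    {
        static BasicHttpBinding CreateRelaxedBinding()
        {
            var binding = new BasicHttpBinding
            {
                // (1): the default MaxReceivedMessageSize is 65,536 bytes.
                MaxReceivedMessageSize = 10 * 1024 * 1024
            };
            // The 8 K string limit lives in the reader quotas (default 8,192).
            binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;
            binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;
            return binding;
        }
    }

    // (2): the object-count quota is the serializer's object-graph limit,
    // e.g. [ServiceBehavior(MaxItemsInObjectGraph = 1000000)] on the service.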
I vastly prefer ASP.NET MVC and Web API over WCF. If I had to summarize WCF to a developer who was just being introduced to it, I would say, "WCF is a well-meaning attempt to replace over-engineered, Java EE-style RPC development." Unfortunately, many of the decisions made require you to become an expert in configuring low-level, unimportant items (message sizes, timeouts, uninteresting protocol elements, etc.) while abstracting away absolutely critical pieces (URL design, parameter serialization, response serialization, etc.). The difference in productivity and aggravation between teams I know using WCF and those using Web API is night and day.
To come clean a little: I have always hated the core concept of .NET Remoting. I feel that developers need a thorough understanding of the resource structure of their application and how those resources are serialized. Furthermore, the use of the POST verb for simple data retrieval is worrisome in a read-heavy application that needs to scale.
I'll address the rest of your issues after clarification. In the meantime, I can address your question on when you should choose to use WCF: always.
WCF is the replacement for the old ASMX technologies, including WSE. It is also the replacement for .NET Remoting. It is the only technology upon which high-level communications features in .NET will be based for the foreseeable future.
For example, consider Windows Azure. It was not inevitable that the new concept of "cloud computing" would have its communications aspects covered by WCF. Yet, WCF was flexible enough to be extended to cover those cases, with very little change in code.
If you're having trouble with WCF, then you'd do well to make sure Microsoft knows about it. WCF is the present and future of web service and other service-oriented development in .NET, so they've got a very strong incentive to listen to you and resolve your pain points. Either contact them directly through Connect, or ask questions here on SO (tag with WCF, please), and a lot of people will help you.
Biggest advantage of using WCF from a programmer's point of view: it separates the definition of exposed services (operations, contracts, etc.) from the protocol-specific details, unlike ASMX, where you expose a class as a web service directly in the code using attributes. Using a real example of mine: we were able to easily switch the transport protocol between web services and named pipes, whichever suited the deployment and performance needs better, without changing a line of code.
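As a sketch of that switch (contract names invented), the transport change is just a different endpoint on the same host:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IEcho
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEcho
    {
        public string Echo(string text) => text;
    }

    class Host
    {
        static void Main()
        {
            // One service, two transports; the service code never changes,
            // only the endpoints/bindings do.
            var host = new ServiceHost(typeof(EchoService),
                new Uri("http://localhost:8080/echo"),
                new Uri("net.pipe://localhost"));
            host.AddServiceEndpoint(typeof(IEcho), new BasicHttpBinding(), "");
            host.AddServiceEndpoint(typeof(IEcho), new NetNamedPipeBinding(), "echo");
            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }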
WCF is intended for SOA methodologies. Working with it professionally is a nightmare. I delivered a SOA solution using WCF as the tool and, hell, hundreds of configuration settings and hidden tips! My past distributed solutions using old-style web services and Remoting were more stable. I've spent days working out the solution for the error "The underlying connection was closed: An unexpected error occurred", which makes no sense when it happens for one method among four in the same contract. I'm very disappointed. It took me back in time to when .NET was first introduced with lots of promises, and when we got hands-on, hell, lots of problems came up!
To address the maintenance nightmare of application config, standards like UDDI and WS-Discovery exist; WS-Discovery will be supported by WCF in .NET 4.0.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Can you be more explicit? I think you are talking about service behavior configured in code.
You can easily code behavior extensions to configure what you are talking about in a config file instead of in code, BUT I think that if Microsoft didn't do that, there is a good reason.
For example, consider a service with this behavior:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall, ConcurrencyMode=ConcurrencyMode.Single)]
The implementation knows that the instance is not shared between multiple threads, so it is developed differently from one marked:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
In this case, the service implementation has to take care of concurrency problems itself.
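To sketch why (with an invented ICounter contract):

    using System.ServiceModel;

    [ServiceContract]
    public interface ICounter
    {
        [OperationContract]
        int Increment();
    }

    // PerCall + Single: every request gets a fresh instance on one thread,
    // so plain instance state needs no locking.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class PerCallCounter : ICounter
    {
        private int _count;                 // never shared
        public int Increment() => ++_count; // always returns 1 per call
    }

    // Single + Multiple: one instance shared by concurrent callers,
    // so the implementation must guard its own state.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class SingletonCounter : ICounter
    {
        private int _count;
        private readonly object _gate = new object();
        public int Increment() { lock (_gate) { return ++_count; } }
    }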
The implementation is coupled to the ServiceBehavior attribute, so moving this behavior into an XML file is not a good idea.
What if you could change an InstanceContextMode.PerCall service to an InstanceContextMode.Single service in the config file? You would break the application!
Judging by how you mention XML and SQL, you are using WCF to build a web application or an actual web service (a service on the Web, not just SOAP exchange).
It helps to think about WCF as a replacement for .NET Remoting (or DCOM, CORBA, etc.), which also happens to support web services as one of its transports. Interfaces declared in assemblies, the behavior of proxies, certain configuration options, and other aspects of the framework that look unnatural and complicated from the perspective of web apps actually do work out of the box for DCOM-style systems of distributed objects.
To answer the question: no, you are not missing anything, and using WCF for web applications is complicated, because WCF is not a framework for building web applications. Such a framework could probably be built on top of it, but I would hate to see WCF itself changed to move into the web realm.