I am considering replacing a .NET WCF duplex endpoint with gRPC. Like most frameworks, WCF allows the data to be simple contract objects, so what you use over the wire is what you can use in your processing code (if you are OK with that coupling). But with gRPC and Google Protocol Buffers (GPB), it looks like I can't do that, and I have two options. One is to translate my existing .NET objects on both ends of the communication, which adds extra labor and complexity. The other is to use the protocol buffer messages verbatim in business code, which couples business code to the transport technology.
So my question is: what is the best way to use gRPC while avoiding both translation and direct use of buffers in business code?
Both can be valid options: copy or use directly.
In larger/deeper systems it's good to translate to some "internal" objects that can have more fields and evolve with the system without breaking clients. Those "internal" objects could even be protobuf messages themselves. In this case, the duplication is a feature.
In smaller/shallower systems it's easy to use the protocol buffers directly without copying. You should realize that one day you may need to convert to some other version of the protos, or introduce some sort of POJO or similar; but it's also possible that day never comes.
So the question isn't really "Is it okay to use protocol buffers in business code?", as that generally causes few issues. The real question is, "Is it worthwhile to let the system internals evolve separately from its API?"
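To make the "translate to internal objects" option concrete, here is a minimal C# sketch. CustomerProto stands in for a protoc-generated message type, and all the names are hypothetical:

```csharp
using System;

// Stand-in for the class protoc would generate from:
//   message CustomerProto { string name = 1; string phone = 2; }
public sealed class CustomerProto
{
    public string Name { get; set; }
    public string Phone { get; set; }
}

// Hand-written internal type, free to grow fields without touching the wire.
public sealed class Customer
{
    public string Name { get; set; }
    public string Phone { get; set; }
    public DateTime LastContacted { get; set; } // internal-only field
}

public static class CustomerMapper
{
    // Translate at the transport boundary so business code never
    // sees the generated protobuf types.
    public static Customer FromProto(CustomerProto proto) =>
        new Customer { Name = proto.Name, Phone = proto.Phone };

    // LastContacted deliberately never goes over the wire.
    public static CustomerProto ToProto(Customer customer) =>
        new CustomerProto { Name = customer.Name, Phone = customer.Phone };
}
```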
Related: this question is a follow-up to a previously asked question: link
I have to develop a Windows service in .NET/C#. The service must be consumed by an application written in VB6. The service implements Quartz.NET to handle scheduled tasks, and must implement listeners so that functions can be started by the VB6 application.
The requirement from the VB6 team is to be able to use Winsock. Therefore I have to use sockets and listeners. The problem is that I have no experience with that; I'm more of a WCF guy.
I'm looking at the TcpListener class now to see if this will fit the requirement.
My first question: will the VB6 team be able to consume the service if I implement the TcpListener class, keeping in mind the shortcomings of using VB6 (such as data structures and binary formatting)?
My second question: Let's say 3 functions must be available for the VB6 application, and 1 listener is created. What are the best practices for the implementation of this?
Once again: any advice is much appreciated!
Passing payloads as XML, textual key/value pairs, JSON, or another loose serialization format can help future-proof your protocol. It allows you to use, and even add, new optional method arguments on the server end at will.
In addition to a "method" field you can have a "protocol version" field. When a breaking change is made (new required args, changes in existing arg types, etc.) you can use the version number to handle both old and new clients during a transition period - which might span years.
As far as providing message framing on top of the TCP stream goes, you can use simple EOM delimiters of some sort, or a message header containing a binary or text length prefix.
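To make the length-prefix idea concrete, here is a minimal C# sketch of reading one framed message with TcpListener; the 4-byte big-endian prefix and the port number are assumptions, not a prescribed wire format:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class FramedServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // port is arbitrary
        listener.Start();
        using TcpClient client = listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        // Read the 4-byte length prefix (big-endian, which VB6 Winsock
        // code can produce with simple byte arithmetic).
        byte[] prefix = new byte[4];
        ReadFully(stream, prefix);
        int length = (prefix[0] << 24) | (prefix[1] << 16) | (prefix[2] << 8) | prefix[3];

        // Read exactly that many payload bytes. TCP is a byte stream, so
        // a single Read() call may return less than the full message.
        byte[] payload = new byte[length];
        ReadFully(stream, payload);

        Console.WriteLine($"Received {length} bytes");
    }

    static void ReadFully(NetworkStream stream, byte[] buffer)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = stream.Read(buffer, offset, buffer.Length - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
    }
}
```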
You might also consider Named Pipes. TransactNamedPipe() is fairly easy to call from a VB6 program. This takes care of the message-boundary issue for you - assuming you want an RPC-style protocol rather than async send/receive.
Or you might go all the way and use MSMQ. Private machine queues don't require extensive administration. But even though both VB6 and .NET have excellent MSMQ support, most devs know little about it.
Why and when to use RESTful services?
I know how to create a WCF webservice. But I am not able to comprehend when to use a SOAP based service and when to use a RESTful service. I read many articles on SOAP vs REST, but still, I don't have a clear picture of why and when to use RESTful services.
What are some concrete points in order to easily decide between these services?
This is a worthy question, and one to which a short answer does no justice. Forgetting about the fact that most people may be more familiar with SOAP than REST, I think there are a few key points in this:
First and foremost, I would suggest using REST wherever it fits naturally. If your main use scenarios involve reading and updating data atoms ("resources"), REST provides a more lightweight, discoverable and straightforward approach to data access. Also, building really thin clients (mobile devices, JavaScript, even shell scripts) is often easier with REST.
For example: if your data model is all about customers and your main operations involve reading the customers and writing back changes, REST will do just fine. Using the HTTP verbs GET/POST/PUT/DELETE is an excellent way to make the protocol discoverable and easy to use, even for somebody not intimately familiar with your application.
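To illustrate how discoverable that is, here is a minimal C# sketch of the customer scenario driven purely by HTTP verbs; the host name and URIs are hypothetical:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RestCustomerSketch
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("https://example.com/") };

        // Read a customer, update it, then delete it - plain HTTP verbs,
        // no service-specific protocol knowledge required.
        HttpResponseMessage read = await http.GetAsync("customers/contoso");

        var body = new StringContent("{\"name\":\"Contoso Ltd\"}", Encoding.UTF8, "application/json");
        HttpResponseMessage put = await http.PutAsync("customers/contoso", body);

        HttpResponseMessage delete = await http.DeleteAsync("customers/contoso");

        Console.WriteLine($"{read.StatusCode} {put.StatusCode} {delete.StatusCode}");
    }
}
```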
This, however, brings us to the second point.
What if you need to offer a web API with querying capabilities? A typical scenario might be "get me the 5 newest customers". Here, pure REST provides little in terms of API discoverability. Enter OData (www.odata.org), and you're rolling again; its URI-based query syntax adds a touch of well-known abstraction above the normally very simplistic, ID-based addressing of REST services.
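For illustration, the "5 newest customers" query might look like this in OData's URI query syntax (the entity set and property names are made up):

```
GET /odata/Customers?$orderby=CreatedOn desc&$top=5
```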
But then, there are aspects which can be reasonably hard to represent in terms of REST. Point three: If you can't model it reasonably cleanly, consider SOA.
For example, if a common usage scenario involves transitioning customers between workflow stages (say, "new customer", "credit request received", "credit approved"), modeling such stages with REST may prove complex. Should different stages be represented just as an attribute value in an entity? Or perhaps, should the different stages be modeled as containers wherein the customers lie? If it's an attribute, do you always want to do a full PUT when updating it? Should you perhaps use a custom HTTP verb ("APPROVE http://mysite/customers/contoso HTTP/1.0")?
These are valid questions to which there are no universal answers. Everything can be modeled in REST, but at some point the abstraction breaks down so much that much of REST's human-facing benefits (discoverability, ease of understanding) are lost. Of course, technical benefits (like all the HTTP-level goodness) can still be reaped, but in most realities they're not really the critical argument anyway.
Fourth and finally, there are things which the SOA model simply does great. Perhaps the most important of these is transactions. While it's a pretty complex generic problem in the WS-* world as well, generic transactions are rarely needed and can often be replaced with reasonably simple, atomic operations.
For example, consider a scenario where you want to create an operation that allows the merging of two customers and all their purchases under one account. Of course, all this needs to happen or not happen; a typical transaction scenario. Modeling this in REST requires a nontrivial amount of effort. For a specialized scenario such as this, the easy SOA approach would be to create one operation (MergeCustomers) which implements the transaction internally.
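As a rough sketch, such a coarse-grained SOA operation could be exposed as a WCF contract like the one below; the interface name, parameters, and string-URI return type are my assumptions, not anything prescribed:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    // One coarse-grained operation; the transaction lives entirely inside
    // the implementation, invisible to the caller. Returning the URI of the
    // merged customer lets callers continue with its REST representation.
    [OperationContract]
    string MergeCustomers(string sourceCustomerId, string targetCustomerId);
}
```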
For more advanced scenarios, the WS-* stack provides facilities not readily available in the REST world (including WS-Transaction, WS-Security and whatnot). While most APIs need none of this (or are better off implementing them in a more simple way), I don't think it's worth the effort to rewrite all that just to be 100% REST.
Look into the best of both worlds. For the vast majority of scenarios, it is completely acceptable to have the basic CRUD in REST and provide a few specialized operations in SOA.
Also, these APIs can be designed to act together. For example, what should a SOA-based MergeCustomers operation return? It might return a serialized copy of the merged customer, but in most cases I would opt for returning a URI of the REST resource that is the newly merged customer. This way, you would always have a single representation of the customer even if SOA were necessary for specialized scenarios.
The previous approach has the drawback that it requires client support for both REST and SOA. However, this is rarely a real problem (apart from the purely architectural perspective). The simplest clients usually have REST capabilities by the very definition of having an HTTP stack, and they rarely run the more complex operations.
Of course, your mileage may vary. The needs of your application (and its clients), local policies and backward-compatibility requirements often seem to dominate these discussions beforehand, so the REST vs. SOA debate is rarely settled on pure technical merit.
An eternal question! SOAP vs. REST....
SOAP is great because it's industrial-strength and the services are self-describing in a machine-readable and -interpretable way, e.g. your computer can read and understand a SOAP service and create a client for it. SOAP is great because its methods, and all the data it will ever pass around, are described and defined in great detail. But SOAP is a bit heavyweight - it takes a lot of infrastructure to support it. SOAP is also mostly method-oriented, e.g. you define your services by means of what methods can be executed on the service.
REST is much more lightweight - it uses only the established HTTP protocol and verbs, so any device that has an HTTP stack (and what doesn't, these days) can access a REST service. No SOAP heavy lifting needed. REST is also more resource-centric, e.g. you think about resources, collections of resources, and their properties, and how to deal with those (so you're basically back to the core functions of create, read, update, delete). REST doesn't currently have any machine-readable service description, so you're either left to try and see, or you need some documentation from the service provider to know what resources you have at hand.
My personal bottom line: if you want to expose some collections of data, and reach (being able to access it from any device) and ease-of-use is more important than reliability and other enterprise-grade features, then REST is the way to go. If you need serious, business-to-business, well-defined, well-documented services that implement things like reliability, transaction support etc. - then you're definitely on the right track with SOAP.
But I really don't see this as an exclusively REST-or-SOAP kind of choice - there'll be great scenarios for both.
SOAP will give you a richer set of functionality, and you can create strongly typed services, whereas REST is normally more lightweight and easier to get running.
Edit: If you look at the WS-* standards, there is a lot of stuff: http://www.soaspecs.com/ws.php. Development tools such as Visual Studio make a lot of this very easy to access, though. Typically WS-* is used more for enterprise SOA-type development, while REST is used for public APIs on web sites (at least, that's what I think).
Android (the mobile OS) does not have support for SOAP. RESTful services are a much better fit when mobile devices are the target.
I personally prefer RESTful services in most cases since they are more lightweight, easier to debug and integrate against (unless you use a code generator for SOAP).
One point that hasn't been mentioned is overhead. A recent REST project I worked on involved file transfers with items up to 2 GB allowed. Had it been implemented as a SOAP service, that would have meant an automatic 30%+ increase in data, because Base64 encoding expands every 3 bytes of binary into 4 characters. The overhead of a REST service is all in the headers; the rest is straight data.
I want to separate modules of my program to communicate with each other. They could be on the same computer, but possibly on different ones.
I was considering two methods:
Create a class with all the details and send it off to the communication layer. This layer serializes it and sends it; the other side deserializes it back into the class and then handles it further.
Create a hashtable (a key/value thing), put all the data in it, and send it off to the communication layer, etc.
So it boils down to hashtable vs class.
If I think 'loosely coupled', I favor the hashtable. It's easy to have one module updated and include new extra params in the hashtable without updating the other side.
Then again, with a class I get compile-time type checking instead of runtime checking.
Has anyone tackled this previously and has suggestions about this?
Thanks!
edit:
I've awarded points to the answer which was most relevant to my original question, although it isn't the one which was upvoted the most
It sounds like you simply want to incorporate some IPC (Inter-Process Communication) into your system.
The best way of accomplishing this in .NET (3.0 onwards) is with the Windows Communication Foundation (WCF) - a generic framework developed by Microsoft for communication between programs in various different manners (transports) on a common basis.
Although I suspect you will probably want to use named pipes for the purposes of efficiency and robustness, there are a number of other transports available such as TCP and HTTP (see this MSDN article), not to mention a variety of serialisation formats from binary to XML to JSON.
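For illustration, a minimal named-pipe WCF host could look like the sketch below; the contract, implementation, and pipe address are all made-up names:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IModuleLink
{
    [OperationContract]
    string Ping(string message);
}

public class ModuleLink : IModuleLink
{
    public string Ping(string message) => $"pong: {message}";
}

class Program
{
    static void Main()
    {
        // Named pipes: a fast, local-machine-only WCF transport.
        var host = new ServiceHost(typeof(ModuleLink),
            new Uri("net.pipe://localhost/modulelink"));
        host.AddServiceEndpoint(typeof(IModuleLink),
            new NetNamedPipeBinding(), string.Empty);
        host.Open();

        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```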
One tends to hit this kind of problem in distributed systems design. It surfaces in web services (the WSDL defining the parameters and return types) and in messaging systems where the message format might be XML or some other well-defined format. The problem of controlling the coupling of client and server remains in all cases.
What happens with your hash table? Suppose your request contains "NAME" and "PHONE-NUMBER", and suddenly you realise that you need to differentiate "LANDLINE-NUMBER" and "CELL-NUMBER". If you just change the hash table entries to use new values, then your server needs changing at the same time. Suppose at this point you don't just have one client and one server, but are perhaps dealing with some kind of exchange or broker systems, many clients implemented by many teams, many servers implemented by many teams. Asking all of them to upgrade to a new message format at the same time is quite an undertaking.
Hence we tend to seek backward-compatible solutions such as additive change: we preserve "PHONE-NUMBER" and add the new fields. The server then tolerates messages containing either the old or the new format.
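A small C# sketch of that tolerant-reader style, using the made-up key names from above:

```csharp
using System.Collections.Generic;

static class ContactReader
{
    // Accept both message generations: prefer the new LANDLINE-NUMBER
    // key, fall back to the old PHONE-NUMBER key if it is absent.
    public static string GetLandline(IDictionary<string, string> message)
    {
        if (message.TryGetValue("LANDLINE-NUMBER", out var landline))
            return landline;
        return message.TryGetValue("PHONE-NUMBER", out var phone) ? phone : null;
    }
}
```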
Different distribution technologies have different built-in degrees of tolerance for back-compatibility. When dealing with serialized classes, can you deal with old and new versions? When dealing with WSDL, will the message parsers tolerate additive change?
I would follow this thought process:
1) Will you have a simple relationship between client and server? For example, do you code and control both, and are you free to dictate their release cycles? If "no", then favour flexibility: use hash tables or XML.
2) Even if you are in control, look at how easily your serialization framework supports versioning. A strongly typed, serialized class interface will likely be easier to work with, provided you have a clear picture of what it will take to make a change to the interface.
You can use sockets, Remoting, or WCF; each has pros and cons.
If performance is not crucial, you can use WCF and serialize and deserialize your classes; for maximum performance, I recommend sockets.
Whatever happened to the built-in support for Remoting?
http://msdn.microsoft.com/en-us/library/aa185916.aspx
It works over TCP/IP or IPC if you want. It's quicker than WCF, and is pretty transparent to your code.
In our experience of using WCF extensively over the last few years with various bindings, we found WCF not to be worth the hassle.
It is just too complicated to use WCF correctly, including handling errors on channels properly, while retaining good performance (we gave up on high performance with WCF early on).
For authenticated client scenarios we switched to HTTP REST (without WCF) and do JSON/protobuf payloading.
For high-speed non-authenticated scenarios (or at least non-Kerberos-authenticated scenarios) we are using ZeroMQ and protobuf now.
Can you give me some advice on how to best ensure that two applications (one in C#, the other in Java) will be compatible and efficient in exchanging data? Are there any gotchas that you have encountered?
The scenario is point to point, one host is a service provider and the other is a service consumer.
Have a look at protobuf data interchange format. A .NET implementation is also available.
JSON for descriptive data, and XML for general data types. If that is not efficient enough for you, you will need to roll your own codecs to handle the byte-ordering difference between C# and Java.
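The classic byte-ordering gotcha is that Java's DataOutputStream writes big-endian (network byte order), while BitConverter on typical .NET hosts is little-endian. A minimal sketch of converting explicitly on the C# side:

```csharp
using System;
using System.Net;

class ByteOrderDemo
{
    static void Main()
    {
        int value = 0x12345678;

        // Reverse to network byte order on little-endian hosts so that
        // Java's DataInputStream.readInt() sees the bytes it expects.
        int networkOrder = IPAddress.HostToNetworkOrder(value);
        byte[] wireBytes = BitConverter.GetBytes(networkOrder);

        Console.WriteLine(BitConverter.ToString(wireBytes)); // 12-34-56-78
    }
}
```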
Rather than focus on a particular technology, the best advice I can give is to spend time focusing on the interface between the two (whether that be a web service, a database, or something else entirely). If it is a web service, for example, focus on creating a clear WSDL document. Interface, interface, interface. For the most part, try to ignore the specific technologies on each end, outside of some prototyping to ensure both languages support your choices.
Also, outside of major roadblocks, don't focus on efficiency. Focus on clarity. You'll likely have two teams (i.e. different people) working on either end of this interface. Making sure they use it correctly is far more important than making things just a little faster.
If you have Java as the web server, you can use JAX-WS ( https://jax-ws.dev.java.net/ ) to create web services, and WCF on the .NET side to connect to the Java web server.
You can use something like XML (which isn't always that efficient), or you can come up with your own proprietary binary format (efficient, but a lot more work). I'd start with XML and, if bandwidth becomes a problem, switch to a proprietary binary format.
Something like SOAP (Wikipedia) is supported by both C# and Java.
We use C#/VB.NET for our web interfaces and Java for our thick client. We use XML and web services to communicate between the database and application servers. It works very well.
Make sure that you use a well-defined protocol to communicate the data, and write tests to ensure that the applications respond according to the contract.
This is such a broad question but I'd recommend focusing on standards that apply to both platforms; XML or some other standard form of serialization, using REST for services if they need to interoperate.
If you use XML, you can actually externalize your data access as XPath statements which can be stored in a shared resource used by both applications. That's a start.
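As a small illustration of that idea, the XPath expression below could just as well be loaded from a resource file shared by the C# and Java sides; the XML shape is hypothetical:

```csharp
using System;
using System.Xml;

class SharedXPathDemo
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<order><customer><name>Contoso</name></customer></order>");

        // In practice this expression would come from the shared resource,
        // not be hard-coded in each application.
        string customerNameXPath = "/order/customer/name";

        XmlNode node = doc.SelectSingleNode(customerNameXPath);
        Console.WriteLine(node?.InnerText); // prints "Contoso"
    }
}
```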
We are building a system that interacts with an external system over TCP/IP using the FIX protocol. I've used WCF to communicate from client to server where I had control over both client and server, but never with an external TCP/IP-based system. Is this possible with WCF? If so, could the community provide links to get me started and pointed in the right direction?
Unfortunately I do not have much more information than what is supplied above, as we are still in the very early planning stages. What we know is that we have an external vendor whose system will communicate with our system over TCP/IP. We would like to use this as a learning opportunity and learn WCF.
Possible? Possibly yes, but it's going to take some work.
For starters, you will need to write a custom WCF transport channel that handles the specifics of your TCP/IP-based protocol (i.e. you'll need to write all the socket-handling code and hook it into the WCF channel model). This is because the TCP channel in WCF isn't meant for this kind of work; it uses a relatively proprietary and undocumented wire protocol.
I'm not familiar enough with FIX to say how complex it would be, but there are some gotchas when writing WCF channels, and the documentation in that area isn't great.
The second part you'll need to deal with is message encoding. To WCF, all messages are XML; that is, once a message is passed on to the WCF stack, it has to look like an XML infoset at runtime. FIX doesn't use XML (AFAIK), so you'll need to adapt it a bit.
There are two ways you can go about it:
The easy way: assume the server/client will use a specific interface and format for the data, and have your channel do all the hard work of translating the FIX messages to/from that format. The simplest example would be to have your WCF code use a simple service contract with one method taking a string, and then just encapsulate the FIX message string into the XML format that satisfies the data contract serializer for that contract (see the sketch after this list). The user code would still need to deal with decoding the FIX format later, though.
The harder way: do all the hard work in a custom WCF MessageEncoder. It's a bit more complex, but potentially cleaner and more reusable (and you could do more complex things like better streaming and so on).
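As a sketch of the first ("easy way") option, the service contract could be deliberately thin and just carry the raw FIX string; the contract and method names here are made up for illustration:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IFixGateway
{
    // The FIX message travels as an opaque string. The custom transport
    // channel wraps/unwraps it into the XML infoset WCF expects, and the
    // user code parses the FIX tags out of the string afterwards.
    [OperationContract]
    string Send(string rawFixMessage);
}
```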
The big question though is whether this is worth it. What is your reasoning for wanting to use WCF for this? Leveraging the programming model? I think that's an important consideration, but also keep in mind that the abstractions that WCF provides come at a price. In particular, some aspects of WCF can be problematic if you have very real-time requirements, which I understand is common in the kind of financial environment you're looking at.
If that's the case, it may very well be that you'd be better served by skipping WCF and sticking a little closer to the metal. You'll need to do the socket work anyway, so that's something to consider there.
Hope this helps :)
I don't think it is possible; at least, it won't be easy to set up, because you don't know the communication protocol of the other end, except that it's TCP and accepts FIX tags.
Why don't you open a raw TCP socket from within your WCF application? That should do the trick in a simpler manner.
I think so. I have a system that I almost got working that was supposed to do almost exactly that (WCF over HTTP from the internet). The server provider seemed not to want to allow it, though, so you will need the right permissions on that end to make it work.
Upshot: I don't see why not.
Not really - MS didn't make the TCP/IP connection handler to talk to non-WCF services; they assumed you'd write a web service to do that.
This is discussed here on SO.