Can you give me some advice on how to best ensure that two applications (one in C#, the other in Java) will be compatible and efficient in exchanging data? Are there any gotchas that you have encountered?
The scenario is point-to-point: one host is a service provider and the other is a service consumer.
Have a look at the protobuf data interchange format. A .NET implementation is also available.
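As a rough illustration of the .NET side, using the protobuf-net library (the Person type and field numbers here are made up):

    // A minimal protobuf-net sketch; the Person type and field
    // numbers are hypothetical.
    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class Person
    {
        [ProtoMember(1)] public string Name { get; set; }
        [ProtoMember(2)] public int Id { get; set; }
    }

    // Serialize to bytes that the Java side can parse with a
    // matching .proto definition:
    using (var stream = File.Create("person.bin"))
    {
        Serializer.Serialize(stream, new Person { Name = "Ann", Id = 1 });
    }

The Java side would use ordinary protoc-generated classes; the wire format is the same.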
JSON works well for descriptive data, and XML for general data types. If that is not efficient enough for you, you will need to roll your own codecs and handle the byte-ordering difference between the two platforms' defaults (Java's stream classes, e.g. DataOutputStream, write big-endian, while .NET's BinaryWriter writes little-endian).
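A minimal C# sketch of that conversion (purely illustrative):

    using System;
    using System.Net;

    // Java's DataInputStream.readInt() expects big-endian (network order),
    // while BitConverter follows the host's (usually little-endian) order,
    // so convert explicitly before putting the bytes on the wire:
    int value = 42;
    byte[] wire = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(value));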
Rather than focus on a particular technology, the best advice I can give is to spend time focusing on the interface between the two (whether that be a web service, a database, or something else entirely). If it is a web service, for example, focus on creating a clear WSDL document. Interface, interface, interface. For the most part, try to ignore the specific technologies on each end, outside of some prototyping to ensure both languages support your choices.
Also, outside of major roadblocks, don't focus on efficiency. Focus on clarity. You'll likely have two teams (i.e. different people) working on either end of this interface. Making sure they use it correctly is far more important than making things just a little faster.
If you have Java as the web server, you can use JAX-WS ( https://jax-ws.dev.java.net/ ) to create the webservices, and WCF on the .NET side to connect to the Java web server.
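On the .NET side, consuming the Java service can be as simple as pointing a WCF client at the published WSDL. A hand-rolled sketch using ChannelFactory (the contract and URL are made up):

    using System.ServiceModel;

    // Contract mirroring the JAX-WS service's WSDL (hypothetical).
    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        string GetQuote(string symbol);
    }

    // BasicHttpBinding speaks plain SOAP 1.1, which JAX-WS serves by default.
    var factory = new ChannelFactory<IQuoteService>(
        new BasicHttpBinding(),
        new EndpointAddress("http://javahost:8080/quotes"));
    IQuoteService client = factory.CreateChannel();
    string quote = client.GetQuote("MSFT");

In practice you'd generate the contract from the WSDL with svcutil rather than write it by hand, so the namespaces and SOAP actions line up.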
You can use something like XML (which isn't always that efficient), or you can come up with your own proprietary binary format (efficient, but a lot more work). I'd start with XML and, if bandwidth becomes a problem, switch to a proprietary binary format later.
Something like SOAP (Wikipedia) is supported by both C# and Java.
We use C#/VB.Net for our Web interfaces and Java for our thick client. We use XML and webservices to communicate between the database and application servers. It works very well.
Make sure that you use a well-defined protocol for communicating the data, and write tests to ensure that the applications respond according to the contract.
This is such a broad question but I'd recommend focusing on standards that apply to both platforms; XML or some other standard form of serialization, using REST for services if they need to interoperate.
If you use XML, you can actually externalize your data access as XPath statements which can be stored in a shared resource used by both applications. That's a start.
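For instance, both applications could read the same XPath expressions from a shared resource; on the .NET side that might look like this (the path and file names are made up):

    using System;
    using System.Xml;

    // The XPath string would be loaded from a resource file shared with
    // the Java application; it is hard-coded here for brevity.
    string customerNamePath = "/order/customer/name";

    var doc = new XmlDocument();
    doc.Load("order.xml");
    XmlNode name = doc.SelectSingleNode(customerNamePath);
    Console.WriteLine(name?.InnerText);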
I am considering replacing a .NET WCF duplex endpoint with gRPC. Like most frameworks, WCF allows the data to be simple contract objects, so what you use over the wire is what you can use in your processing code (if you are OK with that coupling). But with gRPC and protocol buffers, it looks like I can't do that, and I have two options. One is to translate my existing .NET objects on both ends of the communication, which adds extra labor/complexity. The other is to use the protocol buffer messages verbatim in business code, which couples business code to the transport technology.
So my question is: what is the best way to use gRPC while avoiding both translation and the direct use of buffer messages in business code?
Both can be valid options: copy or use directly.
In larger/deeper systems it's good to translate to some "internal" objects that can have more fields and morph to the system without breaking clients. Those "internal" objects could even be protobuf messages. In this case, the duplication is a feature.
In smaller/shallow systems it's easy to directly use the protocol buffers without copying. You should realize that one day you may need to convert to some other version of the protos or make some sort of POJO or similar. But it's also possible that day never comes.
So the question isn't really, "Is it okay to use protocol buffers in business code?", as that by itself causes few issues. The real question is, "Is it worthwhile to allow the system internals to evolve separately from its API?"
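If you do decide to translate, the mapping layer can stay thin. A rough sketch (the PersonMessage proto type and Person domain type are hypothetical):

    // The generated protobuf message (PersonMessage) stays at the edge;
    // the rest of the system works with the internal Person type.
    public class Person
    {
        public string Name { get; set; }
        public int Id { get; set; }
    }

    public static class PersonMapper
    {
        public static Person FromProto(PersonMessage m) =>
            new Person { Name = m.Name, Id = m.Id };

        public static PersonMessage ToProto(Person p) =>
            new PersonMessage { Name = p.Name, Id = p.Id };
    }

The duplication is mechanical, and it buys you the freedom to let the internal type grow fields that never appear on the wire.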
I have two applications: one in C#, the other in Java. I need a way to transfer data from the C# application in XML format to the Java application using some kind of service.
I have only worked with sockets before, but am looking for something less proprietary for future use with other applications. What other alternatives are there?
*Please note that the extent of my knowledge of working with sockets is a simple client/server written in Java.
If both programs run on the same machine, you could of course also use files, but in general, this is how it goes down:
Create a webservice in C#, implementing a method that exposes your data.
Use the wsimport tool provided with the JDK; point it at the WSDL file published by the service created above to generate Java classes to use as a SOAP client.
Use generated classes to consume webservice.
(I see now you insist on XML. So forget about it)
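For the first step, a minimal C# WCF service sketch (contract and data are made up) might be:

    using System.ServiceModel;

    // A toy service contract; hosting it with metadata publishing
    // enabled produces the WSDL that wsimport consumes.
    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        string GetData(int id);
    }

    public class DataService : IDataService
    {
        public string GetData(int id) => $"record-{id}";
    }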
These are completely distinct issues - it's like asking if I want to speak with you now, should we have a phone call in French or maybe mail correspondence in Mandarin. So it's:
A means of transferring the data (such as HTTP, or TCP, or whatever).
A common structure for the data.
Confusingly, both are regarded as 'protocols'.
Anyhow, I'd say protobuf over HTTP is the most obvious and straightforward thing to use.
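A bare-bones sketch of the sending side in C# (the URL is made up, and a Google.Protobuf-generated Person message is assumed):

    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using Google.Protobuf;

    // Serialize a protoc-generated message and POST the raw bytes.
    var person = new Person { Name = "Ann", Id = 1 };
    byte[] body;
    using (var ms = new MemoryStream())
    {
        person.WriteTo(ms);   // Google.Protobuf's stream extension
        body = ms.ToArray();
    }

    using var http = new HttpClient();
    var content = new ByteArrayContent(body);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/x-protobuf");
    await http.PostAsync("http://javahost:8080/person", content);

The receiving side parses the same bytes with its own generated classes; HTTP carries them, protobuf structures them.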
I'm looking for a technology targeted at building distributed applications. My friend advised me to use CORBA (a Java & C++ combination), but I have read that it's sort of obsolete. I'm planning to write a rather simple distributed application. What solutions would you advise? Thanks!
If you want to distribute your code logic to multiple servers and have it managed as a single entity, I would recommend CloudIQ Platform from Appistry. You can deploy Java, .NET and C/C++ code to the framework. From an administrative point of view, the servers work and act as one. When you submit a request for execution, the framework distributes the request to the best available worker, performing load balancing. With this framework, you can have producer/consumer, scatter/gather, and other parallel types of jobs.
The framework also monitors the execution of jobs, so if there is any type of hardware failure, other machines will get allocated the jobs that were running on the failed server.
CORBA is quite old. To choose a library or framework, the questions are: why do you want it to be distributed (what's the goal: performance/parallelization? scalability? physical constraints on the locations of parts of the system?)? What sort of nodes will be running the various parts? What languages would you rather use?
I recommend using ICE (Internet Communications Engine). ICE supports multiple operating system platforms (Windows, Linux, Solaris, Mac OS, iOS, Android...) and multiple development languages (C++, Java, .NET, Python, Ruby, PHP), and it is simpler.
You can use SOAP web services. I'm currently developing a distributed testing system in Python & .NET using SOAP, and it is easy to write and deploy.
There are a lot of different SOAP server/client libraries for different languages and platforms.
Yes, CORBA and technologies like COM and DCOM are all pretty much obsolete... I am not sure exactly what you want to accomplish, but I would look towards .NET Remoting to build distributed applications. If your application is really simple, you can even use mailslots or named pipes to pass simple data across a network.
As sinelaw mentioned, there are many questions to answer before a good suggestion can be made, but you may want to look at REST (http://en.wikipedia.org/wiki/Representational_State_Transfer) as a way to transfer data between applications. REST is nice in that what it can accept and return is flexible; for example, you can upload a file and return a PDF. Though it is typically used over HTTP, that isn't the only possible transport. It is language/platform agnostic.
If you want to go with something that is standardized then SOAP or REST is probably your best bet, if you want to be platform-independent. If you don't mind being restricted to Java/JVM or .NET then there are other options, but that becomes very restricting.
What type of data is being passed? How critical is security? What platforms/languages should be usable? What is the purpose of the program, the goal?
If you want a portable solution that can also be used with different protocols, WCF on Mono might be a good fit.
For .NET I suggest WCF; it's quite simple to implement and very flexible. As for CORBA, it's a good choice if your goal is to understand distributed applications deeply, but it's no longer recommended for real projects; these days it is very difficult to find developers who have mastered CORBA.
I want to separate modules of my program to communicate with each other. They could be on the same computer, but possibly on different ones.
I was considering 2 methods:
Create a class with all the details and send it off to the communication layer, which serializes it and sends it; the other side deserializes it back into the class and handles it further.
Create a hashtable (a key/value thing), put all the data in it, and send it off to the communication layer, etc.
So it boils down to hashtable vs class.
If I think 'loosely coupled', I favor the hashtable. It's easy to update one module and include new extra params in the hashtable without updating the other side.
Then again, with a class I get compile-time type checking instead of runtime checking.
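To make the two options concrete, here's a rough sketch of what I mean (types and keys are made up):

    using System;
    using System.Collections.Generic;

    // Option 1: a typed message class - compile-time checking, but both
    // sides must agree on the class definition.
    [Serializable]
    public class OrderMessage
    {
        public string Name;
        public string PhoneNumber;
    }

    // Option 2: a loose key/value bag - either side can add keys without
    // recompiling the other, but a typo only fails at runtime.
    var order = new Dictionary<string, object>
    {
        ["NAME"] = "Ann",
        ["PHONE-NUMBER"] = "555-0100"
    };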
Has anyone tackled this previously and has suggestions about this?
Thanks!
edit:
I've awarded points to the answer which was most relevant to my original question, although it isn't the one which was upvoted the most
It sounds like you simply want to incorporate some IPC (Inter-Process Communication) into your system.
The best way of accomplishing this in .NET (3.0 onwards) is with the Windows Communication Foundation (WCF) - a generic framework developed by Microsoft for communication between programs in various different manners (transports) on a common basis.
Although I suspect you will probably want to use named pipes for the purposes of efficiency and robustness, there are a number of other transports available such as TCP and HTTP (see this MSDN article), not to mention a variety of serialisation formats from binary to XML to JSON.
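A minimal named-pipe hosting sketch (the contract, service type, and address are hypothetical):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IModuleLink
    {
        [OperationContract]
        void Send(string payload);
    }

    public class ModuleLinkService : IModuleLink
    {
        public void Send(string payload) => Console.WriteLine(payload);
    }

    // Host one module's endpoint over a named pipe (same-machine IPC);
    // swap the binding for NetTcpBinding to cross machine boundaries.
    var host = new ServiceHost(typeof(ModuleLinkService));
    host.AddServiceEndpoint(typeof(IModuleLink),
        new NetNamedPipeBinding(),
        "net.pipe://localhost/modulelink");
    host.Open();

The consuming module builds a ChannelFactory<IModuleLink> against the same address.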
One tends to hit this kind of problem in distributed systems design. It surfaces in web services (the WSDL defining the parameter and return types) and in messaging systems where the message format might be XML or some other well-defined format. The problem of controlling the coupling of client and server remains in all cases.
What happens with your hash table? Suppose your request contains "NAME" and "PHONE-NUMBER", and suddenly you realise that you need to differentiate "LANDLINE-NUMBER" and "CELL-NUMBER". If you just change the hash table entries to use new values, then your server needs changing at the same time. Suppose at this point you don't just have one client and one server, but are perhaps dealing with some kind of exchange or broker systems, many clients implemented by many teams, many servers implemented by many teams. Asking all of them to upgrade to a new message format at the same time is quite an undertaking.
Hence we tend to seek back-compatible solutions such as additive change: we preserve "PHONE-NUMBER" and add the new fields. The server now tolerates messages containing either the old or the new format.
Different distribution technologies have different built-in degrees of tolerance for back-compatibility. When dealing with serialized classes, can you deal with old and new versions? When dealing with WSDL, will the message parsers tolerate additive change?
I would follow this thought process:
1) Will you have a simple relationship between client and server? For example, do you code and control both, and are you free to dictate their release cycles? If "no", then favour flexibility: use hash tables or XML.
2) Even if you are in control, look at how well your serialization framework supports versioning. It's likely that a strongly typed, serialized class interface will be easier to work with, provided you have a clear picture of what it's going to take to make a change to the interface.
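In WCF's data contract serializer, for example, additive change is reasonably well supported; a sketch of a contract tolerating both old and new senders (type and member names are made up):

    using System.Runtime.Serialization;

    [DataContract]
    public class ContactRequest : IExtensibleDataObject
    {
        [DataMember] public string Name { get; set; }

        // Old field preserved for back-compatibility.
        [DataMember] public string PhoneNumber { get; set; }

        // New fields are additive and optional, so old senders still validate.
        [DataMember(IsRequired = false)] public string LandlineNumber { get; set; }
        [DataMember(IsRequired = false)] public string CellNumber { get; set; }

        // Round-trips any fields this version doesn't know about.
        public ExtensionDataObject ExtensionData { get; set; }
    }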
You can use Sockets, Remoting, or WCF; each has pros and cons.
If performance is not crucial, you can use WCF and serialize and deserialize your classes; for maximum performance, I recommend sockets.
Whatever happened to the built-in support for Remoting?
http://msdn.microsoft.com/en-us/library/aa185916.aspx
It works over TCP/IP or IPC if you want. It's quicker than WCF, and it's pretty transparent to your code.
In our experience using WCF extensively over the last few years with various bindings, we found WCF not to be worth the hassle.
It is just too complicated to use WCF correctly, including handling errors on channels properly while retaining good performance (we gave up on high performance with WCF early on).
For authenticated client scenarios we switched to HTTP REST (without WCF) with JSON/protobuf payloads.
For high-speed non-authenticated scenarios (or at least non-kerberos authenticated scenarios) we are using zeromq and protobuf now.
We are building a system that interacts with an external system over TCP/IP using the FIX protocol. I've used WCF to communicate from client to server where I had control over both client and server, but never to an external TCP/IP-based system. Is this possible with WCF? If so, could the community provide links for me to get started and pointed in the right direction?
Unfortunately I do not have much more information that what is supplied above, as we are still in the early early planning stages. What we know is that we have an external vendor whose system will communicate with our system over TCP/IP. We would like to use this as a learning opportunity and learn WCF.
Possible? Possibly yes, but it's going to take some work.
For starters, you will need to write a custom WCF transport channel that handles the specifics of your TCP/IP-based protocol (i.e. you'll need to write all the socket-handling code and hook it into the WCF channel model). This is because the TCP channel that ships with WCF isn't meant for this kind of work; it uses a relatively proprietary and undocumented wire protocol.
I'm not familiar enough with FIX to say how complex it would be, but there are some gotchas when writing WCF channels and documentation in that area isn't great.
The second part you'll need to deal with is message encoding. To WCF, all messages are XML. That is, once a message is passed on to the WCF stack, it has to look like an XML infoset at runtime. FIX doesn't use XML (afaik), so you'll need to adapt it a bit.
There are two ways you can go around it:
The easy way: Assume the server/client will use a specific interface and format for the data, and have your channel do all the hard work of translating the FIX messages to/from that format. The simplest example would be to have your WCF code use a simple service contract with one method taking a string, and then just encapsulate the FIX message string in the XML format that satisfies the data contract serializer for that contract (a rough sketch of this follows after the list). The user code would still need to deal with decoding the FIX format later, though.
Do all the hard work in a custom WCF MessageEncoder. It's a bit more complex, but potentially cleaner and more reusable (and you could do more complex things like better streaming and so on).
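For reference, the "easy way" might reduce to a contract like this (purely illustrative; the FIX parsing still happens in user code):

    using System.ServiceModel;

    // The channel delivers each FIX message as an opaque string; decoding
    // the tag=value pairs is still left to the calling code.
    [ServiceContract]
    public interface IFixGateway
    {
        [OperationContract(IsOneWay = true)]
        void Submit(string rawFixMessage);
    }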
The big question though is whether this is worth it. What is your reasoning for wanting to use WCF for this? Leveraging the programming model? I think that's an important consideration, but also keep in mind that the abstractions that WCF provides come at a price. In particular, some aspects of WCF can be problematic if you have very real-time requirements, which I understand is common in the kind of financial environment you're looking at.
If that's the case, it may very well be that you'd be better served by skipping WCF and sticking a little closer to the metal. You'll need to do the socket work anyway, so that's something to consider there.
Hope this helps :)
I don't think it is possible; at least, it won't be easy to set up, because you don't know the communication protocol of the other end, except that it's TCP and accepts FIX tags.
Why don't you open a raw TCP socket from within the WCF application? That should do the trick in a simpler manner.
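Something along these lines (host, port, and payload are made up):

    using System.Net.Sockets;
    using System.Text;

    // Open a plain TCP connection to the FIX endpoint and write the raw
    // message bytes; no WCF involved. FIX fields are SOH (\u0001) delimited.
    using var client = new TcpClient("fix.vendor.example", 9876);
    NetworkStream stream = client.GetStream();
    byte[] msg = Encoding.ASCII.GetBytes("8=FIX.4.2\u00019=...\u0001");
    stream.Write(msg, 0, msg.Length);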
I think so. I have a system that I almost got working that was supposed to do almost exactly that (WCF over HTTP from the internet). The server provider seemed to not want to allow it, though, so you will need the right permissions on that end to make it work.
Upshot: I don't see why not.
Not really. Microsoft didn't make the TCP/IP connection handler to talk to non-WCF services; they assumed you'd write a web service to do that.
This is discussed here on SO.