Communication between programs in .NET - c#

I want the modules of my program to be separate and to communicate with each other. They could be on the same computer, but possibly on different ones.
I was considering 2 methods:
Create a class with all the details. Send it off to the communication layer, which serializes it and sends it; the other side deserializes it back into the class and then handles it further.
Create a hashtable (key/value thing). Put all the data in it. Send it off to the communication layer, etc.
So it boils down to hashtable vs class.
If I think 'loosely coupled', I favor the hashtable. It's easy to have one module updated to include new extra params in the hashtable without updating the other side.
Then again with a class I get compile-time type checking, instead of runtime.
Has anyone tackled this previously and has suggestions about this?
Thanks!
edit:
I've awarded points to the answer which was most relevant to my original question, although it isn't the one which was upvoted the most

It sounds like you simply want to incorporate some IPC (Inter-Process Communication) into your system.
The best way of accomplishing this in .NET (3.0 onwards) is with the Windows Communication Foundation (WCF) - a generic framework developed by Microsoft for communication between programs in various different manners (transports) on a common basis.
Although I suspect you will probably want to use named pipes for the purposes of efficiency and robustness, there are a number of other transports available such as TCP and HTTP (see this MSDN article), not to mention a variety of serialisation formats from binary to XML to JSON.
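To make that concrete, here is a minimal named-pipe sketch; the contract name, address, and operation are invented for the example, and production code would need error handling and configuration:

    using System;
    using System.ServiceModel;

    // Hypothetical contract for the example; your operations will differ.
    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int a, int b);
    }

    public class Calculator : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            // Host the service over a named pipe (fast same-machine IPC).
            using (var host = new ServiceHost(typeof(Calculator),
                new Uri("net.pipe://localhost/calc")))
            {
                host.AddServiceEndpoint(typeof(ICalculator),
                    new NetNamedPipeBinding(), string.Empty);
                host.Open();

                // Client side: a typed channel over the same pipe.
                var factory = new ChannelFactory<ICalculator>(
                    new NetNamedPipeBinding(),
                    new EndpointAddress("net.pipe://localhost/calc"));
                ICalculator proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Add(2, 3)); // 5

                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }
    }

Swapping NetNamedPipeBinding for a TCP or HTTP binding changes the transport without touching the contract.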

One tends to hit this kind of problem in distributed systems design. It surfaces in web services (the WSDL defining the parameters and return types) and in messaging systems, where the formats of messages might be XML or some other well-defined format. The problem of controlling the coupling of client and server remains in all cases.
What happens with your hash table? Suppose your request contains "NAME" and "PHONE-NUMBER", and suddenly you realise that you need to differentiate "LANDLINE-NUMBER" and "CELL-NUMBER". If you just change the hash table entries to use new values, then your server needs changing at the same time. Suppose at this point you don't just have one client and one server, but are perhaps dealing with some kind of exchange or broker systems, many clients implemented by many teams, many servers implemented by many teams. Asking all of them to upgrade to a new message format at the same time is quite an undertaking.
Hence we tend to seek back-compatible solutions such as additive change: we preserve "PHONE-NUMBER" and add the new fields. The server now tolerates messages containing either the old or the new format.
Different distribution technologies have different built-in degrees of tolerance for back-compatibility. When dealing with serialized classes, can you deal with old and new versions? When dealing with WSDL, will the message parsers tolerate additive change?
I would follow this thought process:
1) Will you have a simple relationship between client and server? For example, do you code and control both, and are you free to dictate their release cycles? If "no", then favour flexibility and use hash tables or XML.
2) Even if you are in control, look at how easily your serialization framework supports versioning. It's likely that a strongly typed, serialized class interface will be easier to work with, provided you have a clear picture of what it's going to take to make a change to the interface (see the sketch below).
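To illustrate point 2) with WCF's serializer, here is a sketch of additive change using data contracts; the type and field names are invented:

    using System.Runtime.Serialization;

    // Version 1 carried only Name and PhoneNumber. Version 2 adds the
    // new fields additively: messages from old clients simply omit
    // them, and the deserializer tolerates their absence.
    [DataContract]
    public class ContactRequest
    {
        [DataMember]
        public string Name { get; set; }

        [DataMember]                      // preserved for old clients
        public string PhoneNumber { get; set; }

        [DataMember(IsRequired = false)]  // new in v2 (false is the default)
        public string LandlineNumber { get; set; }

        [DataMember(IsRequired = false)]  // new in v2
        public string CellNumber { get; set; }
    }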

You can use Sockets, Remoting, or WCF; each has pros and cons.
If performance is not crucial, you can use WCF and serialize and deserialize your classes. For maximum performance, I recommend sockets.

Whatever happened to the built-in support for Remoting?
http://msdn.microsoft.com/en-us/library/aa185916.aspx
It works over TCP/IP or IPC if you want. It's quicker than WCF, and it's pretty transparent to your code.
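For anyone who hasn't seen it, hosting a remoted object takes only a few lines; the type and port here are invented for the example:

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Tcp;

    // A remoted type must derive from MarshalByRefObject.
    public class GreetingService : MarshalByRefObject
    {
        public string Greet(string name) { return "Hello, " + name; }
    }

    class Server
    {
        static void Main()
        {
            // Expose the object over TCP; an IpcChannel works the same way.
            ChannelServices.RegisterChannel(new TcpChannel(8080), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(GreetingService), "greeter", WellKnownObjectMode.Singleton);
            Console.ReadLine(); // keep the host alive
        }
    }

    // Client (separate process):
    // var svc = (GreetingService)Activator.GetObject(
    //     typeof(GreetingService), "tcp://localhost:8080/greeter");
    // Console.WriteLine(svc.Greet("world"));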

In our experience of using WCF extensively over the last few years with various bindings, we found WCF not to be worth the hassle.
It is just too complicated to use WCF correctly, including handling errors on channels properly while retaining good performance (we gave up on high performance with WCF early on).
For authenticated client scenarios we switched to HTTP REST (without WCF) with JSON/protobuf payloads.
For high-speed non-authenticated scenarios (or at least non-Kerberos-authenticated scenarios) we are using ZeroMQ and protobuf now.

Related

How can I translate Google Protocol Buffers to plain objects

I am considering replacing a .NET WCF duplex endpoint with gRPC. Like most frameworks, WCF allows the data to just be simple contract objects, so what you use over the wire is what you can use in your processing code (if you are OK with that coupling). But with gRPC and GPB, it looks like I can't do that, and I have two options. One is to translate my existing .NET objects on both ends of the communication, which adds extra labor/complexity. The other is to use the protocol buffer messages verbatim in business code, which couples business code to transport technology.
So my question is: what is the best way to use gRPC while avoiding translation or direct use of buffers in business code?
Both can be valid options: copy or use directly.
In larger/deeper systems it's good to translate to some "internal" objects that can have more fields and morph to the system without breaking clients. Those "internal" objects could even be protobuf messages. In this case, the duplication is a feature.
In smaller/shallow systems it's easy to directly use the protocol buffers without copying. You should realize that one day you may need to convert to some other version of the protos or make some sort of POJO or similar. But it's also possible that day never comes.
So the question isn't really "Is it okay to use protocol buffers in business code?", as that generally poses few issues. The real question is, "Is it worthwhile to allow the system internals to develop separately from its API?"
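As a rough illustration of the "internal objects" option, a hand-written mapping layer might look like this; CustomerProto stands in for the class the protobuf compiler would generate, and all names are invented:

    using System;

    // Stand-in for a protoc-generated message type.
    public class CustomerProto
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    // Internal domain type: free to grow fields the wire format never sees.
    public class Customer
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public DateTime LoadedAtUtc { get; set; } // internal-only state
    }

    public static class CustomerMapper
    {
        public static Customer FromProto(CustomerProto p)
        {
            return new Customer { Id = p.Id, Name = p.Name, LoadedAtUtc = DateTime.UtcNow };
        }

        public static CustomerProto ToProto(Customer c)
        {
            return new CustomerProto { Id = c.Id, Name = c.Name };
        }
    }

The duplication is the price of letting the internals evolve independently of the wire format.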

Dispatch database data to several consumer in different format

I have a big database which contains a lot of data from a big enterprise.
We would like to be able to dispatch this data to different external applications (external meaning they are not developed by us, but are only accessible on our local network).
Consumers can be of very different kinds: accounting, reporting, tech(business), website, ...
With a big variety of formats: CSV, webservice, RSS, Excel, ...
The execution of these exports can be of two different types: scheduled (like every hour), or on demand.
There are mostly two kinds of exports: almost-real-time data (meaning we want current data) and statistical data (meaning we take a period of time into account).
I've yet to find a good approach to allow this access.
I thought about BizTalk, but I don't know the product very well, and I'm not sure it can make scheduled calls and host business logic. Does anyone have enough knowledge of BizTalk to tell me whether it can fit my needs?
If BizTalk isn't a good fit, are there any libraries which can ease the development of a custom service?
BizTalk can be made to do what you want, i.e. extract data from your database, transform it into various formats, and send it to various systems on a scheduled basis, or on demand by exposing this as a web service/WCF service (not entirely out of the box; you might need to purchase additional adapters, pipelines, etc.).
But the question here is: how database-intensive is this task? If it's large volumes of data, BizTalk is clearly not a favorite candidate, as BizTalk struggles with large data. It's good for routing (without transforming/inspecting), though, even with large data files.
SSIS, on the other hand, is good for data-intensive tasks. If your existing databases are on SQL Server, then it fits even better for your data-intensive exports/imports and transformations. But it falls short when it comes to the variety of ways you need to connect to external systems (protocols).
So, you are looking at a combination of a good ETL tool, like SSIS, as well as something good at routing like Biztalk. Neither of them clearly fit your needs on their own, in terms of scalability, volumes, connectivity, data formats, etc.
Your question can result in quite a broad implementation. You could consider using a service bus (pub/sub) along with some form of CQRS (if applicable).
My FOSS Shuttle ESB project is here: http://shuttle.codeplex.com/
It has a generic scheduler built in. You could, of course, go with any other service bus such as MassTransit, or NServiceBus.
I think you could use ASP.NET Web API. http://www.asp.net/web-api
I find it the easiest way to export different kinds of info and file formats.
It won't generate scheduled reports or files; you will need a client app or a Windows service to call it. It's similar to web services, but it can return different formats and also files.
Creating Excel files etc. you have to do manually. That's a bit of a drawback, but I like this approach because it can be easily hosted on IIS, and all the functions your clients are going to call can be in the same place and even called from JavaScript. As I see it, it's a bit more work for you, but it creates services that are really easy to consume.
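A rough sketch of such a controller (assuming Web API 2 attribute routing; the route, data, and names are invented):

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Web.Http;

    public class ExportController : ApiController
    {
        // GET api/export -> JSON or XML via content negotiation.
        public IEnumerable<string> Get()
        {
            return new[] { "alpha", "beta" }; // stand-in for real data
        }

        // GET api/export/csv -> a CSV attachment built by hand.
        [HttpGet]
        [Route("api/export/csv")]
        public HttpResponseMessage Csv()
        {
            var body = "id,name\n1,alpha\n2,beta\n";
            var response = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(body, Encoding.UTF8, "text/csv")
            };
            response.Content.Headers.ContentDisposition =
                new ContentDispositionHeaderValue("attachment") { FileName = "export.csv" };
            return response;
        }
    }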
By dispatch, I'm assuming you're looking for a pub/sub model. Take a hard look at NServiceBus's (NSB) pub/sub capabilities, http://nservicebus.com/docs/Samples/PublishSubscribe.aspx. Underneath the covers NSB makes heavy use of MSMQ, which has become a lot more stable over time.
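In the older NServiceBus handler style (signatures vary by version, and the event type here is invented), a subscriber looks roughly like this:

    using NServiceBus;

    // Hypothetical event published whenever relevant rows change.
    public class DataUpdated : IEvent
    {
        public string TableName { get; set; }
        public int RowId { get; set; }
    }

    // Each consumer subscribes independently and receives its own copy.
    public class ReportingSubscriber : IHandleMessages<DataUpdated>
    {
        public void Handle(DataUpdated message)
        {
            // Regenerate the affected report, write out a CSV, etc.
        }
    }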
If you want to venture outside of your .NET comfort zone, check out Apache Camel or Fuse's Enterprise Service Bus. Either of these tools will support what you need as well. I've used Camel in some extremely high throughput areas without any major issues.

I can't create a clear picture, why and when to use RESTful services? [duplicate]

Why and when to use RESTful services?
I know how to create a WCF webservice. But I am not able to comprehend when to use a SOAP based service and when to use a RESTful service. I read many articles on SOAP vs REST, but still, I don't have a clear picture of why and when to use RESTful services.
What are some concrete points in order to easily decide between these services?
This is a worthy question, and one for which a short answer does no justice. Forgetting about the fact that most people may be more familiar with SOAP than REST, I think there are a few key points in this:
First and foremost, I would suggest using REST wherever it fits naturally. If your main use scenarios involve reading and updating data atoms ("resources"), REST provides a more lightweight, discoverable and straightforward approach to data access. Also, building really thin clients (mobile devices, JavaScript, even shell scripts) is often easier with REST.
For example: if your data model is all about customers and your main operations involve reading the customers and writing back changes, REST will do just fine. Using the GET/POST/PUT/DELETE HTTP verbs is an excellent way to make the protocol very discoverable and easy to use, even for somebody not intimately familiar with your application.
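From a thin client's perspective, that discoverability looks something like this (a sketch with HttpClient; the URL and payload are invented):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class RestClientDemo
    {
        const string BaseUrl = "http://example.com/customers"; // illustrative

        static async Task RunAsync()
        {
            using (var client = new HttpClient())
            {
                // Read a customer resource.
                string customer = await client.GetStringAsync(BaseUrl + "/42");

                // Replace it with an updated representation.
                var body = new StringContent(
                    "{\"name\":\"Contoso\"}", Encoding.UTF8, "application/json");
                HttpResponseMessage put = await client.PutAsync(BaseUrl + "/42", body);

                // Remove it.
                await client.DeleteAsync(BaseUrl + "/42");

                Console.WriteLine(put.StatusCode);
            }
        }

        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }
    }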
This, however, brings us to the second point.
What if you need to offer a web API with querying capabilities? For example, a typical scenario might be "Get me the 5 newest customers". In this scenario, pure REST provides little in terms of API discoverability. Enter OData (www.odata.org), and you're rolling again; from this viewpoint, OData URI based query syntax adds a touch of well-known abstraction above the normal extremely simplistic, ID-based addressing of REST services.
But then, there are aspects which can be reasonably hard to represent in terms of REST. Point three: If you can't model it reasonably cleanly, consider SOA.
For example, if a common usage scenario involves transitioning customers between workflow stages (say, "new customer", "credit request received", "credit approved"), modeling such stages with REST may prove complex. Should different stages be represented just as an attribute value in an entity? Or perhaps, should the different stages be modeled as containers wherein the customers lie? If it's an attribute, do you always want to do a full PUT when updating it? Should you perhaps use a custom HTTP verb ("APPROVE http://mysite/customers/contoso HTTP/1.0")?
These are valid questions to which there are no universal answers. Everything can be modeled in REST, but at some point the abstraction breaks down so much that much of REST's human-facing benefits (discoverability, ease of understanding) are lost. Of course, technical benefits (like all the HTTP-level goodness) can still be reaped, but in most realities they're not really the critical argument anyway.
Fourth and finally, there are things which the SOA model simply does great. Perhaps the most important of these is transactions. While it's a pretty complex generic problem in the WS-* world as well, generic transactions are rarely needed and can often be replaced with reasonably simple, atomic operations.
For example, consider a scenario where you want to create an operation that allows the merging of two customers and all their purchases under one account. Of course, all this needs to happen or not happen; a typical transaction scenario. Modeling this in REST requires a nontrivial amount of effort. For a specialized scenario such as this, the easy SOA approach would be to create one operation (MergeCustomers) which implements the transaction internally.
For more advanced scenarios, the WS-* stack provides facilities not readily available in the REST world (including WS-Transaction, WS-Security and whatnot). While most APIs need none of this (or are better off implementing them in a more simple way), I don't think it's worth the effort to rewrite all that just to be 100% REST.
Look into the best of both worlds. For the vast majority of scenarios, it is completely acceptable to have the basic CRUD in REST and provide a few specialized operations in SOA.
Also, these APIs can be designed to act together. For example, what should a SOA-based MergeCustomers operation return? It might return a serialized copy of the merged customer, but in most cases I would opt for returning a URI of the REST resource that is the newly merged customer. This way, you would always have a single representation of the customer even if SOA were necessary for specialized scenarios.
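A sketch of what that hybrid might look like as a WCF-style contract (all names invented; the merge logic is elided):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        // The transactional merge happens server-side; the reply points
        // back at the REST representation of the surviving customer.
        [OperationContract]
        Uri MergeCustomers(string sourceCustomerId, string targetCustomerId);
    }

    public class CustomerService : ICustomerService
    {
        public Uri MergeCustomers(string sourceCustomerId, string targetCustomerId)
        {
            // ... merge accounts and purchases atomically (all or nothing) ...
            return new Uri("http://example.com/customers/" + targetCustomerId);
        }
    }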
The previous approach has the drawback that it requires client support for both REST and SOA. However, this is rarely a real problem (apart from the purely architectural perspective). The simplest clients usually have REST capabilities by the very definition of having an HTTP stack, and they rarely run the more complex operations.
Of course, your mileage may vary. The needs of your application (and its clients), local policies and backward compatibility requirements often seem to dominate these discussions beforehand, so the REST vs. SOA debate is rarely settled on pure technical merit.
An eternal question! SOAP vs. REST....
SOAP is great because it's industrial-strength and the services are self-describing in a machine-readable and -interpretable way; e.g., your computer can read and understand a SOAP service and create a client for it. SOAP is great because its methods and all the data it will ever pass around are described and defined in great detail. But SOAP is a bit heavyweight: it takes a lot of infrastructure to support it. SOAP is also mostly method-oriented; e.g., you define your services by means of what methods can be executed on the service.
REST is much more lightweight: it uses only the established HTTP protocol and procedures, so any device that has an HTTP stack (and what doesn't, these days?) can access a REST service. No SOAP heavy lifting needed. REST is also more resource-centric: you think about resources, collections of resources, and their properties, and how to deal with those (so you're basically back to the core functions of create, read, update, delete). REST doesn't currently have any machine-readable service description, so you're either left to try and see, or you need some documentation from the service provider to know what resources you have at hand.
My personal bottom line: if you want to expose some collections of data, and reach (being able to access it from any device) and ease-of-use is more important than reliability and other enterprise-grade features, then REST is the way to go. If you need serious, business-to-business, well-defined, well-documented services that implement things like reliability, transaction support etc. - then you're definitely on the right track with SOAP.
But I really don't see a REST exclusive or SOAP kind of approach - they'll be great scenarios for both.
SOAP will give you a richer set of functionality, and you can create strongly typed services, whereas REST is normally more lightweight and easier to get running.
Edit: If you look at the WS-* standards, there is a lot of stuff: http://www.soaspecs.com/ws.php. Development tools such as Visual Studio make access to a lot of this very easy, though. Typically, WS-* is used more for enterprise SOA-type development, while REST is used for public APIs on web sites (at least that's what I think).
Android (the mobile OS) does not have support for SOAP. RESTful services are a much better fit when mobile devices are the target.
I personally prefer RESTful services in most cases since they are more lightweight, easier to debug and integrate against (unless you use a code generator for SOAP).
One point that hasn't been mentioned is overhead. A recent REST project I worked on involved file transfers with items up to 2 GB allowed. If it had been implemented as a SOAP service, that would have meant an automatic 30%+ increase in data due to encoding. The overhead of a REST service is all in the headers; the rest is straight data.

Cross language C# and Java development

Can you give me some advice on how to best ensure that two applications (one in C#, the other in Java) will be compatible and efficient in exchanging data? Are there any gotchas that you have encountered?
The scenario is point to point, one host is a service provider and the other is a service consumer.
Have a look at protobuf data interchange format. A .NET implementation is also available.
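With the protobuf-net implementation, for instance, usage looks roughly like this (attribute-based mapping; the type is invented, and the same numbered fields would be declared in a .proto file on the Java side):

    using System.IO;
    using ProtoBuf; // protobuf-net

    [ProtoContract]
    public class Order
    {
        [ProtoMember(1)]
        public int Id { get; set; }

        [ProtoMember(2)]
        public string Product { get; set; }
    }

    static class Wire
    {
        // Produces the same wire format a Java protobuf client reads.
        public static byte[] Serialize(Order order)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, order);
                return ms.ToArray();
            }
        }

        public static Order Deserialize(byte[] data)
        {
            using (var ms = new MemoryStream(data))
            {
                return Serializer.Deserialize<Order>(ms);
            }
        }
    }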
JSON for descriptive data, and XML for general data types. If that is not efficient enough for you, you need to roll your own codecs to handle the byte ordering difference between C# and Java.
Rather than focus on a particular technology, the best advice I can give is to spend time focusing on the interface between the two (whether that be a web service, a database, or something else entirely). If it is a web service, for example, focus on creating a clear WSDL document. Interface, interface, interface. For the most part, try to ignore the specific technologies on each end, outside of some prototyping to ensure both languages support your choices.
Also, outside of major roadblocks, don't focus on efficiency. Focus on clarity. You'll likely have two teams (i.e. different people) working on either end of this interface. Making sure they use it correctly is far more important than making things just a little faster.
If you have Java as the web server, you can use JAX-WS ( https://jax-ws.dev.java.net/ ) to create web services, and WCF on the .NET side to connect to the Java web server.
You can use something like XML (which isn't always that efficient) or you need to come up with your own proprietary binary format (efficient but a lot more work). I'd start with XML and if bandwidth becomes a problem, you can always switch to a proprietary binary format.
Something like SOAP (Wikipedia) is supported by both C# and Java.
We use C#/VB.Net for our Web interfaces and Java for our thick client. We use XML and webservices to communicate between the database and application servers. It works very well.
Make sure that you use a well defined protocol in order to communicate the data, and write tests in order to ensure that the applications responds according to contract.
This is such a broad question but I'd recommend focusing on standards that apply to both platforms; XML or some other standard form of serialization, using REST for services if they need to interoperate.
If you use XML, you can actually externalize your data access as XPath statements which can be stored in a shared resource used by both applications. That's a start.
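For example (a sketch; the expression and document are invented), the same XPath string can be evaluated by System.Xml in C# and javax.xml.xpath in Java:

    using System;
    using System.Xml;

    class XPathDemo
    {
        static void Main()
        {
            // This string could live in a resource file shared by both apps.
            const string CustomerNameXPath = "/order/customer/name";

            var doc = new XmlDocument();
            doc.LoadXml("<order><customer><name>Contoso</name></customer></order>");

            XmlNode node = doc.SelectSingleNode(CustomerNameXPath);
            Console.WriteLine(node.InnerText); // Contoso
        }
    }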

What am I missing about WCF?

I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head and with each iteration and version I thought their technologies were getting stronger and stronger and looked forward to each release.
However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML.
It just does not work out of the box and I think it should. We found all of the above issues while either testing ourselves or else when our products were out on site.
I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.
I suppose what I'm asking is:
Am I looking at WCF the wrong way?
What strengths does it have over the alternatives?
Under what circumstances should I choose to use WCF?
OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)
Some clarifications
My main pain points with WCF fall into the following areas:
While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
Size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration available is a good thing, but understanding it all, and what to use under which circumstances, can be difficult. Especially when deploying software on site with different security requirements etc. When it comes to configuration, we've had to hide lots of ours in a back-end database because security and network people on site were trying to change things in the configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
I know the world moves on. I've moved on a number of times over the (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading.
I just think there should be simpler configuration/deployment options available, easier set-up and better management for configuration (SQL config provider maybe, rather than just the web.config/app.config files).
I use WCF all the time now and I share your pain. It seems like it was grossly over-engineered, but we are going to be stuck with it for a long, long time so I'm trying to learn it.
One thing I am certain about, XML sucks. I've had nothing but problems using XML to control it and have since switched to handling everything via code.
The concerns you listed were:
Size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration available is a good thing, but understanding it all, and what to use under which circumstances, can be difficult. Especially when deploying software on site with different security requirements etc. When it comes to configuration, we've had to hide lots of ours in a back-end database because security and network people on site were trying to change things in the configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
here's my take:
(1) addressed a valid concern that customers had with ASMX. It was too wide open, with no way to easily control it. The 8K limit is easily lifted if you know where to look. I guess you can count that as a surprise, but it's more of a one-time thing: once you know about it, you can lift it and be done with it forever, if you choose.
(2) is also configurable.
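For (1) and (2), "where to look" is the binding's quotas. A sketch in code (the values are illustrative):

    using System.ServiceModel;

    static class BindingFactory
    {
        public static BasicHttpBinding Create()
        {
            var binding = new BasicHttpBinding();
            binding.MaxReceivedMessageSize = 10 * 1024 * 1024;          // default 65536 bytes
            binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;  // default 8192; the "8K limit"
            binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;          // relates to (2)
            return binding;
        }
    }

The same settings exist as attributes on the binding elements in app.config/web.config.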
(3) is known, but there are boilerplate ways to work around it. The StockTrader code, for example, demonstrates a proven pattern; you can re-use the code in your own app. I'm not sure if this is fixed in WCF for .NET 4.0; I know it was an open request.
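A simplified recovery wrapper in the spirit of that pattern (this is not the actual StockTrader code):

    using System.ServiceModel;

    public class ResilientProxy<T> where T : class
    {
        private readonly ChannelFactory<T> _factory;
        private T _channel;

        public ResilientProxy(ChannelFactory<T> factory)
        {
            _factory = factory;
        }

        public T Channel
        {
            get
            {
                var comm = _channel as ICommunicationObject;
                if (comm == null || comm.State == CommunicationState.Faulted)
                {
                    if (comm != null) comm.Abort(); // never Close() a faulted channel
                    _channel = _factory.CreateChannel();
                }
                return _channel;
            }
        }
    }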
(4) The config is a beast. This is a concern for a lot of people. The problem here is that WCF is so flexible, and the configuration of all that flexibility is exposed through XML files, which can be overwhelming. An approach that seems to work is to take it in small bites, as you need it.
(5) I don't understand.
I vastly prefer ASP.NET MVC and Web API over WCF. If I had to summarize WCF to a developer who was just being introduced to it, I would say, "WCF is a well-meaning attempt to replace over-engineered, Java EE style RPC development." Unfortunately, many of the decisions made require you to become an expert in configuring low level, unimportant items (message sizes, timeouts, uninteresting protocol elements, etc.) while abstracting absolutely critical pieces (URL design, parameter serialization, response serialization, etc.). The difference in productivity and aggravation between teams I know using WCF vs. Web API is night and day.
To come clean a little: I have always hated the core concept of .NET Remoting. I feel that developers need a thorough understanding of the resource structure of their application and how these resources are serialized. Furthermore, the use of the "POST" verb for simple data retrieval is worrisome in a read heavy application that needs to scale.
I'll address the rest of your issues after clarification. In the meantime, I can address your question on when you should choose to use WCF: always.
WCF is the replacement for the old ASMX technologies, including WSE. It is also the replacement for .NET Remoting. It is the only technology upon which high-level communications features in .NET will be based for the foreseeable future.
For example, consider Windows Azure. It was not inevitable that the new concept of "cloud computing" would have its communications aspects covered by WCF. Yet, WCF was flexible enough to be extended to cover those cases, with very little change in code.
If you're having trouble with WCF, then you'd do well to make sure Microsoft knows about it. WCF is the present and future of web service and other service-oriented development in .NET, so they've got a very strong incentive to listen to you and resolve your pain points. Either contact them directly through Connect, or ask questions here on SO (tag with WCF, please), and a lot of people will help you.
Biggest advantage of using WCF from a programmer's point of view: it separates the definition of exposed services (operations, contracts, etc.) from the protocol-specific details, unlike ASMX, where you expose a class as a web service directly in the code using attributes. To use a real example of mine: we were able to easily switch the transport protocol between web services and named pipes, whichever better suited the deployment and performance needs, without changing a line of code.
WCF is intended for SOA methodologies. Working with it professionally is a nightmare. I delivered a SOA solution using WCF as the tool and, hell, hundreds of configurations and hidden tips! My past distributed solutions using old-style Web Services and Remoting were more stable. I've spent days working out the solution for the error "The underlying connection was closed: An unexpected error occurred", which makes no sense happening for one method among four in the same contract. I'm very disappointed. It took me back to when .NET was first introduced with lots of promises and, when we got hands-on, hell, lots of problems came up!
To address the maintenance nightmare of application config, standards like UDDI and WS-Discovery exist; WS-Discovery will be supported by WCF in .NET 4.0.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Can you be more explicit? I think you are talking about service behavior configured in code.
You can easily code behavior extensions to configure what you are talking about in a config file instead of code, BUT I think that if Microsoft didn't do that, there is a good reason.
For example, consider a service with this behavior:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall, ConcurrencyMode=ConcurrencyMode.Single)]
The implementation knows that the instance is not shared between multiple threads, so it's developed differently than:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
In this case the service implementation has to take care of concurrency problems.
The implementation is coupled with the ServiceBehavior attribute, so moving this behavior into an XML file is not a good idea.
What if you could change an InstanceContextMode.PerCall service to an InstanceContextMode.Single service inside the config file? You'd break the application! (A sketch of why follows below.)
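A sketch of the coupling (the contract details are elided; the counter is just for illustration):

    using System.ServiceModel;

    // Per-call: a fresh instance per request, so plain fields are safe.
    [ServiceContract]
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class PerCallService
    {
        private int _counter; // never shared between requests

        [OperationContract]
        public void Work() { _counter++; }
    }

    // Singleton with multiple concurrency: one instance serves all
    // callers at once, so the implementation must guard shared state.
    [ServiceContract]
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class SingletonService
    {
        private readonly object _sync = new object();
        private int _counter;

        [OperationContract]
        public void Work() { lock (_sync) { _counter++; } }
    }

Flip the first service to Single in a config file and the unguarded _counter is suddenly shared across threads.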
Looking at how you mention XML and SQL, it seems you are using WCF to build a web application or an actual web service (a service on the Web, not just SOAP exchange).
It helps to think about WCF as a replacement for .NET Remoting (or DCOM, CORBA, etc.) which also happens to support web services as one of its transports. Interfaces declared in assemblies, the behavior of proxies, certain configuration options, and other aspects of the framework that look unnatural and complicated from the perspective of web apps actually do work out of the box for DCOM-style systems of distributed objects.
To answer the question: no, you are not missing anything, and using WCF for web applications is complicated because WCF is not a framework for building web applications. Probably such a framework could be built on top of it, but I would hate to see WCF itself changed to move into the web realm.
