OO design for a communication methodology that will change - C#

I am on a project where I will be creating a web service that will act as a "facade" to several standalone systems (via APIs) and databases. The web service will be the sole method that a separate web application will use to communicate with these external resources.
I know for a fact that the communication methodology of one of the APIs that the web service must communicate with will change at some undetermined point in the future.
I expect the web service itself to abstract the details of the change in communication methodology between the Web application and the external API. My main concern is how to design the internals of the web service. What are some prescribed ways of using OO design to create an appropriate level of abstraction such that the change in communication method can be handled cleanly? Is there a recommended design pattern?

As you described, it sounds like you are already using the facade pattern here. The web service is in fact the facade to the other services. If an API between the web service and one of the external resources changes, the key is to not let this affect the API of the web service itself. Users of the web service should not need to know the internals of how the web service communicates with the external resources.
If the web service has methods doX and doY, for example, none of the callers of doX and doY should care what is going on under the hood. So as long as you maintain the API between the clients of the web service and the web service itself, you should be set.
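For illustration, a minimal sketch of that idea (all names here are hypothetical): the volatile communication method lives behind an interface inside the facade, so swapping it never touches the facade's own contract.

public interface IExternalSystemClient
{
    string FetchData(int id);
}

// Today's transport; implemented against the current API.
public class SoapExternalSystemClient : IExternalSystemClient
{
    public string FetchData(int id) { /* call the current SOAP API here */ return "..."; }
}

// Tomorrow's transport; swap this in when the communication method changes.
public class RestExternalSystemClient : IExternalSystemClient
{
    public string FetchData(int id) { /* call the future API here */ return "..."; }
}

// The web service facade: DoX keeps its signature no matter which client is injected.
public class FacadeService
{
    private readonly IExternalSystemClient _client;
    public FacadeService(IExternalSystemClient client) { _client = client; }
    public string DoX(int id) { return _client.FetchData(id); }
}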

I've frequently faced a similar problem, where I would have a new facade (typically a Java class) and some new "middleware" that would eventually communicate with services located somewhere else.
I would have to support multiple communication mediums, including in-process and over the network (often with encryption).
My usual solution is to define a notion of a data packet, with subtypes containing specific forms of data (e.g., specific responses, specific requests), etc. The important thing is that all the packets must be serializable in some form (Java has a notion for this; I'm not sure about C++).
I then have an agent and a provider. The agent takes program-domain requests and creates packets. It hands them to a stub/skeleton pair that is responsible only for communication. On the remote side, the skeleton takes the packet and gives it to a provider. The provider translates it back into a domain object, which it then hands to the actual services. It takes the response and sends it back to the agent via the skeleton/stub pair.
The advantage of this approach is that I create several layers of abstraction. The agent and provider are focused on the domain level and its translation into packets and back. The stub/skeleton pair is responsible for marshalling and sending packets back and forth. By swapping my stub/skeleton pair with subtypes, I can have the same program communicate in different ways (e.g., embedded in the same JVM, via something like JMS, or directly via sockets).
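A rough C# rendering of that layering might look like this (all names hypothetical; in .NET, the [Serializable] attribute plays the role of Java's Serializable interface):

using System;

[Serializable] public abstract class Packet { }
[Serializable] public class RequestPacket : Packet { public string Payload; }
[Serializable] public class ResponsePacket : Packet { public string Payload; }

// The stub/skeleton role: only moves packets. Swap implementations to change
// the transport (in-process, sockets, a message queue) without touching the agent.
public interface IPacketTransport
{
    ResponsePacket Send(RequestPacket request);
}

// The agent: translates domain-level calls into packets and back.
public class Agent
{
    private readonly IPacketTransport _transport;
    public Agent(IPacketTransport transport) { _transport = transport; }

    public string DoDomainRequest(string input)
    {
        ResponsePacket response = _transport.Send(new RequestPacket { Payload = input });
        return response.Payload;
    }
}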

This shouldn't affect the service you create at all (from the user's perspective). Services are about contracts: your service provides a contract to its users; they send you a specific request and you send back a specific response. You also have a contract with this other API. If they change how they want to communicate, you can handle that internally, but as long as your contract with your users does not change, they won't notice a thing.
One way to accomplish this is not to simply pass through the exact object that you get from the "real" API. Instead, create your own object to send back in response, and translate their object into yours. That way, if the "real" API changes things on its end, you can choose how to reflect that on your end.
As the middle man, you should be set up so that your end users need to know nothing about the originating API.
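For example, a small sketch of that translation step (the types and names here are hypothetical):

// Your own response type: its shape is part of your contract, not theirs.
public class CustomerDto
{
    public int Id;
    public string Name;
}

// Hypothetical shape of the object the "real" API returns.
public class ExternalCustomer
{
    public int CustomerId;
    public string FullName;
}

public static class CustomerTranslator
{
    // If the external API changes its shape, only this mapping changes;
    // callers of the web service never see the difference.
    public static CustomerDto ToDto(ExternalCustomer external)
    {
        return new CustomerDto { Id = external.CustomerId, Name = external.FullName };
    }
}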

Related

OData wrapper around SignalR

This may seem a little crazy, but while reading about OData I've come across some articles claiming that run-of-the-mill WCF web services can be exposed as OData endpoints (via some black magic or other).
The thing is, it may be practical for me to expose a SignalR web service with an OData endpoint. Is this something that is possible with the currently available frameworks? The SignalR service is used to extract data from one of the connected clients, known as a "provider", from which multiple consuming clients can request data.
Edit -
I have a set of existing SignalR hubs, one for each type of entity (or resource, in OData terms), and each of these hubs exposes methods in a fashion similar to a repository, e.g.:
public class CustomersHub : Hub
{
    public IEnumerable<CustomerData> GetCustomers() { throw new NotImplementedException(); }
    public IEnumerable<CustomerData> GetCustomers(IEnumerable<int> ids) { throw new NotImplementedException(); }
    // With OData this method may not be necessary.
    public IEnumerable<CustomerData> FindCustomersByName(IEnumerable<string> names) { throw new NotImplementedException(); }
    // ...
}
Hopefully this shows a striking resemblance to ODataController-derived classes. In this situation, the hubs are the resources.
Something very similar to what I am asking for was implemented for WCF Data Services in the WCF Data Services Toolkit, which AFAIK isn't active anymore; plus, I'm trying to do this with SignalR.
If you ask why I am using SignalR for this type of service: the data does not reside on the server hosting the web service; certain clients connected to the web service have that data. Inside each method is a call to another client (besides the Caller), which is sent a request for the data.
From the official OData specification (Introduction):
The Open Data Protocol (OData) enables the creation of REST-based data services, which allow resources, identified using Uniform Resource Locators (URLs) and defined in a data model, to be published and edited by Web clients using simple HTTP messages. This specification defines the core semantics and the behavioral aspects of the protocol.
None of these characteristics map onto SignalR, which is a real-time, non-resource-based technology built on a variety of HTTP/HTML techniques (and hacks) for pushing information. You can of course use SignalR to implement something along the lines of what you described, but as long as it does not implement a RESTful, purely HTTP-based request/response approach (and with SignalR it would not), it is not OData by definition. Have a look at the spec and you will quickly see how it does not map onto SignalR.
UPDATE
Even after your edit, it does not make sense to me. It's as if you want some magic that lets you use walkie-talkies through the post office just because you can query its catalog.
Maybe you should distinguish your clients into consumers and producers, offer OData endpoints to the former and hubs to the latter, and do the necessary "magic" yourself.
Also, with SignalR I don't think you would have the IQueryable support needed for out-of-the-box OData plumbing.
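As a very rough sketch of that consumer/producer split (all names hypothetical; it assumes CustomerData carries an Id, and uses Web API's [Queryable] attribute, renamed [EnableQuery] in later versions): producers push snapshots in over a hub, while consumers query a plain request/response endpoint over the cached data.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using Microsoft.AspNet.SignalR;

// Hypothetical shape of the entity from the hubs above.
public class CustomerData { public int Id; public string Name; }

public static class CustomerCache
{
    // Latest snapshot pushed by producer clients, keyed by Id.
    public static readonly ConcurrentDictionary<int, CustomerData> Items =
        new ConcurrentDictionary<int, CustomerData>();
}

// Producers connect here and publish their data.
public class ProducersHub : Hub
{
    public void PublishCustomers(IEnumerable<CustomerData> customers)
    {
        foreach (var c in customers)
            CustomerCache.Items[c.Id] = c;
    }
}

// Consumers get a queryable, request/response endpoint instead of a hub.
public class CustomersController : ApiController
{
    [Queryable] // [EnableQuery] in later Web API versions
    public IQueryable<CustomerData> Get()
    {
        return CustomerCache.Items.Values.AsQueryable();
    }
}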

Creating a SOAP API in ASP.NET that sends out data to a separate instance of the same application

I apologize for my rather vague question, but I am at a bit of a loss. The MSDN documentation for this API functionality is confusing and convoluted, to say the least, and I have been tasked with figuring out a way to get the functionality mentioned in the title.
Basically, my intent is to create an API using SOAP that sends out data to a separate instance of the same application, with the intent of synchronizing their databases. It is important that this data be sent by the primary database, and not requested by the receiving end, as it could create a security hole in the architecture if our primary database were that open.
I don't want to simply ask for code, but a very lightweight example of how this could be achieved on both ends would be extremely helpful. I don't really have anything significant to show other than a very simple test service.
Is the problem that those other systems cannot initiate sessions due to network security? The challenge is that SOAP services are built to receive requests. In order to do what you want, the client systems will have to host a SOAP service (or some other kind of service) that yours can call. Then your service can push the synchronization data, or at least tell them to call your service back. There are lots of ways to approach this with the kind of architecture I mentioned.
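One hedged way to sketch that with WCF, which speaks SOAP over BasicHttpBinding (all names here are hypothetical): both instances expose the same contract, and the primary acts as the client and pushes.

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ChangeSet
{
    [DataMember] public List<string> Rows = new List<string>(); // simplified payload
}

// Both instances expose this same contract.
[ServiceContract]
public interface ISyncService
{
    [OperationContract]
    void ReceiveChanges(ChangeSet changes);
}

public class SyncPusher
{
    // The primary acts as the SOAP *client* and pushes changes out,
    // so nothing ever pulls from the primary database.
    public void Push(ChangeSet pending, string secondaryUrl)
    {
        var factory = new ChannelFactory<ISyncService>(
            new BasicHttpBinding(), new EndpointAddress(secondaryUrl));
        ISyncService proxy = factory.CreateChannel();
        proxy.ReceiveChanges(pending);
        ((IClientChannel)proxy).Close();
    }
}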

What is the best way to communicate between a WCF service and separate threads?

The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box with a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer, and the Windows service does all the hard-hitting action, so there needs to be communication between the two of them. After spending a while on Google and reading best practices, I decided to build the service layer using WCF and named pipes.
The client UI is the WCF client, and the Windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should. The client UI can pass data to the WCF host. But my question is: how do I make that data useful? I've got a couple of engines running in the Windows service/WCF host, but the WCF host is completely unaware of the existence of any background engines. I need the client's communication requests to be able to interact with those engines.
Does anybody have any idea of a good design pattern or methodology on how to approach facilitating communication between a WCF host and running threads?
I think that your best bet is to have some static properties or methods that can be used to exchange data between the service threads/processes and the WCF service.
Alternatively, the way we approach this is through the use of a database, where the client or WCF service queues up requests for the service to respond to, and the service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, it spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elect to use a database doesn't mean that you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
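For illustration, a sketch of that in-memory variant (names hypothetical); the concurrent collections sidestep most of the manual locking mentioned above:

using System;
using System.Collections.Concurrent;

public class ReportRequest { public Guid Id = Guid.NewGuid(); /* parameters */ }
public class ReportResult { public Guid RequestId; public bool Succeeded; }

public static class RequestBroker
{
    // The WCF service enqueues; worker threads call Pending.Take() to dequeue.
    public static readonly BlockingCollection<ReportRequest> Pending =
        new BlockingCollection<ReportRequest>();

    // Workers publish results here; the WCF service polls it on the client's behalf.
    public static readonly ConcurrentDictionary<Guid, ReportResult> Completed =
        new ConcurrentDictionary<Guid, ReportResult>();
}

// WCF side: RequestBroker.Pending.Add(request); return request.Id as a ticket.
// Worker side: var request = RequestBroker.Pending.Take(); ... then
//              RequestBroker.Completed[request.Id] = result;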

2-way Cross Process Communication

I am working on a project where I want to have a plugin-sandbox-like system; however, I am having issues working out two-way, real-time, cross-process communication. At first I thought of WCF, since it can pass object metadata, but I soon realized that WCF's service/client model would pose an issue. Before I lay down all my ideas and questions, here is what I have planned out.
I want to have a host application that will do most of the work; let us call it host.exe. It will host the main application logic for the program, as well as the launching, executing, and killing of plugins. Plugins will be hosted via a plugin proxy that hosts them via MEF, so we will call it proxy.exe. The proxy.exe will load plugin DLLs and host them in a secluded environment that isolates faults; if a plugin fails, it kills the proxy and not the application. The host and the proxy need to communicate in real time in both directions, and because there are going to be multiple proxy hosts, it would be best to be able to pass object data.
So that is the basic idea of what I want. I was thinking of several ways to do this, the first being WCF; however, I figured that given the way WCF works, it would be difficult if not impossible for the server of the service to send the client a request/command. The next idea was to use TCP, with the host as a TCP server, and develop a messaging protocol I could use to communicate; however, that poses an issue, as I do not have the luxury of WCF metadata, and passing complex class information would be downright insane.
Throughout all my research I have come up with issue after issue; it would be much appreciated if anyone were able to suggest a solution. Thank you.
My solution for this would likely be .NET Remoting. I don't know if WCF does this the same way, but Remoting can be configured via text configuration files, and servers can be set up to remote to an object at will.
I want to warn you up front: the project I am mentioning is from quite a while ago, so this may be outdated information (WCF may do the same thing or it may not; my company has not required any WCF work from me).
I remoted my objects from the client to the server. I would run the server (actually on a separate machine); then, using TCP remoting, all the objects I wanted would be declared in that application.
Now here is the fun part: that remoted object used non-remoted delegate objects. I would initialize the (remoted) object and the server would create it. Then I would initialize another (interface-typed) object locally and attach it to the remote object.
When the remote object wanted to communicate with me, it would send serializable information to me, and I would construct that into more objects or commands, whatever was needed (possibly more remote objects).
At any rate, one server and multiple remote objects would be sent back and forth, with a CommonInterface.dll defining all the standard interface types.
This was, for all intents and purposes, a blind plugin setup: any application wanting to get information to or from my server could implement it and handle its own classes, as long as the interfaces matched (with serializable command data).
If the plugin (client) crashes, the application (server) does not have to suffer: it just wraps all communication to that plugin in a try/catch, and the remoted object has some sort of time-to-live or ping-style release mechanism.
I don't really know what your sandboxing scenario is going to look like, but this may accomplish what you are asking.
Here is a .NET Remoting chat server:
http://www.codeproject.com/KB/IP/dotnetchatapplication.aspx
This is the same type of project I built my first time with Remoting, and I evolved it into my server plugin architecture. The difference between my use and yours is that in mine the client was the main application using the server, whereas in yours the server will be the main application, allowing multiple clients to plug in.
In my opinion, you should use separate application domains and communicate with plug-ins using interfaces and real transparent-proxy object references. You do not need separate processes: you can achieve plug-in isolation through application domain isolation, since a misbehaving plug-in's domain can be unloaded without tearing down the host process.
As an alternative, you can use deprecated technologies such as .NET Remoting for custom marshaling and transparent proxy creation.
In my opinion, WCF is too heavyweight and too far from real-time processing.
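A minimal sketch of the application-domain approach (names hypothetical):

using System;

public interface IPlugin
{
    string Execute(string input);
}

// Deriving from MarshalByRefObject makes calls cross the AppDomain
// boundary through a transparent proxy instead of copying the object.
public class SamplePlugin : MarshalByRefObject, IPlugin
{
    public string Execute(string input) { return "handled: " + input; }
}

public static class PluginHost
{
    public static string RunIsolated(string input)
    {
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox");
        try
        {
            var plugin = (IPlugin)sandbox.CreateInstanceAndUnwrap(
                typeof(SamplePlugin).Assembly.FullName,
                typeof(SamplePlugin).FullName);
            return plugin.Execute(input);
        }
        finally
        {
            // A misbehaving plugin costs you its domain, not the host process.
            AppDomain.Unload(sandbox);
        }
    }
}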
Interprocess communication (IPC), which perhaps should be called cross-process communication (CPC), is a well-known MS/Windows concept.
More about it here
In the past I've used RPC and Windows named pipes (which are also used in SQL Server for transferring large data sets/results).
You can always try another method of communication: WCF, sockets, or pub/sub messaging, for example TibcoRv (which locally would bypass sockets).
I find these to be a bit of overkill, but they could be perfect for your requirement.

Disconnected Architecture With .NET

I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows service) => WCF service => WinForms client application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I design a disconnected architecture so that my WinForms application is always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards requests to the web service. In the event the web service is down, the request stays queued. The queue mechanism should check every so often to see whether the web service is alive and, when it is, forward everything it has stored up.
Alternatively, you can go directly to the web service, and only post to the queue in the event of an initial failure. However, the queue will still need to check on the web service every so often.
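A sketch of such a store-and-forward queue (names hypothetical; IEngineService stands in for the WCF proxy):

using System;
using System.Collections.Concurrent;
using System.Threading;

public interface IEngineService { void Send(Request request); } // hypothetical proxy surface
public class Request { /* payload */ }

public class OutboundQueue
{
    private readonly ConcurrentQueue<Request> _pending = new ConcurrentQueue<Request>();
    private readonly IEngineService _service;
    private readonly Timer _retryTimer;

    public OutboundQueue(IEngineService service)
    {
        _service = service;
        _retryTimer = new Timer(_ => Flush(), null, 0, 30000); // retry every 30 s
    }

    public void Enqueue(Request request) { _pending.Enqueue(request); }

    private void Flush()
    {
        Request request;
        while (_pending.TryPeek(out request))
        {
            try { _service.Send(request); }
            catch (Exception) { return; } // service still down; leave it queued
            _pending.TryDequeue(out request);
        }
    }
}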
EDIT:
Just to clarify: yes, all of the business logic would need to be available client-side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
- They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
- They connect to multiple back-end systems to exchange data with them.
- They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
- They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
- They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement: it really depends. Let me give you an example. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displays a unified view of the documents to the user, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, upload them to the server, and do the transforms there (applying business logic like data normalization, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules that transform the files in the client. Now let's say you had some business logic which, based on the order, determined additional rules, such as which salesman to route it to or which pricing discounts apply; those might make sense to sit on the server.
The question you need to ask is: what functionality do I need for my application to function when the server is not there? Anything which falls into this category will need to be client-side.
I've also never used ClickOnce deployment (we had to roll our own updater, which is a tale for another thread), but you should be able to send down updates pretty easily. You could also put your business logic in an assembly that you load from a URL, so while it runs client-side it can be updated easily.
You can do all your processing offline and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do data validation on both the server and the client. This way you will have a single code base that serves both.
You can use frameworks like CSLA.NET to simplify this validation process.
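For instance, a tiny sketch of the shared-code-base idea (names hypothetical): the same rules assembly is referenced by both the WinForms client and the WCF service, so validation behaves identically offline and online.

// Lives in a shared assembly referenced by client and server alike.
namespace Shared.Validation
{
    public static class OrderRules
    {
        public static bool IsValidAmount(decimal amount, out string error)
        {
            if (amount <= 0) { error = "Amount must be positive."; return false; }
            error = null;
            return true;
        }
    }
}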
