I have an architecture with the following constituents:
External Application (EA) - A third party that makes requests to the WCF Service
WCF Service (WS) - All the business logic
Pub-Sub Service (PSS) - Handles publishes and subscriptions
Internal Application (IA) - Subscribes or unsubscribes to Pub-Sub (with callbacks)
The external application (EA) references the WCF service (WS) and calls a specific method; all Internal Applications (IA) should then be notified of that call through the Pub-Sub Service (PSS).
The problem I have is deciding whether it is feasible, or best practice, to have one WCF service (WS) communicate with another WCF service (the Pub-Sub Service). I've read that this is not a good idea, given that requests are processed synchronously, and that this could cause inconsistencies in service delivery.
My specific question based on that is: can someone share the pros and cons of allowing two WCF services to talk to one another, or is this a non-issue?
Thanks
I agree with the other answers; however, since a message-oriented approach is not feasible with your available resources, I'll put it this way.
As long as one of your WCF services acts as a client (your 'WS') and the other as a server (your 'PSS'), the setup will share the pitfalls of any client-server application.
However, this assumes a couple of things:
a) Your 'WS' implements one-way operations, or "fire and forget", towards your 'EA' (a sketch follows this list). See here for reference: What You Need To Know About One-Way Calls, Callbacks, And Events. Otherwise, the 'EAs' will have to wait until your internal call to 'PSS' completes.
b) Your 'WS' channel is configured with enough resources to handle the load, because one-way operations aren't really asynchronous; if the channel can't handle the load, calls will queue up and block the client until resources are freed and execution can continue.
c) No constraints as to guaranteed, transactional or ordered delivery or any other messaging-like behavior is required.
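To make point (a) concrete, here is a minimal sketch of how a one-way operation is declared; the contract and method names are hypothetical:

using System.ServiceModel;

// A minimal sketch of a one-way ("fire and forget") operation the 'WS'
// could expose to the 'EA'.
[ServiceContract]
public interface INotificationService
{
    // IsOneWay = true: the client returns as soon as the transport accepts
    // the message, instead of waiting for the internal call to 'PSS'.
    [OperationContract(IsOneWay = true)]
    void NotifySubscribers(string eventData);
}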
But, as said before, this sort of scenario really calls for a message-based architecture. You have several points of failure, and troubleshooting this chain of dependencies will be no fun.
I totally agree with Steven. You should consider using a message queue here. Some resources are below:
http://msdn.microsoft.com/en-us/library/ms751499.aspx
http://msdn.microsoft.com/en-us/library/ms731089.aspx
http://www.codeproject.com/Articles/34168/WCF-Queued-Messaging
Hope those will suffice. Thanks.
Given a simple high-level architecture, e.g. a cloud service with a web role and a compute role, under what circumstances would we choose WCF as the communication method between the web role and the compute role, rather than Service Bus?
There is a lot of documentation and there are many examples regarding Service Bus, but I would like to understand whether there are any platform benefits to using Service Bus rather than WCF.
Given that the calls are synchronous and short, e.g. a typical API call for getting data onto the website, would you choose WCF over queuing messages and replies onto a queue?
Logically, it would appear that for a synchronous call, WCF would offer the least overhead and latency?
I don't fully understand whether the platform offers any "clever" tricks to keep Service Bus operating as quickly as a TCP connection over WCF (given the queuing overhead?), and would like to understand this further.
At the moment, if I were to pick an implementation for this type of call, I would choose WCF, which may be a little naive.
Just to be clear: the calls always return data; they are not long-running or fire-and-forget.
Thanks!
I think it depends on what specifically you want to do.
Service Bus is typically used more for what I would call constant contact type interactions. It should be more performant, but more complex to set up. It also has bi-directional communication capabilities. So you get a lot of extra flexibility out of it.
I would swap WCF for the more modern Web API. Both solve the same core problem: serving up content. I think of each as just that, an API, not necessarily a platform for message passing and handling. Service Bus and an API solve two different core problems.
I would actually solve the likely problem differently and use Azure Websites + WebJobs. It's the same sort of thing: you can bind the WebJob to an Azure queue, table, or blob and put messages on that storage mechanism, which the job picks up and does something with. I don't believe the web role should rely on content coming back from the job; instead, after completion the job can hit a SignalR hub on the Azure Web site, which pushes state back down to the affected parties.
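As a rough sketch of that queue binding (assuming the classic WebJobs SDK; the class, method, and queue names are hypothetical):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The WebJobs SDK invokes this whenever a message lands on the
    // "requests" storage queue.
    public static void ProcessQueueMessage([QueueTrigger("requests")] string message, TextWriter log)
    {
        log.WriteLine("Processing: " + message);
        // ...do the work here, then e.g. notify a SignalR hub on completion.
    }
}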
Reference Materials:
WebJobs: https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/
SignalR: http://signalr.net/
Azure Web Apps: https://azure.microsoft.com/en-us/services/app-service/web/
I am developing a WCF application under a Windows Service which exposes one endpoint. There can be about 40 remote clients connecting to this endpoint over the local area network at the same time. My question is whether WCF can handle multiple calls to the same endpoint by queuing them. No request from any client can be lost. Is there anything special I have to consider when developing the application to handle simultaneous calls?
You can choose whether the requests should be handled in parallel or synchronously, one after another.
You can set this behavior via the InstanceContextMode setting. With InstanceContextMode.PerCall, one instance of your service is created for each incoming request, which allows you to handle multiple requests in parallel.
Alternatively, you can configure your service to spin up only one instance, which ensures each request is handled after the other. This effectively is the "queuing" you mentioned. You can set this behavior via InstanceContextMode.Single. By choosing this mode, your service becomes a singleton. So this mode ensures there's only one instance of your service, which may come in handy in some cases. The framework handles the queuing.
Additionally, you could set ConcurrencyMode.Multiple, which allows your single instance to process multiple requests in parallel (see Andrew's comment).
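For reference, a sketch of how these modes are declared (IMyService is a hypothetical, empty contract):

using System.ServiceModel;

// IMyService stands in for whatever contract the service implements.
[ServiceContract]
public interface IMyService { }

// One instance per request; requests are handled in parallel.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PerCallService : IMyService { }

// One instance for all requests, dispatched one at a time: the "queuing"
// behavior described above.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class QueuedSingletonService : IMyService { }

// One instance, but requests run in parallel; shared state must then be
// made thread-safe by you.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ParallelSingletonService : IMyService { }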
However, be aware that the queued requests aren't persisted in any way, so if your service gets restarted, any unfinished requests are lost.
I'd definitely recommend avoiding any kind of singleton if possible.
Is there anything that prevents you from choosing the parallel PerCall mode?
For more details have a look at this: http://www.codeproject.com/Articles/86007/ways-to-do-WCF-instance-management-Per-call-Per
Here are some useful links:
https://msdn.microsoft.com/en-us/library/ms752260(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/hh556230(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.servicemodel.servicebehaviorattribute(v=vs.110).aspx
To answer your question: no calls will be lost, whichever you choose. But if you need to process them in order, you should probably use this setup for your service:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, EnsureOrderedDispatch = true)]
Scenario:
I want to implement an MSMQ queue into which users put messages through the System.Messaging APIs. There should be a listener that always observes this queue, so whenever there is a message in the queue I can make a database update.
My first approach was to implement an MSMQ trigger. I was able to implement a COM interop DLL trigger, but I wasn't able to do the database operation and couldn't figure out what was wrong; I tried a lot. Then I came to know about this WCF MSMQ binding. As I am new to WCF, I have some doubts.
Which is the best approach to host WCF for this case: IIS with WAS, or a Windows service?
And for this kind of listener service, is a client necessary, or can we write the database operations directly in the service host's operations without client invocation?
Then I came to know about this WCF MSMQ binding. As I am new to WCF, I have some doubts.
Well, that's valid. WCF has a fairly steep learning curve, is very config-heavy, and is not everyone's cup of tea.
However, if you're integrating to MSMQ, WCF has very good support and is rock solid in terms of the way it is implemented.
Which is the best approach to host WCF for this case: IIS with WAS, or a Windows service?
Unless you're hosting exclusively in a web environment, I would choose a Windows service every time. Using something like Topshelf, the deployment and management overhead is tiny, and there are no external dependencies.
Remember to use msmqIntegrationBinding rather than netMsmqBinding if the sender uses System.Messaging, as netMsmqBinding relies on WCF at both ends. Or, even better, use WCF on the client as well; then you can use netMsmqBinding, which supports "typed" messages.
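A sketch of what a contract for msmqIntegrationBinding might look like (the names are hypothetical; the single MsmqMessage<T> parameter and Action = "*" follow the binding's integration pattern):

using System.ServiceModel;
using System.ServiceModel.MsmqIntegration;

// The sender is a plain System.Messaging application; WCF only sits on
// the receiving side.
[ServiceContract]
public interface IQueueListener
{
    // MSMQ delivery is inherently one-way; Action = "*" accepts any message.
    [OperationContract(IsOneWay = true, Action = "*")]
    void ProcessMessage(MsmqMessage<string> message);
}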
The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box, with a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer, and the Windows service does all the hard-hitting action, so there needs to be communication between the two of them. After spending a while on Google and reading best practices, I decided to build the service layer using WCF and named pipes.
The client UI is the WCF client and the windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should. The client UI can pass data to the WCF host. But my question is, how do I make that data useful? I've got a couple engines running on the windows service/WCF host but the WCF host is completely unaware of the existence of any background engines. I need the client's communications requests to be able to interact with those engines.
Does anybody have any idea of a good design pattern or methodology on how to approach facilitating communication between a WCF host and running threads?
I think your best bet is to have some static properties or methods that can be used to exchange data between the service threads/processes and the WCF service.
Alternatively, the way we approach this is through the use of a database: the client or WCF service queues up requests for the service to respond to, and the service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
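A sketch of the service contract shape this pattern implies; all names are hypothetical, and the actual database access is omitted:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ReportResult
{
    [DataMember] public Guid TicketId { get; set; }
    [DataMember] public bool Succeeded { get; set; }
    [DataMember] public byte[] Payload { get; set; }
}

[ServiceContract]
public interface IReportService
{
    // Writes a request row to the database and returns a ticket.
    [OperationContract]
    Guid SubmitReportRequest(string reportType);

    // What the client polls on a regular basis; returns any completed,
    // not-yet-delivered results for the given tickets.
    [OperationContract]
    ReportResult[] GetCompletedReports(Guid[] ticketIds);
}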
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elect to use a database doesn't mean that you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
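A minimal sketch of such a static request collection, assuming the WCF host and the worker threads share one process; all names are hypothetical:

using System;
using System.Collections.Generic;

// The lock serializes access so WCF threads and worker threads don't collide.
public static class RequestStore
{
    private static readonly object Sync = new object();
    private static readonly Queue<Guid> Pending = new Queue<Guid>();
    private static readonly Dictionary<Guid, string> Results = new Dictionary<Guid, string>();

    // Called by the WCF service when a client submits work.
    public static Guid Enqueue()
    {
        var id = Guid.NewGuid();
        lock (Sync) { Pending.Enqueue(id); }
        return id;
    }

    // Called by a worker thread to pick up the next request, if any.
    public static bool TryDequeue(out Guid id)
    {
        lock (Sync)
        {
            if (Pending.Count > 0) { id = Pending.Dequeue(); return true; }
            id = Guid.Empty;
            return false;
        }
    }

    // Called by the worker when it finishes a request.
    public static void Complete(Guid id, string result)
    {
        lock (Sync) { Results[id] = result; }
    }

    // Called by the WCF service when the client polls for results.
    public static bool TryGetResult(Guid id, out string result)
    {
        lock (Sync) { return Results.TryGetValue(id, out result); }
    }
}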
I am on a project where I will be creating a web service that will act as a "facade" to several standalone systems (via APIs) and databases. The web service will be the sole method by which a separate web application communicates with these external resources.
I know for a fact that the communication methodology of one of the APIs that the web service must communicate with will change at some undetermined point in the future.
I expect the web service itself to abstract the details of the change in communication methodology between the Web application and the external API. My main concern is how to design the internals of the web service. What are some prescribed ways of using OO design to create an appropriate level of abstraction such that the change in communication method can be handled cleanly? Is there a recommended design pattern?
As you described, it sounds like you are already using the facade pattern here. The web service is in fact the facade to the other services. If an API between the web service and one of the external resources changes, the key is to not let this affect the API of the web service itself. Users of the web service should not need to know the internals of how it communicates with the external resources.
If the web service has methods doX and doY for example, none of the callers of doX and doY should care what is going on under the hood. So as long as you maintain the API between the clients of the web service and the web service, you should be set.
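As a sketch of that separation (all names hypothetical), the facade depends only on an interface, so the communication mechanism behind it can be swapped without changing the facade's API:

// The gateway abstracts the external API behind an interface.
public interface IInventoryGateway
{
    int GetStockLevel(string sku);
}

// Today's implementation wraps the current external API; when its
// communication method changes, only this class is rewritten.
public class LegacyInventoryGateway : IInventoryGateway
{
    public int GetStockLevel(string sku)
    {
        // ...call the existing external API here...
        return 0;
    }
}

// The facade the web application sees never changes.
public class WarehouseFacade
{
    private readonly IInventoryGateway _gateway;

    public WarehouseFacade(IInventoryGateway gateway)
    {
        _gateway = gateway;
    }

    // "doX" from the example above.
    public int DoX(string sku)
    {
        return _gateway.GetStockLevel(sku);
    }
}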
I've frequently faced a similar problem, where I would have a new facade (typically a Java class), and then some new "middleware" that would eventually communicate to services located somewhere else.
I would have to support multiple mediums of communication, including in-process, and via the net (often with encryption).
My usual solution is to define a notion of a data packet, with subtypes containing specific forms of data (e.g., specific responses, specific requests), etc. The important thing is that all the packets must be serializable in some form (Java has a notion for this; I'm not sure about C++).
I then have an agent and a provider. The agent takes program-domain requests and creates packets, then hands them to a stub-skeleton pair that is responsible only for communicating. The remote skeleton takes the packet and gives it to a provider. The provider translates it back into a domain object, which it then passes to the actual services. It takes the response and sends it back to the agent via the skeleton-stub pair, and so on.
The advantage of this approach is that it creates several layers of abstraction. The agent/provider are focused on the domain level and its translation into packets and back. The stub-skeleton pair is responsible for marshalling and sending packets back and forth. By swapping my stub-skeleton pair with subtypes, I can have the same program communicate in different ways (e.g., embedded in the same JVM, via something like JMS, directly via sockets, etc.).
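Rendered in C# (all names hypothetical), the core of the idea looks something like this:

using System;

// Packets are serializable so any transport can marshal them.
[Serializable]
public abstract class Packet { }

[Serializable]
public class RequestPacket : Packet
{
    public string Operation;
    public byte[] Payload;
}

[Serializable]
public class ResponsePacket : Packet
{
    public byte[] Payload;
}

// The stub-skeleton pair only moves packets; swapping implementations
// (in-process, sockets, a message queue, ...) changes the transport
// without touching the agent or provider code.
public interface IPacketTransport
{
    ResponsePacket Send(RequestPacket request);
}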
This shouldn't affect the service you create at all (from the user's perspective). Services are about contracts: your service provides a contract to its users; they send you a specific request and you send back a specific response. You also have a contract with this other API. If they change how they want to communicate, you can handle that internally, but as long as your contract with your users does not change, they won't notice a thing.
One way to accomplish this is to not simply pass through the exact object you get from the "real" API. Create your own object to send back in response, and translate their object into yours. That way, if the "real" API changes things on its end, you can choose how to reflect that on your end.
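A sketch of that translation step, with hypothetical types standing in for the "real" API's object and your own:

// Hypothetical shape of the object the "real" API returns.
public class TheirOrder
{
    public string OrderNumber;
    public decimal GrandTotal;
}

// The object you own and return to your own clients.
public class OurOrder
{
    public string Id;
    public decimal Total;
}

public static class OrderTranslator
{
    public static OurOrder ToOurOrder(TheirOrder theirs)
    {
        // If the external API renames or restructures fields, only this
        // mapping changes; callers of the web service see no difference.
        return new OurOrder
        {
            Id = theirs.OrderNumber,
            Total = theirs.GrandTotal
        };
    }
}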
As the middleman, you should be set up so that your end users need to know nothing about the originating API.