I am trying to configure a WCF service with the OracleDBBinding to get data from the Oracle db every x minutes. This polling is automated thanks to the binding configuration. This WCF service will run on a remote server.
The problem is how do I get the data that the remote WCF service obtains back to BizTalk?
Is there a particular configuration in the WCF service to enable this?
Do I just create a WCF-basicHTTP receive location in BizTalk and point the URL to the remote WCF service?
Does the remote WCF service exist with the sole purpose of polling the Oracle db and pushing to BizTalk?
I would let BizTalk poll the Oracle db directly (receive location with OracleDBBinding) and then send the data to the remote WCF service, if needed.
It would be optimal to just have BizTalk poll the Oracle DB directly, as others have mentioned. I'd push back on your infrastructure team to find a way to make that happen. However, if you truly can't, you could set up your remote WCF service to call out to a listener endpoint on the BizTalk machine. To minimize latency/communication overhead, you could use a NetTCP receive location on the BizTalk side. Set this up in the standard way from BizTalk, using the BizTalk WCF Service Publishing Wizard to create an IIS application etc. (see https://msdn.microsoft.com/en-us/library/bb728041.aspx). Your intermediate service would be a client of this service; a rough sketch of that client call follows the sequence below. The sequence would be something like:
Remote WCF service polls data from Oracle
If data is received, it connects to the NetTCP listener in BizTalk
It then sends data to BizTalk
BizTalk maps the data from there.
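To make steps 2-3 concrete, here's a rough sketch of what the intermediate service's call into BizTalk might look like. All names (contract, operation, endpoint URI) are placeholders; the real contract comes from the metadata exposed by the BizTalk-published endpoint.

// Hedged sketch only: contract name, operation and endpoint URI are placeholders.
// The real contract comes from the metadata of the BizTalk-published service.
using System.ServiceModel;

[ServiceContract]
public interface IBizTalkSubmit
{
    [OperationContract(IsOneWay = true)]
    void SubmitOracleData(string payloadXml);
}

public class BizTalkForwarder
{
    public void Forward(string payloadXml)
    {
        var binding = new NetTcpBinding(SecurityMode.Transport);
        var address = new EndpointAddress("net.tcp://biztalkserver:8001/OraclePolling/Service.svc"); // placeholder URI

        var factory = new ChannelFactory<IBizTalkSubmit>(binding, address);
        var channel = factory.CreateChannel();
        try
        {
            channel.SubmitOracleData(payloadXml);   // BizTalk publishes the message from the receive location
            ((IClientChannel)channel).Close();
            factory.Close();
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            factory.Abort();
            throw;
        }
    }
}

A one-way operation fits well here, since BizTalk will simply publish the incoming message and the intermediate service doesn't need a response.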
Note that this will not end up using MSDTC for the whole transaction - if your intermediate service throws an exception, BizTalk will have no awareness of or insight into that, nor will any BizTalk monitoring tools you might be using. (That's part of why it'd be best to just have BizTalk poll directly.)
As far as BizTalk is concerned, this data isn't coming from Oracle, it's coming from the intermediate WCF service. You could, of course, use a schema that looks like how the Oracle adapter would write the XML - but that wouldn't be required.
This is assuming your remote service is already doing the polling on its own against Oracle (using a System.Threading.Timer or something of the sort, as in the sketch below) and you're all set with that. If you need BizTalk to fire the polling event, there's nothing out of the box that can handle that. You could try using the Scheduled Task adapter (maybe to do an HTTP POST to that service?), but that seems like it'd be even more complicated and error prone to me. You could also set up a SQL polling adapter that always returns true (e.g. with a PolledDataAvailableStatement like SELECT 1) and publishes a message that another port listens to and forwards to this service - but again, that's very messy/hacky and wouldn't be a good idea.
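For completeness, a minimal sketch of that self-driven polling, assuming a System.Threading.Timer; the interval and the forwarding logic are placeholders.

// Minimal sketch, assuming the intermediate service drives its own polling with a Timer.
using System;
using System.Threading;

public class OraclePoller : IDisposable
{
    private readonly Timer _timer;

    public OraclePoller(TimeSpan interval)
    {
        // Fire immediately, then every 'interval'.
        _timer = new Timer(_ => PollOnce(), null, TimeSpan.Zero, interval);
    }

    private void PollOnce()
    {
        // 1. Query Oracle for new rows (not shown).
        // 2. If anything came back, push it to BizTalk, e.g. via the
        //    BizTalkForwarder sketch above.
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}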
If you are retrieving data from Oracle for use in a BizTalk app, then you should have BizTalk poll the data directly. Meaning, you should not use a 'remote' WCF service.
The OracleDBBinding supports polling for this exact purpose.
Related
Currently I'm working on a design for a Windows Service application to fetch reports from an Oracle database, aggregate them into a message and send it to an external WCF SOAP service.
I would be grateful for some design suggestions concerning Windows services.
Should the Windows Service use, for example, a dedicated WAS- or self-hosted WCF service (net.pipe/net.tcp) that provides the data, to achieve better separation/reusability?
So I would add a WCF service (net.pipe) that provides data (e.g. a GetReport method).
The Windows Service application would call GetReport and then call the remote SOAP service to forward the aggregated message. The remote service and its client code are likely to change, and might be adapted for different customer projects.
If I understand correctly, your windows service will periodically fetch some data from the database and upload that data to a remote web service.
This means that your windows service is a client in terms of WCF communication and you won't need to implement any WCF server code inside it.
All you'll have to do is connect to the remote web service and upload the data, e.g. using a client proxy generated for this remote service.
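For example, something along these lines - purely a sketch; the contract, binding and address below are placeholders standing in for whatever the generated proxy or the service's metadata gives you.

// Sketch only: the Windows Service acts as a plain WCF client. Contract and
// endpoint details are placeholders for the real remote SOAP service.
using System.ServiceModel;

[ServiceContract]
public interface IReportUpload
{
    [OperationContract]
    void UploadReport(string aggregatedReportXml);
}

public class ReportUploader
{
    public void Upload(string aggregatedReportXml)
    {
        var factory = new ChannelFactory<IReportUpload>(
            new BasicHttpBinding(BasicHttpSecurityMode.Transport),
            new EndpointAddress("https://example.com/ReportService.svc")); // placeholder

        var channel = factory.CreateChannel();
        try
        {
            channel.UploadReport(aggregatedReportXml);
            ((IClientChannel)channel).Close();
            factory.Close();
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            factory.Abort();
            throw;
        }
    }
}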
I don't think it is necessary to add another WCF service that provides the data (instead of querying the database directly), as long as there is no requirement that another application will use the same WCF service. Until then I wouldn't add the service, for the following reasons:
Another WCF service increases the complexity of the deployment and makes it harder to install and configure.
The connection to the new WCF service is another point that can break.
If you handle lots of data, getting it from the database directly is much more efficient than transferring it over a service protocol. As I understand your question, you aggregate the data in the Windows Service, not in the database, so you would also have to move the aggregation code to the new service.
As said before, this recommendation will change once you have another potential client for the new service. In order to prepare for that, you should of course choose a design in your Windows Service that separates concerns well and provides a good starting point for moving some components later.
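One way to prepare for that, as a sketch with placeholder names, is to hide report retrieval behind an interface so the database-backed implementation could later be swapped for (or exposed through) a WCF service without touching the aggregation/upload code.

using System;
using System.Collections.Generic;

// Placeholder DTO for a report record.
public class Report { }

// The aggregation/upload code depends only on this abstraction.
public interface IReportSource
{
    IReadOnlyList<Report> GetReports(DateTime since);
}

// Today: query Oracle directly. Later, a WcfReportSource implementing the
// same interface could be dropped in if another application needs the data.
public class OracleReportSource : IReportSource
{
    public IReadOnlyList<Report> GetReports(DateTime since)
    {
        // Run the Oracle query and map rows to Report objects (not shown).
        return new List<Report>();
    }
}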
I am creating a client application that downloads and displays market data from Yahoo! for a university project, but that also sends out notifications to mobiles (so far using Google Cloud Messaging). So far it's a WPF client and the "server" is a class library - so far working. What I was wondering is: can you mix this server with a WCF service? I was planning on using the WCF service for registering devices, as well as accepting and parsing commands.
So I would call .Start() on my server object and it would run constantly in the background, while a WCF REST service runs alongside it - or would I be better off simply having a thread running on the server that can accept input... sorry if this is confusing, but I'm just wondering if it can be done, or has been done before, or any advice. :)
Just to explain a bit better
The client front end and the "server" are running on the same machine - I was calling it a server because it is not only updating the front end, but also sending out GCM notifications at the same time. I was wondering if maybe a WCF service could be added to make it simpler to handle adding devices to a database (the "server" reads a list of device reg ids from a database and sends notifications to these) by allowing an Android app to register its details via REST or something similar.
I would explore wrapping the class library in a Windows Service (which is essentially a process that runs continuously, and can be stopped/started/paused) and keep your WCF service as a web service for client communication.
How the client-facing WCF service communicates with the Windows Service is up to you - whether you store the data in a shared database, keep it in memory and have another WCF layer communicating between the two, etc. A shared database would be the most straightforward, especially if you want to persist the data for use by other apps/services as well.
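A rough outline of that shape, with placeholder type names standing in for your existing class library and whatever WCF contract you end up defining:

using System.ServiceModel;
using System.ServiceProcess;

// Placeholder WCF contract for device registration.
[ServiceContract]
public interface IDeviceRegistration
{
    [OperationContract]
    void Register(string deviceRegistrationId);
}

public class DeviceRegistrationService : IDeviceRegistration
{
    public void Register(string deviceRegistrationId)
    {
        // Persist the registration id to the shared database (not shown).
    }
}

// The Windows Service runs the existing "server" library and the WCF host side by side.
public class MarketDataWindowsService : ServiceBase
{
    private ServiceHost _host;
    // private MarketDataServer _server;   // your existing class library "server"

    protected override void OnStart(string[] args)
    {
        // _server = new MarketDataServer();
        // _server.Start();                 // existing polling / GCM notification loop

        _host = new ServiceHost(typeof(DeviceRegistrationService));
        _host.Open();                       // endpoints/bindings taken from app.config
    }

    protected override void OnStop()
    {
        if (_host != null) _host.Close();
        // if (_server != null) _server.Stop();
    }
}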
A WCF service would be useful if you had one notification service on your server with multiple WPF client applications connecting to it. If you have just one application running on the same server, I'm not sure it will be worth the overhead.
The usual pattern is to host the WCF service in IIS; that way it starts whenever the first request is received. WCF is very flexible, though, so you can also host it in a Windows Service, a console application, etc.
I read that SignalR on Azure requires a service bus implementation (e.g. https://github.com/SignalR/SignalR/wiki/Azure-service-bus) for scalability purposes.
However, my server only makes callbacks to a single client (the caller):
// Invoke a method on the calling client
Caller.addMessage(data);
If I don't need SignalR's broadcasting functionality, is an underlying service bus still necessary?
The Service Bus dependency is not something specific to Azure. Any time you have multiple servers in play, each of your SignalR clients will have created its connection to one specific server. If you want to keep multiple servers in sync, something needs to handle the server-to-server real-time communication. The pub/sub model of Service Bus lines up with this requirement quite well.
dfowleR lists a specific case of this in the comments. Make sure you read down that far!
If you are running on a single server (without the SLA on Azure), SignalR will work just fine on a Cloud Service web role as well as the new Azure Web Sites. I did a screencast on this simple scenario; it does not take on a Service Bus dependency, but it only runs on a single server.
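For reference, a sketch of the caller-only scenario from the question, written against the old (pre-1.0) SignalR hub API that the snippet above uses; hub and method names are placeholders.

// Pre-1.0 SignalR style, matching the Caller.addMessage(data) snippet above.
using SignalR.Hubs;

public class EchoHub : Hub
{
    public void Send(string data)
    {
        // Replies go only to the connection that made the call. That reply is
        // handled by whichever server owns the connection, so a single-server
        // deployment needs no backplane. A backplane (e.g. Service Bus) comes
        // into play when messages must reach connections held by other servers.
        Caller.addMessage(data);
    }
}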
In order to support the load-balanced scenario, is it possible to establish a "server to server" SignalR PersistentConnection between multiple instances (i.e. on Azure)?
If so, we could use a SQL Azure table where all instances register at startup, so the newest can connect to the previous ones.
I am trying to write a monitoring tool that collects some information.
It will normally run on Azure, so I am going to host the database on Azure, and the web service will be hosted on Azure as well.
On the clients, I read from a config file how often they need to push updated information to the Azure database (via the web service on Azure).
Now I also want to send some commands to the client itself, like "start service", .... What is the best way to do that?
How can I send them from a website that is hosted on the Azure platform?
I think you should consider implementing a WCF service at the client as well. The Azure side of your software could call operations from this service when it needs to instruct the client to do something.
The WCF service at the client should be something simple, hosted in a Windows Service or in your actual client (whatever it is... WinForms, console, etc.).
Since you have no VPN, it sounds like you may have a problem with hosting a WCF service on the client. If the client is behind a firewall, you would have to modify the firewall configuration to allow your server to connect to this service.
Last time I had to do a service like this, I used Comet. The server maintains a queue of messages to be sent to the client. Your client connects to the web service and requests any available messages. If messages are available, the server returns them. If not, the server leaves the request open for some time. As soon as a message arrives, the server sends it down the already-open connection. The client will either periodically time out/reconnect or send a keep-alive message (perhaps once per minute) in order to keep the connection alive in the intervening firewalls.
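A minimal sketch of that client-side loop (the URL and message format are placeholders, and the server is assumed to hold the request open until a command arrives or its own timeout expires):

// Hedged sketch of a long-polling client; URL and command format are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class CommandPoller
{
    private static readonly HttpClient Http = new HttpClient
    {
        // Keep the client timeout longer than the server's hold-open window.
        Timeout = TimeSpan.FromMinutes(2)
    };

    public async Task RunAsync(string clientId)
    {
        while (true)
        {
            try
            {
                // The server leaves this request open until a command arrives
                // or its own timeout expires (then it returns an empty body).
                string command = await Http.GetStringAsync(
                    "https://yourapp.azurewebsites.net/api/commands?client=" + clientId); // placeholder URL

                if (!string.IsNullOrEmpty(command))
                    Execute(command);
            }
            catch (Exception)
            {
                // Network hiccup or timeout: back off briefly, then reconnect.
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }

    private void Execute(string command)
    {
        Console.WriteLine("Received command: " + command); // e.g. "start service"
    }
}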
I am looking to write an application that will take client data from a database, transfer it to our server application, manipulate that data and then pass it back to the client. I would like this to be as seamless and as secure as possible. Also, the manipulation part could take several hours, and the format of the data will be different for each client. To keep the application easy to maintain, the simplest solution is preferred.
What methods would people recommend for achieving this?
WCF will make your implementation easy. It looks like you want the client -> server -> client communication to be asynchronous: since the server process can take hours, you don't want to block your client that long.
You probably want to define a server-side WCF service contract that allows the clients to load data to the server. You also want a client-side WCF service contract that the server can use to send the results back when the processing is completed. Or you can have the server send a small message to the client WCF service telling it that "results are ready, come and get them when you are ready". You will need to coordinate this with some type of ID; the server tells the client to use this ID when it wants to collect the results.
Have a look for duplex, asynchronous and peer-to-peer communication topics in WCF. There should be plenty of examples if you google around.
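For instance, a duplex-style pair of contracts could look roughly like this (names are placeholders; it's only a sketch of the shape, not a full implementation):

using System;
using System.ServiceModel;

// Server-side contract: the client uploads data and gets a job ID back.
[ServiceContract(CallbackContract = typeof(IProcessingCallback))]
public interface IProcessingService
{
    [OperationContract]
    Guid SubmitData(byte[] clientData);
}

// Callback contract: hours later, the server tells the client that the results
// for that job ID are ready (or pushes them directly), correlated by the ID.
public interface IProcessingCallback
{
    [OperationContract(IsOneWay = true)]
    void ResultsReady(Guid jobId);
}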
Please take a look at the WCF framework in .NET 4.0.
1/ Back up the database in prod
2/ Download it
3/ Restore it locally
4/ Modify it
5/ Back up the local copy
6/ Upload it
7/ Restore it in prod
Download over HTTPS or SFTP, with IP restriction.
You can compress the database before downloading it, too.