We have an ASMX web service that has been around for over 10 years and is being redesigned. The changes will cascade to some of the applications calling the web service. The service is deployed internally and is not exposed externally. Most of the calling applications (85%) were developed within our division; the problem is identifying the others.
Is there any way to retrieve client information server-side within the service to track who is calling it? I am not hopeful; it appears the calling applications would need to be modified to send additional information with each call.
You could track callers by IP address: the host serving the service sees the actual IP address of each request, and your C# application can read it straight from the request, which is incredibly simple. You could then see how frequently each address calls the service, and also identify the out-of-network addresses still calling it. Your IT department could then look up those addresses to identify the clients behind them.
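As a minimal sketch (the class name and logging helper are hypothetical, and this assumes the service runs under IIS/ASP.NET), grabbing the caller's address inside an ASMX web method could look like this:

    using System;
    using System.Web;
    using System.Web.Services;

    public class LegacyService : WebService
    {
        [WebMethod]
        public string GetData()
        {
            // IP address the request arrived from (may be a proxy/NAT address)
            string callerIp = HttpContext.Current.Request.UserHostAddress;

            // The User-Agent header often reveals the calling stack too
            string userAgent = HttpContext.Current.Request.UserAgent;

            LogCaller(callerIp, userAgent); // hypothetical helper
            return "...";
        }

        private void LogCaller(string ip, string agent)
        {
            // e.g. insert into a table for later frequency analysis
        }
    }

Logging like this for a few weeks should surface every distinct caller, including the 15% you can't identify up front.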
I am creating a client application for a university project that downloads and displays market data from Yahoo!, and that also sends out notifications to mobile devices (currently using Google Cloud Messaging). At the moment it is a WPF client and the "server" is a class library, and it works so far. What I was wondering is: can you mix this server with a WCF service? I was planning to use the WCF service for registering devices, as well as for accepting and parsing commands.
So I would call .Start() on my server object and it would run constantly in the background, while a WCF REST service runs alongside it - or would I be better off simply having a thread running on the server that can accept input? Sorry if this is confusing; I'm just wondering whether it can be done, whether it has been done before, or any advice. :)
Just to explain a bit better:
The client front end and the "server" run on the same machine. I was calling it a server because it not only updates the front end but also sends out GCM notifications at the same time. I was wondering whether a WCF service could be added to make it simpler to handle adding devices to a database (the "server" reads a list of device registration IDs from a database and sends notifications to them) by allowing an Android app to submit its details via REST or something similar.
I would explore wrapping the class library in a Windows Service (which is essentially a process that runs continuously and can be stopped/started/paused) and keeping your WCF service as a web service for client communication.
How the WCF client service communicates with the Windows service is up to you - whether you store the data in a shared database, keep it in memory and have another WCF layer communicating between the two, etc. A shared database would be the most straightforward, especially if you want to persist the data for use by other apps/services as well.
A WCF service would be useful if you had one notification service on your server with multiple WPF client applications connecting to it. If you have just one application running on the same server, I am not sure it would be worth the overhead.
The usual pattern is to host the WCF service in IIS; that way it starts whenever the first request is received. WCF is very flexible, though, so you can also host it in a Windows Service, a console application, etc.
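For example, self-hosting in a Windows Service is only a small amount of code. This is a minimal sketch that assumes you already have a WCF service implementation called NotificationService with its endpoints defined in app.config:

    using System.ServiceModel;
    using System.ServiceProcess;

    public class NotificationWindowsService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            // Endpoints, bindings and addresses come from app.config
            _host = new ServiceHost(typeof(NotificationService));
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null)
            {
                _host.Close();
                _host = null;
            }
        }
    }

The Windows Service keeps your background engine running continuously, while the ServiceHost listens for client calls alongside it.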
We are building an ASP.NET web application that pushes all of its data to Salesforce and is a forms-authenticated website. To minimize the number of API calls to Salesforce and reduce the response time for the end user, we store all of the contact information in the session object when a user logs in. The problem is: when someone changes information in Salesforce, how can the ASP.NET web application find out, so it can query the updated information again and refresh the session object?
I know there is a Salesforce listener we can use to have notifications sent as outbound messages. But I am wondering how to update the currently running session object for a contact in the ASP.NET web application.
Your inputs are valuable to me.
If you do have access to the listener and can use it to push events, then I think an approach like the following would minimize events/API calls tremendously.
The remote service - SalesForce
The local service - a WCF/SOAP Kind of service
The web application - the ASP.NET app that you are referring to
The local cache - a caching system (could be filesystem, could be more elaborate)
First of all, you should look into creating a very simple local service whose sole purpose is to receive API calls from Salesforce whenever data that matters to you changes. When such a call is received, it should update the local cache with the new values. The web application should always check first whether the requested item is in the local cache; only if it is not should it make an API call to the remote service to retrieve the data. Once the data is retrieved, update the local cache and display it. From that point forward, unless the data changes (in which case Salesforce pushes the change to your local cache), you should never have to make an API call again.
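A minimal cache-aside sketch of that flow (Contact and the fetch method are stand-ins for your real Salesforce types and API client):

    using System.Collections.Concurrent;

    public class Contact
    {
        public string Id { get; set; }
        // ...other fields mirrored from Salesforce
    }

    public class ContactCache
    {
        private readonly ConcurrentDictionary<string, Contact> _cache =
            new ConcurrentDictionary<string, Contact>();

        // Called by the web application: cache first, remote API only on a miss
        public Contact GetContact(string id)
        {
            Contact contact;
            if (!_cache.TryGetValue(id, out contact))
            {
                contact = FetchContactFromSalesforce(id); // the expensive API call
                _cache[id] = contact;
            }
            return contact;
        }

        // Called by the local service when Salesforce pushes a change
        public void OnContactChanged(Contact updated)
        {
            _cache[updated.Id] = updated;
        }

        private Contact FetchContactFromSalesforce(string id)
        {
            // placeholder for the real Salesforce API client
            return new Contact { Id = id };
        }
    }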
You could even evolve this to push data when it is created in Salesforce, and to do one massive series of API calls to Salesforce once the new local service is in place and the remote service is properly configured. That gives you a solution where "the internet could die" and you would still have access to the local cache, and therefore the data.
The only challenge here is that I don't know whether Salesforce's outgoing API calls can be retried easily if they fail (in case the local service goes down, or the internet does, or Salesforce is unavailable), which you would need in order to keep eventual consistency.
If the local cache is the Session object (which I don't recommend, because it is volatile), just integrate the local service and the web application under the same umbrella (the same app).
The challenges here are:
Make sure changes (including creations and deletions) trigger the proper calls from the remote service to the local service
Make sure the local cache stays up to date - eventual consistency should be fine as long as local updates land within minutes of a change; a good design should manage it within 30 seconds when all services are operating normally
Make sure that you can push any changes back to Salesforce, if required
Don't trust the network - it will eventually fail - account for that possibility
Good luck, hope this helps
Store the values in the cache and set the expiration time of each entry low enough that, when a change is made, the update is noticed quickly enough. For you that may be a few hours, it may be less, or it could be days.
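A sketch of this using MemoryCache from System.Runtime.Caching (the key scheme and the 30-minute window are just assumptions to illustrate):

    using System;
    using System.Runtime.Caching;

    public static class ContactCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static void Store(string contactId, object contact)
        {
            var policy = new CacheItemPolicy
            {
                // Entry silently ages out, forcing a fresh Salesforce query
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(30)
            };
            Cache.Set(contactId, contact, policy);
        }

        public static object Retrieve(string contactId)
        {
            return Cache.Get(contactId); // null after expiry -> re-query Salesforce
        }
    }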
The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box, with a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer, and the Windows service does all the hard-hitting work... so there needs to be communication between the two of them. After spending a while on Google and reading best practices, I decided to build the service layer using WCF and named pipes.
The client UI is the WCF client and the windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should: the client UI can pass data to the WCF host. But my question is, how do I make that data useful? I've got a couple of engines running in the Windows service/WCF host, but the WCF host is completely unaware that any background engines exist. I need the client's requests to be able to interact with those engines.
Does anybody have a good design pattern or methodology for facilitating communication between a WCF host and running threads?
I think that your best bet is to have some static properties or methods that can be used to interchange data between the service threads/processes and the WCF service.
Alternatively, the way that we approach this is through the use of a database: the client or WCF service queues up requests for the service to respond to, and the service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, it spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
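A rough sketch of the worker's polling loop (the ReportRequests table, its columns, and GenerateReport are all illustrative assumptions):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    public void PollForRequests(string connectionString, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT TOP 1 Id, Parameters FROM ReportRequests WHERE Status = 'New'",
                    conn);
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        int id = reader.GetInt32(0);
                        string parameters = reader.GetString(1);

                        // GenerateReport is the hypothetical report builder;
                        // it updates Status to 'Complete' or 'Failed' when done
                        ThreadPool.QueueUserWorkItem(_ => GenerateReport(id, parameters));
                    }
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5)); // poll interval
        }
    }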
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elect to use a database doesn't mean that you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
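For instance, a ConcurrentQueue sidesteps most of the manual lock management (ReportRequest is an assumed DTO describing one queued request):

    using System.Collections.Concurrent;

    public class ReportRequest
    {
        public int Id { get; set; }
        public string Parameters { get; set; }
    }

    // Shared by the WCF service (producer) and the worker (consumer);
    // ConcurrentQueue handles the locking internally
    public static class RequestQueue
    {
        public static readonly ConcurrentQueue<ReportRequest> Pending =
            new ConcurrentQueue<ReportRequest>();
    }

    // WCF service side:
    //   RequestQueue.Pending.Enqueue(new ReportRequest { Id = 1, Parameters = "..." });
    //
    // Worker service side:
    //   ReportRequest next;
    //   if (RequestQueue.Pending.TryDequeue(out next)) { /* generate the report */ }

Note this only works when both sides run in the same process; if the WCF service and the worker are separate processes, you are back to a database or some other IPC channel.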
I am on a project where I will be creating a web service that acts as a "facade" to several standalone systems (via their APIs) and databases. The web service will be the sole method that a separate web application uses to communicate with these external resources.
I know for a fact that the communication methodology of one of the APIs that the web service must communicate with will change at some undetermined point in the future.
I expect the web service itself to abstract the details of the change in communication methodology between the Web application and the external API. My main concern is how to design the internals of the web service. What are some prescribed ways of using OO design to create an appropriate level of abstraction such that the change in communication method can be handled cleanly? Is there a recommended design pattern?
As you described, it sounds like you are already using the facade pattern here: the web service is in fact the facade to the other services. If an API between the web service and one of the external resources changes, the key is to not let this affect the API of the web service itself. Users of the web service should not need to know anything about how it communicates with the external resources internally.
If the web service has methods doX and doY, for example, none of the callers of doX and doY should care what is going on under the hood. So as long as you maintain the API between the clients of the web service and the web service, you should be set.
I've frequently faced a similar problem, where I would have a new facade (typically a Java class), and then some new "middleware" that would eventually communicate with services located somewhere else.
I would have to support multiple mediums of communication, including in-process, and via the net (often with encryption).
My usual solution is to define a notion of a data packet, with subtypes containing specific forms of data (e.g., specific responses, specific requests). The important thing is that all the packets must be serializable in some form (Java has a notion for this; I'm not sure about C++).
I then have an agent and a provider. The agent takes program-domain requests and creates packets. It passes them to a stub/skeleton pair that is responsible only for communicating. The remote end (the skeleton) takes the packet and gives it to a provider. The provider translates it back into a domain object, which it then provides to the actual services. It takes the response and sends it back to the agent via the skeleton/stub pair, and so on.
The advantage of this approach is that it creates several layers of abstraction. The agent/provider pair is focused on the domain level and its translation into packets and back. The stub/skeleton pair is responsible for marshalling and sending packets back and forth. By swapping my stub/skeleton pair with subtypes, I can have the same program communicate in different ways (e.g., embedded in the same JVM, via something like JMS, directly over sockets, etc.).
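Translated into C# for consistency with the rest of this page, the core of the idea is one swappable transport interface (all names here are illustrative):

    using System;

    // The .NET analogue of Java's Serializable marker
    [Serializable] public class RequestPacket  { public string Payload; }
    [Serializable] public class ResponsePacket { public string Payload; }

    // The stub/skeleton contract: transports are swappable behind one interface
    public interface IPacketTransport
    {
        ResponsePacket Send(RequestPacket request);
    }

    // In-process transport: hands the packet straight to the local provider
    public class InProcessTransport : IPacketTransport
    {
        public ResponsePacket Send(RequestPacket request)
        {
            // A real provider would translate the packet back into a domain
            // object and invoke the actual service here
            return new ResponsePacket { Payload = "handled: " + request.Payload };
        }
    }

    // A socket- or queue-based transport would implement the same interface,
    // so the agent never knows (or cares) how packets travel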
This shouldn't affect the service you create at all (from the user's perspective). Services are about contracts: your service provides a contract to its users - they send you a specific request and you send back a specific response. You also have a contract with this other API. If it changes how it wants to communicate, you can handle that internally; as long as your contract with your users does not change, they won't notice a thing.
One way to accomplish this is to not simply pass through the exact object that you get from the "real" API. Instead, create your own object to send back in response, and translate their object into yours. That way, if the "real" API changes things on its end, you can choose how to reflect that on your end.
As the middleman, you should be set up so that your end users need to know nothing about the originating API.
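A small sketch of that translation layer (CustomerDto and ExternalCustomer are illustrative names):

    // Your contract with the web application
    public class CustomerDto
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    // Stand-in for whatever type the "real" API returns
    public class ExternalCustomer
    {
        public string Id { get; set; }
        public string FullName { get; set; }
    }

    public class CustomerFacade
    {
        // Translate at the boundary: if the external API changes its type,
        // only this mapping method needs to change
        public CustomerDto Translate(ExternalCustomer raw)
        {
            return new CustomerDto { Id = raw.Id, Name = raw.FullName };
        }
    }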
Hi, I have an application that operates like this:
Client <----> Server <----> Monitor Web Site
WCF is used for the communication, and each client has its own session on the server so that callbacks can be used from the server back to the client.
The objective is that a user on the "Monitor Website" can do the following:
a) Look at all of the users currently online - that is, those using the client application.
b) Select a client and then perform an action on it.
This is a training system, so the idea is that an instructor using a web terminal can select his or her target client and then make the client application do something, or perhaps send a message to the client that will be displayed on the client's screen.
What I can't seem to do is store a list of all the clients in the server application so that the service can retrieve it later. If I could do this, I could then access the callback object for the client and call the appropriate method.
A method on the monitoring website would look something like this:
Service.SendMessage(userhashcode, message)
The service would then somehow look up the callback that matches the hash code and do something like this:
callback.SendMessage(message)
So far I have tried, without luck, to serialise the callbacks into a centralised DB. However, it doesn't seem possible for the service to serialise a remote object, as the callback exists on the client.
Additionally, I thought I could create a global hash table in my service, but I'm not sure how to do this or how to make it accessible application-wide.
Any help would be appreciated.
Typically, WCF services are "per-call" only, e.g. each caller gets a fresh instance of the service class, which handles the request, formats the response, sends it back, and is then disposed. So typically, you don't have anything session-like hanging around in memory.
What you do have is not the service classes themselves, but the service host - the class that acts as the host for your service classes. This is either IIS (in which case you just need to monitor IIS), or a custom app (Windows NT service, console app) that has a ServiceHost instance up and running.
I am not aware of what kind of hooks there might be to connect to and "look inside" the service host - but that's what you're really looking for, I guess.
WCF services can also be configured to be sessionful and keep a session up and running with a service class - but again, you need to have that turned on explicitly. Even then, I'm not really sure you have many API hooks to get "inside" the service host and have a look around the current sessions.
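For reference, opting in to sessions looks roughly like this (contract and class names are illustrative; it also requires a session-capable binding such as netTcpBinding or wsHttpBinding):

    using System.ServiceModel;

    [ServiceContract(SessionMode = SessionMode.Required)]
    public interface ITrainingService
    {
        [OperationContract]
        void SendMessage(string message);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class TrainingService : ITrainingService
    {
        // One instance of this class lives for the whole client session,
        // so per-session state can be kept in instance fields
        public void SendMessage(string message) { }
    }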
The question is: do you really need to? WCF exposes a gazillion performance counters, so you can monitor and record just about anything that goes on in WCF - wouldn't that be good enough for you?
Right now, WCF services aren't hosted in a particularly well-designed system - this should get better with the so-called "Dublin" server add-on, which is designed to host WCF services and WF workflows and to give admins a great experience monitoring and managing them. "Dublin" is scheduled to launch shortly after .NET 4.0 becomes available (which Microsoft has promised will be before the end of calendar year 2009).
Marc
What I have done is as follows...
Created a static instance in my service that keeps a dictionary of callbacks, keyed by the hash code of each WCF connection.
When a session is created, it publishes itself to a DB table that contains the hash code and additional connection information.
When a user is using the monitor web application, it can get the list of connected clients from the DB, along with the hash code for each client.
If the monitor application user wants to send a command to the client, the following happens:
The hash code for the session is obtained from the DB.
A method is called on the service e.g. SendTextMessage(int hashcode, string message).
This method looks up the callback to the client in the dictionary of callbacks and obtains a reference to it.
The appropriate method, in this case SendTextMessage(message), is called on the callback, as in the sketch below.
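A minimal sketch of that registry (IClientCallback stands in for your actual duplex callback contract; a ConcurrentDictionary avoids manual locking on the shared dictionary):

    using System.Collections.Concurrent;
    using System.ServiceModel;

    // Assumed duplex callback contract implemented by the client
    public interface IClientCallback
    {
        [OperationContract(IsOneWay = true)]
        void SendTextMessage(string message);
    }

    public static class CallbackRegistry
    {
        private static readonly ConcurrentDictionary<int, IClientCallback> Callbacks =
            new ConcurrentDictionary<int, IClientCallback>();

        // Call when a client session is created; the returned hash code is
        // what gets published to the DB table
        public static int Register()
        {
            var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
            int hash = callback.GetHashCode();
            Callbacks[hash] = callback;
            return hash;
        }

        // Backs the service's SendTextMessage(int hashcode, string message)
        public static void SendTextMessage(int hashcode, string message)
        {
            IClientCallback target;
            if (Callbacks.TryGetValue(hashcode, out target))
            {
                target.SendTextMessage(message);
            }
        }

        // Call from the session's Closed/Faulted handlers to clean up
        public static void Unregister(int hashcode)
        {
            IClientCallback removed;
            Callbacks.TryRemove(hashcode, out removed);
        }
    }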
I've tested this and it works OK. I've also added functionality to keep the DB table synchronised with the actual WCF sessions and to clean up as required.