Reuse of WCF service clients - C#

I have a WCF webservice that acts as a data provider for my ASP.NET web page.
Throughout the web page a number of calls are made to the web service via the auto-generated ServiceClient.
Currently I create a new ServiceClient and open it for each request, e.g. Get Users, Get Roles, Get Customer list, etc. Each of these creates a new ServiceClient and opens a new connection.
Can I make my ServiceClient a global or statically available class so that all functions within my ASP.NET web page can use the same client? That would seem far more efficient. Are there any issues with doing it this way? Any advice I should take into account when doing this?
What happens if I make multiple requests to a client? Presumably it is all synchronous, so it shouldn't matter whether I make 1 or 50 calls to it?
Thanks

When a session-oriented (wsHttp with security context or reliable session) or connection-oriented (net.tcp, net.pipe) binding is used, you have to handle your proxy the same way you want to handle the session. So if you share the proxy, all calls are handled in a single WCF session (by default by a single service instance). But you then have to deal with additional complexity: an unhandled service exception faults the channel, and every subsequent call from the client will throw an exception.
When a session-less HTTP binding (basicHttp, webHttp) is used, you can share your proxy or even make it static. Each call is handled separately, an exception on the service does not fault the channel, and open HTTP persistent connections are transparently reused. Because of that, there is also no big overhead in creating a new proxy / channel.
So my suggestion is: when you need several calls to your service within a single request being processed by your ASP.NET application, use the same proxy / channel. But don't share the proxy / channel across different requests.
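For illustration, a minimal sketch of that suggestion; UserServiceClient and the Get* methods are only stand-ins for whatever your generated proxy exposes:

using System;
using System.ServiceModel;

public partial class UsersPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // One proxy reused for all calls made while handling this single request.
        var client = new UserServiceClient();
        try
        {
            var users = client.GetUsers();
            var roles = client.GetRoles();
            var customers = client.GetCustomerList();
            client.Close();
            // ... bind users/roles/customers to the page ...
        }
        catch (CommunicationException)
        {
            client.Abort();   // a faulted channel must be aborted, not closed
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
        // Don't cache the proxy in a static field shared across requests when a
        // session-oriented binding is used.
    }
}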

I think using a ChannelFactory could take care of your problem. If I'm right, the ChannelFactory keeps a pool of your connections and reuses the channels. The advantage of this is that the channels don't get instantiated each time, only the first.
Read more here: ChannelFactory
To handle disposing of the channels you need some special handling, since a channel can throw an exception in Dispose. I wrote a wrapper to handle this; you can read about it here: http://blog.tomasjansson.com/2010/12/disposible-wcf-client-wrapper/
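A rough sketch of that idea, assuming a contract named IUserService and an endpoint configuration called "UserServiceEndpoint" (both names are only placeholders), with the Close/Abort handling kept in one place:

using System;
using System.ServiceModel;

public static class ServiceProxy
{
    // Building a ChannelFactory is the expensive part, so create it once and reuse it.
    private static readonly ChannelFactory<IUserService> Factory =
        new ChannelFactory<IUserService>("UserServiceEndpoint");

    public static TResult Call<TResult>(Func<IUserService, TResult> work)
    {
        var channel = Factory.CreateChannel();
        try
        {
            var result = work(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            // Close() throws on a faulted channel, so fall back to Abort().
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}

// Usage: var users = ServiceProxy.Call(svc => svc.GetUsers());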

Related

Can WCF handle simultaneous calls to the same endpoint?

I am developing a WCF application under a Windows Service which exposes one endpoint. There can be about 40 remote clients connecting to this endpoint over the local area network at the same time. My question is whether WCF can handle multiple calls to the same endpoint by queuing them; no request from any client may be lost. Is there anything special I have to consider when developing the application to handle simultaneous calls?
You can choose whether the requests are handled in parallel or synchronously, one after another.
You can set this behavior via the InstanceContextMode setting. By default WCF handles requests PerCall, which means one instance of your service is created for each incoming request. This allows multiple requests to be handled in parallel.
Alternatively you can configure your service to spin up only one instance, which ensures each request is handled one after the other. This is effectively the "queuing" you mentioned. You can set this behavior via InstanceContextMode.Single. By choosing this mode your service becomes a singleton, so there is only one instance of your service, which may come in handy in some cases. The framework handles the queuing.
Additionally you could set ConcurrencyMode.Multiple which allows your single instance to process multiple requests in parallel (see Andrew's comment).
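For example (the service and contract names here are invented), a singleton that still serves calls in parallel would be decorated roughly like this:

using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class CalculatorService : ICalculator
{
    // A single instance serves all clients, and with ConcurrencyMode.Multiple
    // the framework dispatches calls to it concurrently, so any shared state
    // in this class must be protected by locks or thread-safe collections.
    public int Add(int a, int b)
    {
        return a + b;
    }
}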
However, be aware that queued requests aren't persisted in any way, so if your service gets restarted, any requests that have not yet finished are lost.
I'd definitely recommend avoiding any kind of singleton if possible.
Is there anything that prevents you from choosing the parallel PerCall mode?
For more details have a look at this: http://www.codeproject.com/Articles/86007/ways-to-do-WCF-instance-management-Per-call-Per
Here are some useful links:
https://msdn.microsoft.com/en-us/library/ms752260(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/hh556230(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.servicemodel.servicebehaviorattribute(v=vs.110).aspx
To answer your question: no calls will be lost, whichever mode you choose. But if you need to process them in order, you should probably use this setup for your service:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, EnsureOrderedDispatch = true)]

Self connection via WCF netTCPBinding

I have a Windows application which hosts two services using netTcpBinding and also has some client dialogs.
One of the services is duplex. When I run two different instances of my software (one as server and one as client), there is no problem.
However, when I run only one instance acting as both server and client (in tandem), the duplex service does not work. The problem happens on the Subscribe() method call: only after a timeout exception is the host's Subscribe() method invoked.
Do you have any idea how to solve this?
There's not enough information in your question to provide a detailed answer, and I'm not sure, but I'll give it a try anyway.
I bet your problem lies in the reentrancy behavior. Just mark your service implementation with the following:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
This will allow incoming calls from the same endpoint while you're processing a request.
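As a sketch only - the contract and callback names are invented - the difference looks like this: with Reentrant the service can call back into the same client while Subscribe() is still running, which is exactly the one-process server-and-client scenario described above.

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(INotificationCallback))]
public interface INotificationService
{
    [OperationContract]
    void Subscribe();
}

public interface INotificationCallback
{
    [OperationContract(IsOneWay = true)]
    void OnSubscribed();
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
public class NotificationService : INotificationService
{
    public void Subscribe()
    {
        var callback = OperationContext.Current.GetCallbackChannel<INotificationCallback>();
        // With the default ConcurrencyMode.Single this outbound call can deadlock
        // when client and server run in the same process; Reentrant releases the
        // instance lock while the callback is in flight.
        callback.OnSubscribed();
    }
}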
The problem was not caused by WCF; it was caused by StreamInsight. If you are using WCF-based sinks in embedded StreamInsight scenarios, note that the sink will not be created until an event reaches your query. In my case it was not possible to connect to the sink before sending data to the source.

How many HttpClients should I create?

Originally my code created a new HttpClient in a using statement on every request. Then I read several articles about reusing HttpClient to increase performance.
Here is an excerpt from one such article:
I do not recommend creating a HttpClient inside a Using block to make a single request. When HttpClient is disposed it causes the underlying connection to be closed also. This means the next request has to re-open that connection. You should try and re-use your HttpClient instances.
http://www.bizcoder.com/httpclient-it-lives-and-it-is-glorious
It seems to me that leaving a connection open is only going to be useful if multiple requests in a row go to the same place - such as www.api1.com.
My question is, how many HttpClients should I create?
My website talks to about ten different services on the back end.
Should I create a single HttpClient for all of them to consume, or should I create a separate HttpClient per domain that I use on the back end?
Example:
If I talk to www.api1.com and www.api2.com, should I create 2 distinct HttpClients, or only a single HttpClient?
Indeed, disposing of HttpClient will not forcibly close the underlying TCP/IP connection from the connection pool. Your best-performance scenario is what you have suggested:
Keep an instance of HttpClient alive for each back-end service you need to connect to, for the lifetime of your application.
Depending on what you know about the back-end service, you may also want a client for each distinct API on that back-end service. (APIs in the same domain could be routed all over the place.)
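A minimal sketch of the "one long-lived client per back-end service" idea; the class and field names are only examples, and www.api1.com / www.api2.com are taken from the question:

using System;
using System.Net.Http;

public static class BackendClients
{
    // One HttpClient per back-end host, created once and kept for the lifetime
    // of the application so pooled connections stay open between requests.
    public static readonly HttpClient Api1 = new HttpClient
    {
        BaseAddress = new Uri("https://www.api1.com/")
    };

    public static readonly HttpClient Api2 = new HttpClient
    {
        BaseAddress = new Uri("https://www.api2.com/")
    };
}

// Usage: var response = await BackendClients.Api1.GetAsync("some/relative/path");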

Understanding ASP.NET web services

Can you please answer the following questions to enlighten me about web services?
What is the lifecycle of a web service? When does the class that represents my web service get instantiated, and when does it start running (executing)?
Is a new instance created for every WebMethod call? And what happens if there are multiple simultaneous requests for the same or different web methods?
When should I open a connection to a remote resource so that the connection is ready before any requests arrive? This connection must persist through the whole lifetime of the web service.
Thank you in advance for all answers.
Web services are nothing more than ASP.NET pages communicating over the SOAP protocol (XML over HTTP). Each method call has its own round-trip (like a page, so new instances are created by default). The ASP.NET thread pool is used for running a web service. As a web programmer you don't have a lot of control over how the thread pool is used, since it depends on many external factors (system resources, concurrent page requests...).
If by 'opening connection to remote resources' you mean database connections, those connections are pooled by ADO.NET's connection pool and are managed automatically. If your external resources are heavy, use the Singleton web service model and load the external resources in the constructor. Don't use the singleton pattern for a database connection (it has its own pooling mechanism). If you choose the Singleton pattern, you have to take care of concurrency issues for your static variables.
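If you do go the static route, a sketch along these lines (ReferenceData and LoadFromBackend are made-up placeholders for your own heavy resource) shows one simple way to keep the shared state thread-safe:

using System.Web.Services;

public class LookupService : WebService
{
    // Static state is shared by every request. The field initializer runs once,
    // before the first call that touches it, so the heavy load happens a single
    // time per application domain.
    private static readonly ReferenceData Cache = ReferenceData.LoadFromBackend();

    [WebMethod]
    public string[] GetCountries()
    {
        // Read-only access to the shared object is safe; anything that mutates
        // it would need explicit locking.
        return Cache.Countries;
    }
}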
In the end I would say that living in the managed world of programming is easier than ever; most of the time somebody has already taken care of these concerns for us.
That depends; you have two instantiation models:
"Single Call" (an instance is created for each call made to the service)
"Singleton" (an instance is created on the first call and reused as long as the process remains alive).
See answer 1. Elaboration: yes, each call gets its own instance.
I would separate that away from the actual web service class. You can use another singleton approach to achieve this functionality.
Hope this helps,

How to store WCF sessions so another application can access them

Hi, I have an application that operates like this:
Client <----> Server <----> Monitor Web Site
WCF is used for the communication, and each client has its own session on the server so that the server can make callbacks to the client.
The objective is that a user on the "Monitor Website" can do the following:
a) Look at all of the users currently online - that is, those using the client application.
b) Select a client and then perform an action on the client.
This is a training system, so the idea is that an instructor using a web terminal can select his or her target client and then make the client application do something. Or maybe they want to send a message to the client that will be displayed on the client's screen.
What I can't seem to do is store a list of all the clients in the server application that can then be retrieved by the server. If I could do this, I could access the callback object for the client and call the appropriate method.
A method on the monitoring website would look something like this:
Service.SendMessage(userhashcode, message)
The service would then somehow look up the callback that matches the hashcode and then do something like this
callback.SendMessage(message)
So far I have tried, without luck, to serialise the callbacks into a centralised DB. However, it doesn't seem possible for the service to serialise a remote object, as the callback exists on the client.
Additionally, I thought I could create a global hash table in my service, but I'm not sure how to do this and how to make it accessible application-wide.
Any help would be appreciated.
Typically, WCF services are "per-call" only, i.e. each caller gets a fresh instance of the service class; it handles the request, formats the response, sends it back and then gets disposed. So typically you don't have anything "session-like" hanging around in memory.
What you do have is not the service classes themselves but the service host - the class that acts as the host for your service classes. This is either IIS (in which case you just need to monitor IIS), or a custom app (Windows NT service, console app) that has a ServiceHost instance up and running.
I am not aware of what kind of hooks there might be to connect to and "look inside" the service host - but that's what you're really looking for, I guess.
WCF services can also be configured to be session-ful and keep a session up and running with a service class - but again, you need to turn that on explicitly. Even then, I'm not really sure you have many API hooks to get "inside" the service host and look around the current sessions.
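For what it's worth, turning sessions on explicitly is mostly a contract-level setting plus a session-capable binding (netTcpBinding, wsHttpBinding); roughly like this, with IOrderService as a placeholder name:

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    [OperationContract]
    void AddItem(string sku);
}

// One service instance per client session; the instance lives until the
// session ends, so per-client state can be kept in instance fields.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class OrderService : IOrderService
{
    private readonly List<string> _items = new List<string>();

    public void AddItem(string sku)
    {
        _items.Add(sku);
    }
}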
The question is: do you really need to? WCF exposes a gazillion performance counters, so you can monitor and record just about anything that goes on in WCF - wouldn't that be good enough for you?
Right now, WCF services aren't hosted in a particularly well-designed system - this should get better with the so-called "Dublin" server add-on, which is designed to host WCF services and WF workflows and give admins a great experience monitoring and managing them. "Dublin" is scheduled to launch shortly after .NET 4.0 becomes available (which Microsoft has promised will be before the end of calendar year 2009).
Marc
What I have done is as follows...
Created a static instance in my service that keeps a dictionary of callbacks keyed by the hashcode of each WCF connection.
When a session is created it publishes itself to a DB table which contains the hash code and additional connection information.
When a user is using the monitor web application, it can get a list of connected clients from the DB and get the hashcode for that client.
If the monitor application user wants to send a command to the client, the following happens:
The hashcode for the session is obtained from the DB.
A method is called on the service e.g. SendTextMessage(int hashcode, string message).
This method now looks up the callback to the client from the dictionary of callbacks and obtains a reference to it.
The appropriate method, in this case SendTextMessage(message), is called on the callback.
I've tested this and it works OK. I've also added functionality to keep the DB table synchronised with the actual WCF sessions and to clean up as required. A rough sketch of the approach is below.
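As an illustration only (the contract and callback names are invented, and the "hashcode" key is whatever identifier your DB row stores):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface ITrainingService
{
    [OperationContract]
    int Register();

    [OperationContract]
    void SendTextMessage(int hashcode, string message);
}

public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void SendTextMessage(string message);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class TrainingService : ITrainingService
{
    // Shared across all sessions: hashcode -> callback channel.
    private static readonly Dictionary<int, IClientCallback> Callbacks =
        new Dictionary<int, IClientCallback>();
    private static readonly object SyncRoot = new object();

    public int Register()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        int key = callback.GetHashCode();
        lock (SyncRoot)
        {
            Callbacks[key] = callback;   // the same key is also written to the DB table
        }
        return key;
    }

    // Called (indirectly) by the monitor web site with the hashcode read from the DB.
    public void SendTextMessage(int hashcode, string message)
    {
        IClientCallback callback;
        lock (SyncRoot)
        {
            Callbacks.TryGetValue(hashcode, out callback);
        }
        if (callback != null)
        {
            callback.SendTextMessage(message);
        }
    }
}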
