Should You Have a Single HTTP Client for Web App/API? - c#

I've read that the general rule of thumb is to create a single HttpClient instance to be used by an application for its life cycle, rather than creating an instance and disposing of it between requests.
HttpClient is intended to be instantiated once and reused throughout the life of an application.
The above is per Microsoft's documentation for the HTTP Client. This makes sense to me in the context of something such as a console app that fires up, executes some code, and then quits.
Is the same principle still the best practice when making HTTP requests in a Web App/API? I'm not sure whether this fits the same scenario, since the 'life cycle' of the app is indefinite as long as it is up and online.
If so, are there specific best practices for use in a Web App? I'm thinking of things such as making it a static instance (and if so, where it should live), putting it in a service class, etc.
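As a minimal sketch of the static-instance option mentioned in the question: one HttpClient shared for the app's lifetime, living in a small holder class so pages and controllers can reference it. The class name and base address here are placeholders, not anything from the original question.

using System;
using System.Net.Http;

// One HttpClient for the life of the web app; not a definitive pattern,
// just the static-instance idea sketched out.
public static class ApiClient
{
    // Created once and reused across all requests, avoiding socket
    // exhaustion from repeatedly creating and disposing clients.
    public static readonly HttpClient Instance = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/") // placeholder URL
    };
}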

Related

Persist object in C# .NET Web Service

I'm stuck on what I'm sure is a fundamental and easy-to-solve problem in WCF; I just need to be guided toward the right way.
I have a large object (actually a trained text classifier) that I need to expose through a C# .NET web service. The classifier can be loaded from disk when the service first starts, but I don't want to reload it from disk for every request (the object currently occupies about 6 GB in memory, and loading it takes a while). Instead, I want to persist the object in memory across all requests to the web service, so that it is loaded only when the service starts (rather than when the first web request triggers it).
How would I go about doing that?
Thanks for any help!
Probably the easiest way is to create your service as a singleton. This involves specifying InstanceContextMode = InstanceContextMode.Single in a ServiceBehavior attribute on your service class definition.
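A minimal sketch of that singleton approach follows. The TextClassifier type, its load method, and the model path are hypothetical stand-ins for the asker's classifier.

using System.ServiceModel;

[ServiceContract]
public interface IClassifierService
{
    [OperationContract]
    string Classify(string text);
}

// InstanceContextMode.Single: one service instance for the host's lifetime.
// ConcurrencyMode.Multiple allows concurrent calls, so the classifier
// must be safe to read from multiple threads.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ClassifierService : IClassifierService
{
    private readonly TextClassifier _classifier;

    public ClassifierService()
    {
        // Runs once, when the host creates the single service instance;
        // the 6 GB model then stays in memory for the life of the host.
        _classifier = TextClassifier.LoadFromDisk(@"C:\models\classifier.bin");
    }

    public string Classify(string text)
    {
        return _classifier.Classify(text);
    }
}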
However, it is very questionable whether sending a 6 GB object over the wire with WCF is advisable. You can run into all sorts of service availability issues with this approach.
Additionally, singletons are not scalable within a host (there can be only one instance per host), although you can host multiple singleton services and load-balance the requests.
The way I've done this in past projects where I've had this problem is to self-host the WCF service inside a Windows Service.
I then set up the data storage object inside the service as a singleton that persists for the life of the service. Each WCF service call then gets the singleton whenever it needs to do something with the data.
I would avoid running in IIS simply because you don't have direct control over the service's lifetime, and therefore don't have enough control over when things are instantiated and disposed.
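Here is a minimal sketch of self-hosting, under the assumption that ClassifierService is the singleton-decorated class from the earlier sketch; the Windows Service class name is made up.

using System.ServiceModel;
using System.ServiceProcess;

// Self-hosting the WCF service inside a Windows Service, so the singleton
// lives exactly as long as the Windows Service does.
public class ClassifierHostService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Passing an instance (rather than a type) guarantees the singleton
        // and its in-memory data are created here, at service start,
        // not on the first request.
        _host = new ServiceHost(new ClassifierService());
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }
}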

Website Calling Windows Service With Callback

I have a WCF service hosted in a Windows Service. I want a website to be able to call it asynchronously, and when the work is finished, the WCF service will let the website know the result. I've looked at various ways of achieving this, but I would like some more advice on which way would be best. I've looked into using callbacks, but I've also read they can be unreliable. I've read about not doing it this way at all and instead having another interface in my service that the website can query for status. I've looked at using MSMQ, which at the moment looks like my preferred way forward, but I would like more information on how to set this up, or whether I should do it this way at all.
Does anyone have any advice please?
The nature of any communication on a network is unreliable. As for the statement:
I've looked into using callbacks but also read they can be unreliable
Assuming you mean WCF callbacks, they are as unreliable as the clients/servers themselves; they all use the same mechanism.
That said, you can store the client of your WCF service in the HttpApplicationState (if the call is application-wide) or HttpSessionState (if the call is local to a session).
When generating the proxy, make sure that you check the option (or specify on the contract) that you are using asynchronous calls.
Then, you would make the call, using a callback (delegate) to indicate when the async call completed.
When the call completes, you would then store the result in the session state.
If this is something that a client on the front end needs to be aware of, then the browser will have to poll your site, checking for the return result, and redirect to a page that can display the results once the result is populated.
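A minimal sketch of that flow, using application state. "WorkServiceClient" and its BeginDoWork/EndDoWork methods are hypothetical, assuming the proxy was generated with asynchronous operations enabled; a real implementation would also need to consider request lifetime and locking.

using System;
using System.Web;

public partial class StartWorkPage : System.Web.UI.Page
{
    protected void StartButton_Click(object sender, EventArgs e)
    {
        // Application state outlives the request, so the async callback
        // can safely write to it later.
        HttpApplicationState app = Application;
        app["WorkResult"] = null;

        var client = new WorkServiceClient(); // hypothetical generated proxy
        client.BeginDoWork(ar =>
        {
            // Runs when the async call completes; stash the result so a
            // polling page can redirect once it sees a non-null value.
            var proxy = (WorkServiceClient)ar.AsyncState;
            app["WorkResult"] = proxy.EndDoWork(ar);
            proxy.Close();
        }, client);
    }
}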
Selecting a binding for your application depends on:
the architecture of your application
your requirements
whether interoperability is required
the response time of the application
the time available to implement
the infrastructure you are using or want to use
As your application is a web application built on a request/response model, you will not be able to use an asynchronous or MSMQ style with this architecture (or it is not advisable), because there will not be any thread listening for a delayed async response or MSMQ call.
You can make use of one-way methods and direct calls to methods. In this case, to reduce response time, you will have to devise ways to optimize your service methods and the processing they do.
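For reference, a minimal sketch of a one-way operation; "IWorkService" and its method are hypothetical.

using System.ServiceModel;

[ServiceContract]
public interface IWorkService
{
    // IsOneWay = true: the caller fires the request and returns
    // immediately; the operation cannot return a value or fault
    // back to the caller.
    [OperationContract(IsOneWay = true)]
    void StartWork(string jobId);
}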

Reuse of WCF service clients

I have a WCF webservice that acts as a data provider for my ASP.NET web page.
Throughout the web page a number of calls are made to the web service via the auto-generated ServiceClient.
Currently I create a new ServiceClient and open it for each request, i.e. Get Users, Get Roles, Get Customer list, etc. Each one of these creates a new ServiceClient and opens a new connection.
Can I make my ServiceClient a global or statically available class so that all functions within my ASP.NET web page can use the same client? This would seem far more efficient. Are there any issues with doing it this way? Any advice I should take into account when doing this?
What happens if I make multiple requests through one client? Presumably it is all synchronous, so it shouldn't matter if I make 1 or 50 calls to it?
Thanks
When a session-oriented (wsHttp with security context or reliable session) or connection-oriented (net.tcp, net.pipe) binding is used, you have to handle your proxy the same way you want to handle the session. So if you share the proxy, all calls will be handled in a single WCF session (by default handled by a single service instance). But you have to deal with additional complexity: an unhandled service exception will fault your channel, and the following call from the client will result in an exception.
When a session-less HTTP binding (basicHttp, webHttp) is used, you can share your proxy or even make it static. Each call is handled separately, an exception on the service will not fault the channel, and opened persistent HTTP connections are transparently reused. Because of this, there should be no big overhead in creating a new proxy / channel.
So my suggestion is: when you need several calls to your service within the processing of a single request in your ASP.NET application, use the same proxy / channel, but don't share a proxy / channel among different requests.
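One way to sketch that suggestion: cache the proxy in HttpContext.Items, which is scoped to a single request and never shared across requests. "DataServiceClient" stands in for the generated proxy type.

using System.Web;

public static class RequestScopedProxy
{
    private const string Key = "DataServiceClient";

    public static DataServiceClient Current
    {
        get
        {
            var items = HttpContext.Current.Items;
            var client = items[Key] as DataServiceClient;
            if (client == null)
            {
                // First use within this request: create and cache the proxy.
                client = new DataServiceClient();
                items[Key] = client;
            }
            return client;
        }
    }

    // Call this from Application_EndRequest in Global.asax.
    public static void CleanUp()
    {
        var client = HttpContext.Current.Items[Key] as DataServiceClient;
        if (client == null) return;
        try { client.Close(); }
        catch { client.Abort(); } // Close() throws on a faulted channel
    }
}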
I think using a ChannelFactory could take care of your problem. If I'm right, the ChannelFactory keeps a pool of your connections and re-uses the channels. The advantage of this is that the channels don't get instantiated each time, only the first time.
Read more here: ChannelFactory
To handle the disposing of the channels you need some special handling, since the channel can throw an exception on dispose. I wrote a wrapper to handle this; you can read about it here: http://blog.tomasjansson.com/2010/12/disposible-wcf-client-wrapper/
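A minimal sketch along those lines: the ChannelFactory (the expensive part to build) is cached and reused, a cheap channel is created per call, and a faulted channel is aborted rather than closed. "IDataService" is a hypothetical contract, and "DataServiceEndpoint" an endpoint name assumed to exist in config.

using System;
using System.ServiceModel;

public static class DataServiceCaller
{
    // Built once; creating channels from it afterwards is cheap.
    private static readonly ChannelFactory<IDataService> Factory =
        new ChannelFactory<IDataService>("DataServiceEndpoint");

    public static TResult Call<TResult>(Func<IDataService, TResult> action)
    {
        IDataService channel = Factory.CreateChannel();
        try
        {
            TResult result = action(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            // Close() can throw on a faulted channel; Abort() cannot.
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}

// Usage: var users = DataServiceCaller.Call(svc => svc.GetUsers());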

Understanding Asp.net web service

Can you please answer the following questions to enlighten me about web services?
What is the lifecycle of a web service? When does the class that represents my web service get instantiated, and when does it start running (executing)?
Is a new instance created for every WebMethod call? And what happens if there are multiple simultaneous requests to the same or different web methods?
When should I open a connection to a remote resource, so that the connection is ready before any requests? This connection must persist through the whole lifetime of the web service.
Thank you in advance for all answers.
Web services are nothing more than ASP.NET pages communicating over the SOAP protocol (XML over HTTP). Each method has its own round-trip (like a page, so new instances are created by default). The ASP.NET thread pool is used for running a web service. As a web programmer you don't have a lot of control over how the thread pool is used, since it depends on many external factors (system resources, concurrent page requests, ...).
If by 'opening connections to remote resources' you mean database connections, those connections are also pooled by ADO.NET's connection pool and managed automatically. If your external resources are heavy, use the singleton web service model and load the external resources in the constructor. Don't use the singleton pattern on a database connection (it has its own pooling mechanism). You should take care of concurrency issues for your static variables if you choose the singleton pattern.
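A minimal sketch of sharing one heavy resource across WebMethod calls with a thread-safe, lazily initialized static; "HeavyResource" and its members are hypothetical placeholders.

using System;
using System.Web.Services;

public class MyService : WebService
{
    // Lazy<T> guarantees Load() runs exactly once, even under
    // concurrent first requests.
    private static readonly Lazy<HeavyResource> Resource =
        new Lazy<HeavyResource>(() => HeavyResource.Load());

    [WebMethod]
    public string DoWork(string input)
    {
        // Every call (and every service instance) shares the same object.
        return Resource.Value.Process(input);
    }
}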
Finally, I should say that living in the managed world of programming is easier than ever; most of the time, somebody has already taken care of our concerns.
That depends; you have two instantiation models:
"Single Call" (an instance is created for each call made to the service)
"Singleton" (an instance is created on the first call and reused as long as the process remains alive)
See answer 1. Elaboration: yes, each call gets its own instance.
I would separate that away from the actual web service class. You can use another singleton approach to achieve this functionality.
Hope this helps,

OO Design for communication methodology that will change

I am on a project where I will be creating a web service that will act as a "facade" to several standalone systems (via APIs) and databases. The web service will be the sole method by which a separate web application communicates with these external resources.
I know for a fact that the communication methodology of one of the APIs that the web service must communicate with will change at some undetermined point in the future.
I expect the web service itself to abstract the details of the change in communication methodology between the Web application and the external API. My main concern is how to design the internals of the web service. What are some prescribed ways of using OO design to create an appropriate level of abstraction such that the change in communication method can be handled cleanly? Is there a recommended design pattern?
As you described, it sounds like you are already using the facade pattern here. The web service is in fact the facade to the other services. If an API between the web service and one of the external resources changes, the key is to not let this affect the API of the web service itself. Users of the web services should not need to know the internals of how the web service communicates with the external resources.
If the web service has methods doX and doY for example, none of the callers of doX and doY should care what is going on under the hood. So as long as you maintain the API between the clients of the web service and the web service, you should be set.
I've frequently faced a similar problem, where I would have a new facade (typically a Java class) and then some new "middleware" that would eventually communicate with services located somewhere else.
I would have to support multiple communication media, including in-process and over the network (often with encryption).
My usual solution is to define a notion of a data packet, with subtypes containing specific forms of data (e.g., specific responses, specific requests), etc. The important thing is that all the packets must be serializable in some form (Java has a notion for this; I'm not sure about C++).
I then have an agent and a provider. The agent takes program-domain requests and creates packets. It passes them to a stub/skeleton pair that is responsible only for communicating. The remote end takes the packet and hands it to the provider. The provider translates it back into a domain object, which it then provides to the actual services. It takes the response and sends it back to the agent via the skeleton/stub pair, and so on.
The advantage of this approach is that it creates several layers of abstraction. The agent/provider pair is focused on the domain level and its translation into packets and back. The stub/skeleton pair is responsible for marshalling and sending packets back and forth. By swapping my stub/skeleton pair with subtypes, I can have the same program communicate in different ways (e.g., embedded in the same JVM, via something like JMS, directly via sockets, etc.).
This shouldn't affect the service you create at all (from the user's perspective). Services are about contracts: your service provides a contract to its users; they send you a specific request and you send back a specific response. You also have a contract with this other API. If they change how they want to communicate, you can handle that internally; as long as your contract with your users does not change, they won't notice a thing.
One way to accomplish this is not to simply pass through the exact object you get from the "real" API. You can create your own object to send back in response, and translate their object into yours. That way, if the "real" API changes things on their end, you can choose how to reflect that on your end.
As the middleman, you should be set up so that your end users need to know nothing about the originating API.
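A minimal sketch of that translation: the facade returns its own DTO and maps the external API's object to it internally. "ExternalApiClient", "ExternalCustomer", and their members are hypothetical stand-ins for the real third-party API.

public class CustomerDto
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class CustomerFacadeService
{
    private readonly ExternalApiClient _external = new ExternalApiClient();

    public CustomerDto GetCustomer(string id)
    {
        // If the external API changes, only this translation changes;
        // the CustomerDto contract with our users stays the same.
        ExternalCustomer raw = _external.FetchCustomer(id);
        return new CustomerDto
        {
            Id = raw.CustomerKey,
            Name = raw.FullName
        };
    }
}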