If I open a DB connection and store it in a global variable during one call to a web service method, will a concurrent second call to that method see the same instance in that global variable? Are these resources shared, or does each call have its own?
Thanks
Global variables tend to be just that: global. If your global variable is a C# static, it will be shared by web service methods within the AppDomain. This is obviously error-prone. It is better for each web service method to obtain a new connection when needed and close it before the method finishes.
Web services usually operate over HTTP requests.
In that case you will most likely have to create the object in each call, because the service is stateless...
For services it is better to use some kind of database connection management. Usually you can adopt an open/close-per-request approach. Note that you will most likely be working with logical connections backed by a connection pool, which significantly reduces the cost of opening connections. The physical connection is created outside your direct control and is a genuinely heavyweight operation.
Do not put the connection in a shared static variable: a connection is typically a disposable resource, which means you must dispose of it, and if something goes wrong and your connection to the database is corrupted, all subsequent calls are doomed.
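To make the per-request pattern concrete, here is a minimal sketch assuming SQL Server and ADO.NET; the connection string, table, and class names are illustrative assumptions, not part of the original question. Each call opens a logical connection and disposes it before returning, leaving physical connection reuse to the pool.

```csharp
using System.Data.SqlClient;

public class OrdersRepository
{
    // Hypothetical connection string; in a real service it would come from configuration.
    private const string ConnectionString =
        "Server=.;Database=Shop;Integrated Security=true";

    public int GetOrderCount()
    {
        // A logical connection is opened per call; the pool keeps the physical
        // connection alive, so Open/Close here is cheap.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```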
I followed the link below to implement Redis Cache in a Web API.
https://learn.microsoft.com/en-us/azure/redis-cache/cache-dotnet-how-to-use-azure-redis-cache
The cache works fine the first time but fails on subsequent reads with the error:
"Cannot access a disposed object"
As described in the article above, I dispose the connection at the end of the method; invoking the method again then throws the exception above:
lazyConnection.Value.Dispose();
I also tried encapsulating the connection attributes in a different class, as described here. But because they are declared static, the same value is retained across all instances, so disposing the connection leads to the same exception on the subsequent call.
https://www.c-sharpcorner.com/article/using-redis-cache-in-web-api/
There are a couple of ways I can fix this:
Do not dispose the connection and reuse the same connection for all the calls.
Make the Cache connection non-static, so that a new connection gets created and disposed for every call.
What is the right way of doing this?
You should not create a connection on every call; that would be very inefficient.
A static connection should also be avoided if possible. It makes unit tests harder to write and prevents you from having multiple connections within the same process.
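One way to follow this advice, sketched below under the assumption of an ASP.NET Core style Web API using StackExchange.Redis: the multiplexer is registered once as a DI singleton rather than held in a static field, so it is shared by all calls but can still be swapped out in unit tests. The AddRedis name and the configuration parameter are illustrative.

```csharp
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;

public static class RedisRegistration
{
    // Register one ConnectionMultiplexer for the whole process. It is thread-safe,
    // reused by every request, and only disposed when the container shuts down.
    public static IServiceCollection AddRedis(this IServiceCollection services, string configuration)
    {
        services.AddSingleton<IConnectionMultiplexer>(
            _ => ConnectionMultiplexer.Connect(configuration));
        return services;
    }
}
```

Controllers then take an IConnectionMultiplexer in their constructor and call GetDatabase() per operation; the database handle is cheap and does not need to be disposed.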
I am new to WCF and service development and have the following question.
I want to write a service that relies on some data (from a database, for example) in order to process client requests and reply.
I do not want to hit the database for every single call. Is there a technique that lets me load such data upfront, or just once, so the service does not have to fetch it for every request?
I have read that setting InstanceContextMode to Single can be a bad idea (I'm not exactly sure why). Can somebody explain the best way to deal with this situation?
Thanks
The BCL has a Lazy<T> class that is made for this purpose. Unfortunately, in its default thread-safety mode it caches a transient exception (network issue, timeout, ...) forever. This means your service is down forever if that happens. That's unacceptable, so the stock Lazy<T> class is unusable here. Microsoft has stated that they are unwilling to change this.
The best way to deal with this is to write your own lazy or use something equivalent.
You can also use LazyInitializer; see its documentation.
I don't know how InstanceContextMode.Single behaves in case of an exception. In any case, it is architecturally unwise to put lazy resources into the service class: sharing those resources with multiple services then becomes a problem, and it is not the service class's responsibility anyway.
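As an illustration of the LazyInitializer route, here is a minimal sketch; ReferenceDataProvider, ReferenceData, and LoadFromDatabase are hypothetical names. Unlike Lazy<T> in its default mode, a failed initialization is not cached, so the next request simply tries again.

```csharp
using System.Threading;

public class ReferenceDataProvider
{
    private ReferenceData _data; // must be a reference type for this overload

    public ReferenceData GetData()
    {
        // If LoadFromDatabase throws, _data is left null and the next call retries,
        // unlike Lazy<T> in its default mode, which would cache the exception.
        // Note: under contention the factory may run more than once; only one
        // result is published.
        return LazyInitializer.EnsureInitialized(ref _data, LoadFromDatabase);
    }

    private ReferenceData LoadFromDatabase()
    {
        // Hypothetical stand-in for the real database query.
        return new ReferenceData();
    }
}

public class ReferenceData { /* cached lookup values */ }
```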
It all depends on the amount of data to load and the pattern of data usage.
Assuming your service calls are independent and may require different portions of data, you could implement some caching (using Lazy<T> or similar techniques). But this solution has one important caveat: once data is loaded into the cache it will stay there forever unless you define an expiration strategy (time-based, flush-on-write, or something else). Without a cache-entry expiration strategy, your service will consume more and more memory over time.
This may not be an important problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
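For illustration, here is a rough sketch of time-based expiration using System.Runtime.Caching.MemoryCache; the cache key, the ten-minute window, and the Customer/LoadFromDatabase names are all assumptions for the example.

```csharp
using System;
using System.Runtime.Caching;

public class CustomerLookup
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public Customer GetCustomer(int id)
    {
        string key = "customer:" + id;

        var cached = Cache.Get(key) as Customer;
        if (cached != null)
            return cached;

        Customer customer = LoadFromDatabase(id);
        // Time-based expiration keeps the cache from growing without bound.
        Cache.Set(key, customer, DateTimeOffset.UtcNow.AddMinutes(10));
        return customer;
    }

    private Customer LoadFromDatabase(int id)
    {
        // Hypothetical stand-in for the real database query.
        return new Customer();
    }
}

public class Customer { }
```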
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures a service object is created for the lifetime of a session (which stays alive while a particular WCF client remains connected), and all calls from that client are dispatched to the same service object. That may or may not be appropriate from a business-domain point of view. If it is, you can load your data from the database on the first call, and subsequent calls within the same session can reuse it. A new session (another client, or the same client after reconnecting) will have to load the data again.
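A brief sketch of what the PerSession approach might look like, assuming a session-capable binding such as netTcpBinding or wsHttpBinding; the service, its contract, and the price dictionary are hypothetical.

```csharp
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IQuoteService
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class QuoteService : IQuoteService
{
    private Dictionary<string, decimal> _prices;

    public decimal GetQuote(string symbol)
    {
        // Loaded on the first call of a session and reused by later calls from
        // the same client; a new session loads the data again.
        if (_prices == null)
            _prices = LoadPricesFromDatabase();

        return _prices[symbol];
    }

    private Dictionary<string, decimal> LoadPricesFromDatabase()
    {
        // Hypothetical stand-in for the real database query.
        return new Dictionary<string, decimal> { { "ACME", 10.5m } };
    }
}
```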
I have a UI application in which I consume a WCF service like this
public MainWindow()
{
    ....
    mServiceClient = new ServiceClient("EndPointTCP");
}
Should I create the client as a member of the class and close it when my application exits, or should I create a new client whenever it's required in a method and close it there?
It depends solely on what you want to achieve. There is no single "best way to do it", since both approaches are viable and have different tradeoffs.
Holding on to the client object keeps resources allocated and may also leak context data between calls. You might have a bug that causes mClient.Buy(100) and mClient.Sell(100) to work properly when used alone, but fail when used together as mClient.Buy(100); mClient.Sell(100). Dropping and re-creating a fresh instance each time could save you from that one bug, although that alone is obviously not a good argument for it.
Recreating the client each time a call is made does, however, have the vague benefit of... having a fresh client every time. If your app can change the endpoint dynamically at runtime, your client will automatically always use the newest addresses/logins/passwords/etc.
On the other hand, not recreating the client object every time is simply faster. Remember that this is the WCF layer, so the underlying connection could be anything. With a protocol that involves heavy setup (key exchange, encryption, etc.), creating a new client every time may open a new connection every time and slow everything down, while keeping the instance works blazingly fast because the connection can stay open and be reused. You typically keep the connection when you make many frequent calls to the service, for example monitoring a remote value twice per second, 24 hours a day, for breaches of safe limits.
Then again, you might not want the connection to linger. Your remote service may have thousands of clients and limited resources, so you may want to close the connection as soon as possible so others can connect. You would typically do that when calls to the service are rare, only once in a while, e.g. when the user clicks something after returning from a coffee break.
Please don't get me wrong: all of the above is just conjuring vague "facts" from a void. I don't know your app, your service, or your bindings (other than "EndPointTCP"). The most important factors are all on your side, in how your app and that remote service actually work and interoperate. If you care about the answer, you must first research the topic on your side. Best of all, simply try both ways and check whether they work and how they perform. The difference is something like two to six lines of code, so that's rather quick.
There are already some similar questions:
Reuse of WCF service clients
Reusing a WCF service client or creating one each time?
In my opinion it depends on your application (scalability, performance requirements, ...), but usually I think it's safer to recreate the ServiceClient each time. That way you don't need special code to handle connection problems between requests, and with the latest versions of WCF there doesn't seem to be a big performance impact.
See http://msdn.microsoft.com/en-us/library/aa738757.aspx.
Consider also that ServiceClient is not thread-safe (at least according to MSDN).
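If you go the recreate-per-call route, the usual caveat is that closing a faulted proxy can itself throw, so a close-or-abort pattern like the sketch below is commonly used. ServiceClient and the "EndPointTCP" endpoint come from the question; the helper class and the DoWork operation are hypothetical.

```csharp
using System;
using System.ServiceModel;

public static class ServiceClientHelper
{
    public static void Call(Action<ServiceClient> operation)
    {
        var client = new ServiceClient("EndPointTCP");
        try
        {
            operation(client);
            // Close completes the session gracefully; it can throw if the
            // channel has faulted, hence the catch blocks below.
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
    }
}

// Hypothetical usage: ServiceClientHelper.Call(c => c.DoWork());
```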
I have a multi-threaded application which communicates with a server over a TCP connection. The application would be deployed as a Windows service.
The way it has been implemented is that a Controller creates Communicator objects, assigns the port number, message count, and other properties to each Communicator, and invokes its StartClient method to begin the dialog with the server.
Within the StartClient method, each Communicator object creates a connection to the server using the port number and URL specified by the Controller. After establishing the connection, it internally creates a thread and calls the ReadMessages method, which keeps reading from the server until the message count is met and then shuts down.
Depending on runtime conditions, there may be a need to reuse the Communicator object to talk to the server again, in which case the ReadMessages method would be called again.
Initially we had been calling the Dispose() method on the NetworkStream, StreamReader and StreamWriter objects when the ReadMessages method completed, but in the reconnect scenario this threw a "Cannot access a disposed object" error. So we commented out the Dispose calls for testing.
As of now it works fine, but I am concerned that this isn't the best way to achieve this functionality, as I am never disposing the objects.
I was thinking in terms of object pooling: is it possible to have a pool of stream objects that could be reused by different threads?
One way to tackle this could be to create new stream instances each time the Communicator connects to the server, but I think that would be an expensive operation.
Can you please help me identify a better approach to handle the situation here so that I can reuse the Communicator object without a performance hit?
The approach depends on how frequently you need to read messages. If it's occasional, I would recommend refactoring your Communicator object to make the "ReadMessages" operation atomic, i.e. it would connect to the server, create the network stream, read the messages, and then dispose of everything.
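To illustrate that suggestion, here is a rough sketch in which each ReadMessages call owns its TcpClient, stream, and reader and disposes them before returning, so nothing disposed is ever reused on a reconnect. The host/port fields and the line-based message framing are assumptions for the example.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;

public class Communicator
{
    private readonly string _host;
    private readonly int _port;

    public Communicator(string host, int port)
    {
        _host = host;
        _port = port;
    }

    public IList<string> ReadMessages(int messageCount)
    {
        var messages = new List<string>();

        // All network objects are scoped to this call; the using blocks
        // dispose them even if reading fails part-way through.
        using (var client = new TcpClient(_host, _port))
        using (var stream = client.GetStream())
        using (var reader = new StreamReader(stream))
        {
            for (int i = 0; i < messageCount; i++)
            {
                string line = reader.ReadLine();
                if (line == null)
                    break; // server closed the connection
                messages.Add(line);
            }
        }

        return messages;
    }
}
```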
Our EMS connectivity code was initially ill-designed and created one TopicConnection object per topic that we listened to. In effect, whenever we subscribed to a topic, we created a new connection, a new session and, lastly, a new listener.
We would like to switch to a single-connection model. Although I can do this easily in our code by sharing one connection object and creating a new session object per topic, we are unsure whether this will cause any issues with our code.
My understanding is that the Tibco EMS client library is thread-safe with regard to sharing a connection. In effect, a connection is just a pipe, and the sessions can reuse this pipe in a thread-safe manner.
Is this assumption correct or is there more to this?
The .NET EMS API is based on JMS. In JMS, the Connection object is specified to be thread-safe and can be shared within a program, while each Session is a single-threaded context that can likewise be reused. You are quite correct that the Connection object simply represents a network pipe to the EMS server. The EMS User's Guide states:
A connection is a fairly heavyweight object, so most clients will create a connection once and keep it open until the client exits. Your application can create multiple connections, if necessary.
And regarding sessions:
A Session is a single-threaded context for producing or consuming messages. You create Message Producers or Message Consumers using Session objects.
Essentially, unless you need very large volumes and are bumping into performance limitations, it is perfectly safe to use just one connection in your application. The session controls the transaction/acknowledgement semantics of any producers or consumers created within it, but is again safe to reuse. I would probably use separate sessions for modules that exist within the application with distinct life cycles (think separate deployment units within an application server).
Your EMS server installation contains a samples directory with various sample code (something like C:\tibco\ems\5.0\samples\cs). The code in csTopicSubscriber.cs shows how to write a single-threaded topic consumer. There is no multi-threaded topic consumer example, but csMsgConsumerPerf.cs demonstrates how to do it with queues.
Be sure to clean up any objects you create after you're done with them - e.g. close the topic consumer object, the session, and the connection when you're finished. Leaking handles without closing them can result in unpredictable behaviour when combined with prefetch and fault-tolerant reconnect settings.
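As a rough sketch of the single-connection layout, modelled loosely on the csTopicSubscriber.cs sample mentioned above, something like the following could hold one connection and hand out one session and consumer per topic. The class and method names are hypothetical, and the exact EMS API constants shown follow the samples and may vary slightly between client library versions.

```csharp
using TIBCO.EMS;

public class EmsSubscriptions
{
    private readonly Connection _connection;

    public EmsSubscriptions(string serverUrl, string user, string password)
    {
        // One connection (the network pipe) for the whole application.
        ConnectionFactory factory = new ConnectionFactory(serverUrl);
        _connection = factory.CreateConnection(user, password);
        _connection.Start();
    }

    public MessageConsumer CreateTopicConsumer(string topicName)
    {
        // A new single-threaded session per topic; sessions share the one
        // connection safely. The calling thread then loops on consumer.Receive().
        Session session = _connection.CreateSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.CreateTopic(topicName);
        return session.CreateConsumer(topic);
    }

    public void Close()
    {
        // Closing the connection once, on shutdown, also closes the sessions
        // and consumers created from it.
        _connection.Close();
    }
}
```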
I think yes, as long as the sharing is within the same application (exe, binary).
We have shared the same connection object and used it as a singleton in our code.
I agree with an earlier answer: the JMS Session must not be shared between threads, but the Connection can and should be. So one connection per application is fine (make sure you start/close it only once, ideally before the individual threads are created and after they finish).
And then create and use one Session per thread. Remember that when you close() a Session, it will block until all receive callbacks have really returned. So do NOT call close() from within a callback's onMessage().