Sharing objects between threads in C# and WCF - c#

I have a server which exposes a SOAP WCF service endpoint. This server also uses a group communication framework called Ensemble (not really relevant to the question) in order to communicate with other servers in the same cluster.
I need to share objects/data between the separate thread that listens for incoming messages from other servers and the threads that run the WCF routines when they are invoked. So far I did the simplest thing I could think of: I created a static "database" class with static members and static methods, and used lock() to synchronize access. This way I can reach the class from both the WCF threads and the group communication thread. My problem is that it rather breaks the whole "OOP thing", and I suspect something cleverer can be done here...

If the only issue that you have with your solution is its alleged "non-OOP-edness", you could go for the Singleton Pattern instead. This is a widely used pattern for situations when you must have a single instance of a class that needs to be shared among multiple parts of the system that are otherwise disconnected. The pattern remains somewhat controversial, because some regard it as a glorified version of a global variable, but it is efficient at getting the job done.
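For illustration only, here is a minimal thread-safe singleton sketch in C#; the class name SharedDatabase and its members are placeholders for whatever your static "database" class currently holds:

using System;
using System.Collections.Concurrent;

public sealed class SharedDatabase
{
    // Lazy<T> gives thread-safe, lazy construction of the single instance.
    private static readonly Lazy<SharedDatabase> _instance =
        new Lazy<SharedDatabase>(() => new SharedDatabase());

    public static SharedDatabase Instance => _instance.Value;

    // A concurrent collection so the WCF threads and the group communication
    // thread can read and write without hand-rolled lock() blocks.
    private readonly ConcurrentDictionary<string, object> _items =
        new ConcurrentDictionary<string, object>();

    private SharedDatabase() { }

    public void Put(string key, object value) => _items[key] = value;

    public bool TryGet(string key, out object value) => _items.TryGetValue(key, out value);
}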

Encapsulate the separate thread which listens for incoming messages from other servers into a class, say MyCustomService.
Write the WCF service implementation class with ConcurrencyMode.Multiple and InstanceContextMode.Single.
Declare a delegate inside the WCF service implementation class whose return type is MyCustomService.
When you instantiate the WCF service programmatically, before calling host.Open set the delegate to a function that returns the MyCustomService instance, which can be a singleton or static.
From the service implementation class you can always call the delegate to get the MyCustomService instance. Check for null though. (A rough sketch of these steps follows below.)
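A rough sketch of those steps, purely for illustration (the names MyCustomService, IMyWcfService and GetBackend are made up here, and the hosting code is reduced to the essentials):

using System;
using System.ServiceModel;

// The class that owns the listener thread for incoming cluster messages.
public class MyCustomService
{
    public static MyCustomService Instance { get; } = new MyCustomService();
    // ... listener thread and shared state live here ...
}

[ServiceContract]
public interface IMyWcfService
{
    [OperationContract]
    void DoWork();
}

// Single service instance, multiple concurrent callers, plus a delegate that
// the host sets before opening so service calls can reach the backend.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.Single)]
public class MyWcfService : IMyWcfService
{
    public static Func<MyCustomService> GetBackend { get; set; }

    public void DoWork()
    {
        var backend = GetBackend?.Invoke();   // check for null before using it
        if (backend == null) return;
        // ... use backend here ...
    }
}

class Program
{
    static void Main()
    {
        // Set the delegate before the host is opened.
        MyWcfService.GetBackend = () => MyCustomService.Instance;
        using (var host = new ServiceHost(typeof(MyWcfService)))
        {
            host.Open();
            Console.ReadLine();
        }
    }
}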

Related

c# WebService kills Singleton it owns

I have a WebService, which owns a Singleton:
public class WebService
{
    private static Singleton _singleton = Singleton.Instance;

    public void DoSomeJob(object jobObj)
    {
        _singleton.QueueJob(jobObj);
    }
}
... and the Singleton, which should be thread-safe.
public sealed class Singleton
{
    private static readonly object _syncRoot = new object();
    private static Singleton _instance;

    public static Singleton Instance
    {
        get
        {
            lock (_syncRoot)
            {
                if (_instance == null)
                    _instance = new Singleton();
                return _instance;
            }
        }
    }
}
What I wanted to achieve this way is that every client calling my WebService hands its object to the same instance of the Singleton. This Singleton does not do much more than queue the object and process it when a timer ticks.
The problem I was facing (and still am) is that the Singleton gets killed every time the WebService terminates. However, I am not sure whether this happens because the owner of the Singleton is being destructed, or for some reason given by the app pool settings.
I have tried to make the app pool "always running" and "suspending" when idle, instead of "on demand" and "terminate" - no success :-/
Why is the Singleton getting killed off each time? How can I keep the Singleton's instance alive between WebService executions?
Why is the Singleton getting killed off each time?
You need to understand how WCF manages service instancing to understand why this happens. By default WCF will create a new service instance per client over a session-enabled binding, or per call if no session is supported.
This means that the service instance dispatched to handle a client call loads an instance of your singleton into memory. However, when the client session, or the individual call (where no session is supported), has finished, the service instance is unloaded, which means your singleton also gets unloaded.
How can I keep the Singleton's instance alive between WebService executions?
There are two ways to do this:
Get rid of your singleton. Use a backing data store to maintain your state across multiple client calls.
Use a singleton service instance, by setting InstanceContextMode = InstanceContextMode.Single in your service implementation declaration.
Of the two options I would go with option 1. Singleton service instances are generally an anti-pattern because they do not scale, and they should only be used when there is no alternative.
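For completeness, option 2 looks roughly like this (a sketch only: IJobService and JobService are made-up names, the payload type is simplified to string, and Singleton is the class from the question):

using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    [OperationContract]
    void DoSomeJob(string jobPayload);
}

// Option 2: one service instance shared by all client calls for the lifetime
// of the host, so the Singleton it holds survives between calls.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class JobService : IJobService
{
    private readonly Singleton _singleton = Singleton.Instance;

    public void DoSomeJob(string jobPayload)
    {
        _singleton.QueueJob(jobPayload);
    }
}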
...considered to implement the queueing functionality to an external component, e.g. a windows service, but for the purpose of simplicity and reduced complexity I would like to implement that within the WebService
OK, right there is where I think the source of your problem is. There is a common belief around distributed systems, which can be stated as follows:
Simple = fewer components, and
Complex = more components
I would modify that belief to:
Simple = simple components, and
Complex = complex components
In my opinion your decision to embed your timer/queueing requirement into your web service automatically makes your component complex.
I think breaking out the component which reads from the queue into another component is exactly what you need to do!
If this is daunting to you, then I would very strongly recommend using Topshelf to manage your Windows service; it is a free framework that makes creating and deploying services very simple.
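A typical Topshelf host is only a few lines; here is a rough sketch under the assumption that the queue/timer logic is pulled into a placeholder QueueProcessor class:

using Topshelf;

// Placeholder for the component that owns the timer and drains the job queue.
public class QueueProcessor
{
    public void Start() { /* start the timer, begin reading the queue */ }
    public void Stop()  { /* stop the timer and flush any pending work */ }
}

class Program
{
    static void Main()
    {
        HostFactory.Run(cfg =>
        {
            cfg.Service<QueueProcessor>(s =>
            {
                s.ConstructUsing(name => new QueueProcessor());
                s.WhenStarted(p => p.Start());
                s.WhenStopped(p => p.Stop());
            });
            cfg.RunAsLocalSystem();
            cfg.SetServiceName("QueueProcessor");
            cfg.SetDescription("Processes jobs queued by the web service");
        });
    }
}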

WCF callbacks, proxy and thread-safety

Given a WCF duplex service (NetTcpBinding) that is configured to create a new service instance for each new client (see the publish-subscribe pattern), you get a specific callback instance for each service instance. Since different instances are created, methods belonging to different callbacks can be invoked from different threads concurrently.
What happens if multiple threads try to invoke the same method on the same callback?
What happens if they try to invoke different methods but for the same callback?
Should we manage concurrent access to these methods from multiple threads? In both cases?
Consider now the client side that communicates with the service: to make sure that the client can use the service, you must instantiate a new proxy, and in order to invoke the methods defined in the service, you must invoke the corresponding methods of the proxy.
What happens if multiple threads try to invoke the same method on the same proxy instance?
What happens if they try to invoke different methods but for the same proxy instance?
Should we manage concurrent access to these methods from multiple threads? In both cases?
The answers to most of those questions depend on how you manage your service's concurrency. There is no definitive answer since it depends on what you set for your ConcurrencyMode and InstanceContextMode. WCF's concurrency management will enable you to fine tune your service's threading behavior and performance. A long and arduous (but very detailed) read on concurrency management is available on MSDN.
The InstanceContextMode allows you to define how your service should be instantiated. For a service performing a lot of heavy-duty work and handling lots of calls, the general idea is to use PerCall instancing, since with this setting each incoming client request is processed on a separate instance of the service.
ConcurrencyMode, the main player, will allow you to define how many threads can access a service instance at a given time. With ConcurrencyMode = Single, only one thread can access the service instance at a time. This also depends on whether you've enabled the synchronization context: if UseSynchronizationContext = true then client calls will be queued while your service is in the process of answering another request, so incoming service calls queue up until the preceding calls have been dealt with. With ConcurrencyMode = Multiple, any number of threads are allowed access to a service instance, meaning your service can answer as many calls as possible given how many threads (directly related to CPU power) are available to it in the thread pool. The catch with the Multiple concurrency mode is that your service may not be so reliable in the order in which it receives and responds to calls, since state will not be managed and the synchronization context is not used. A nice and short summary on concurrency modes and thread safety is available on MSDN.
These settings will affect your service performance when used in conjunction with the InstanceContextMode; see this pretty nice article, which explores various concurrency modes and instance context settings and their effects on performance (though it seems that the results are from a self-hosted environment, probably not too representative of the timings you would get when hosting in IIS).
The way you manage your service's concurrency will affect its performance greatly. Ideally you want to make available as many threads as possible to your service (try increasing the ThreadPool's minimum threads) and avoid incoming service calls queuing up as long as your service has computational resources at its disposal. But excessive use of multithreading will sacrifice state management and the order in which you answer client requests.
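As a concrete illustration of the knobs discussed above (a sketch only: ICalculatorService is a made-up contract, and the SetMinThreads numbers are arbitrary placeholders to be tuned):

using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface ICalculatorService
{
    [OperationContract]
    double Add(double a, double b);
}

// Per-call instancing with multiple concurrent threads; the synchronization
// context is switched off so calls are not marshalled onto a single context.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple,
                 UseSynchronizationContext = false)]
public class CalculatorService : ICalculatorService
{
    public double Add(double a, double b) => a + b;
}

static class ThreadPoolTuning
{
    // Raising the thread pool floor can keep bursts of calls from being
    // throttled while the pool ramps up; the values here are placeholders.
    public static void Apply()
    {
        ThreadPool.SetMinThreads(workerThreads: 64, completionPortThreads: 64);
    }
}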

How does it work when I call a WCF service many times at once?

When I expose a WCF service and many clients call its methods, how does that work? Is it threading, a queue, or something else? Could it be a problem for a simply exposed service to handle many requests at once, or should I implement threading in the WCF service myself for that job?
What happens if the service runs a database query and two clients execute it? Will a transaction on the database side handle this, or should I put a lock around the query in the service?
See also this MSDN page http://msdn.microsoft.com/en-us/library/ms731193.aspx on Instancing, along with Sessions and Concurrency.
All three concepts have some overlap, but you would first want to look at the InstanceContextMode values (PerCall, PerSession, and Single), and then at the ConcurrencyMode values (Single, Multiple, and Reentrant). Basically, these ServiceBehavior attributes allow you to control how many instances of your service can exist and how threads can concurrently access your service (from client connections).

Are concurrency issues possible when using the WCF Service Behavior attribute set to ConcurrencyMode.Multiple and InstanceContextMode.PerCall?

We have a WCF service that makes a good deal of transactional NHibernate calls. Occasionally we were seeing SQL timeouts, even though the calls were updating different rows and the tables were set to row level locking.
After digging into the logs, it looks like different threads were entering the same point in the code (our transaction using block), and an update was hanging on commit. It didn't make sense, though, because we believed that the following service class attribute was forcing a unique execution thread per service call:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)]
We recently changed the concurrency mode to ConcurrencyMode.Single and haven't yet run into any issues, but the bug was very difficult to reproduce (if anyone has any thoughts on flushing a bug like that out, let me know!).
Anyway, that all brings me to my question: shouldn't an InstanceContextMode of PerCall enforce thread-safety within the service, even if the ConcurrencyMode is set to multiple? How would it be possible for two calls to be serviced by the same service instance?
Thanks!
The only way to have two different WCF clients, i.e., proxies, reference the same instance of your WCF service is to use InstanceContextMode=InstanceContextMode.Single. This is a poor choice if scaling is an issue, so you want to use PerCall if you can.
When you use PerCall, each CALL to the WCF service gets its own WCF service instance. There's no sharing of the service instance, but that doesn't mean the instances don't share the same back-end storage (e.g., database, memory, file, etc.). Just remember, PerCall allows multiple calls to execute against your WCF service simultaneously, each on its own instance.
The ConcurrencyMode setting controls the threading model of the service itself. A setting of Single restricts all of the WCF service instances to running on the same thread. So if you have multiple clients connecting at the same time, they will only be executed one at a time on the WCF service side. In this case, you leverage WCF to provide synchronization. It'll work fine, as you have seen, but think of this as having only macro-level control over synchronization - each WCF service call will execute in its entirety before the next call can execute.
Setting ConcurrencyMode to Multiple, however, will allow all of the WCF service instances to execute simultaneously. In this case, you are responsible for providing the necessary synchronization. Think of this as having micro-level control over synchronization since you can synchronize only those portions of each call that need to be synchronized.
I hope I've explained this well enough, but here's a snippet of the MSDN documentation for ConcurrencyMode just in case:
Setting ConcurrencyMode to Single instructs the system to restrict instances of the service to one thread of execution at a time, which frees you from dealing with threading issues. A value of Multiple means that service objects can be executed by multiple threads at any one time. In this case, you must ensure thread safety.
EDIT
You asked
Is there any performance increase, then, using PerCall vs. Single when using ConcurrencyMode.Single? Or is the inverse true?
This will likely be service dependent.
With InstanceContextMode.PerCall, a new service instance is created for each and every call via the proxy, so you have the overhead of instance creation to deal with. Assuming your service constructor doesn't do much, this won't be a problem.
With InstanceContextMode.Single, only one service instance exists for the lifetime of the application, so there is practically no overhead associated with instance creation. However, this mode allows only one service instance to process every call that will ever be made. Thus, if you have multiple calls being made simultaneously, each call will have to wait for the other calls to finish before it can be executed.
For what it's worth, here's how I've done this. Use the PerCall instance context with Multiple concurrency. Inside your WCF service class, create static members to manage the back-end data store for you, and then synchronize access to these static members as necessary using the lock statement, volatile fields, etc. This allows your service to scale nicely while still maintaining thread safety.
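A sketch of that arrangement, with made-up names (Order, IOrderService, SubmitOrder) standing in for a real contract and data store:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Order
{
    [DataMember] public string Id { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder(Order order);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderService : IOrderService
{
    // Shared by every per-call instance; guard every access with the lock.
    private static readonly object _sync = new object();
    private static readonly Queue<Order> _pending = new Queue<Order>();

    public void SubmitOrder(Order order)
    {
        lock (_sync)
        {
            _pending.Enqueue(order);
        }
    }
}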
I believe the answer is the fact that there are multiple threads (on the client side) utilizing the same proxy instance, thus potentially allowing for multiple calls into the same instance. This post has a more detailed explanation.
InstanceContextMode.PerCall and ConcurrencyMode.Single should be fine if you are not using two-way callbacks from the server. If you are, you will need to use ConcurrencyMode.Reentrant, or the callback will not be able to get access to the locked service instance and a deadlock will occur.
Since a service instance is created per call, it is impossible for other threads or calls to get access to it. As stated in the article mentioned in the other answers, such a combination can still be a problem if a session is created at the binding level AND you are using the same service proxy object.
So if you don't use the same proxy object, or don't have a sessionful binding, and don't use two-way callbacks to the client (most likely they should be OneWay anyway), InstanceContextMode.PerCall and ConcurrencyMode.Single should be good.
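For the two-way callback case mentioned above, the relevant pieces look roughly like this (a sketch; the contract names are made up):

using System.ServiceModel;

public interface IClientCallback
{
    // A two-way (request/reply) callback; one-way callbacks would not need
    // Reentrant on the service.
    [OperationContract]
    void Notify(string message);
}

[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface IDuplexService
{
    [OperationContract]
    void DoWork();
}

// Reentrant releases the instance lock while calling back into the client,
// so the two-way callback does not deadlock against the locked instance.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Reentrant)]
public class DuplexService : IDuplexService
{
    public void DoWork()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        callback.Notify("work finished");
    }
}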
I think it all depends on the requirement.
If we are going to call the same service many times, then it is better to use InstanceContextMode.Single with ConcurrencyMode.Multiple.

Are .NET WSE client stubs thread-safe?

Are client stubs generated from WSDL by .NET WSE thread-safe?
Of course, "thread-safe" isn't necessary a rigorously defined term, so I'm at least interested in the following:
Are different instances of the same stub class accessible concurrently by different threads, with the same effective behavior as single-threaded execution?
Is a single instance of the same stub class accessible concurrently by different threads, with the same effective behavior as the same calls interleaved in some arbitrary way in single-threaded execution?
You may also wish to use the terminology described here (and originating here) to discuss this more precisely.
Well, the short answer to "is it thread-safe?" is yes. The reason is that the server side of the service has more say than the client connection as to threading capabilities. The client is just a proxy that lays out the request in a fashion the server can understand; it knows nothing. It is a basic class with no outside access other than the connection to a server. So as long as the server allows multiple connections you should be fine, and there is no resource contention (except for the server being able to handle all your requests).
On the client side you can have multiple threads use the same class but different instances. This would probably be the preferred scenario, so that each transaction can be atomic. With a shared instance you would have to handle your own thread locking around access to the class itself, otherwise you may run into a race condition on resources internal to your code.
There is also the ability to make an asynchronous call. The stubs generated by the WSDL tool create Begin/End invoke methods so that you can provide a callback method, effectively allowing you to submit your request and continue your code without waiting for a reply. This would probably be best for your second scenario with the single instance.
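Those generated methods follow the classic .NET Begin/End asynchronous pattern; here is a hedged sketch of what the calling side looks like (the stub class StockServiceStub and the GetQuote operation are invented for illustration, with bodies standing in for the generated code):

using System;

// Stand-in for a WSE/WSDL-generated proxy; only the Begin/End pair is sketched.
public class StockServiceStub
{
    public IAsyncResult BeginGetQuote(string symbol, AsyncCallback callback, object state)
    {
        // ... generated code would send the SOAP request here ...
        throw new NotImplementedException("placeholder for generated code");
    }

    public decimal EndGetQuote(IAsyncResult result)
    {
        // ... generated code would read the SOAP response here ...
        throw new NotImplementedException("placeholder for generated code");
    }
}

class Caller
{
    static void Demo()
    {
        var stub = new StockServiceStub();
        stub.BeginGetQuote("MSFT", ar =>
        {
            // Runs on a thread-pool thread once the reply arrives.
            decimal price = stub.EndGetQuote(ar);
            Console.WriteLine(price);
        }, null);
        // Execution continues here without waiting for the reply.
    }
}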
However, it also depends on how the server component is coded. If it's a web service you should be able to submit multiple requests simultaneously. However, if it's a socket-based service you may need to do some additional coding on your end in order to handle multiple incoming connections, or even to create the sockets, for example.
So, in short, yes: different instances behave the same as single-threaded execution, within the limits of the server side being able to handle multiple concurrent connections.
As for the single instance, if you use the callback process that is provided, you may be able to get what you are after without too much headache. However, it is also restricted by the limits of the server-side code.
The reason I mention the server limits is that there are companies that build web services which restrict the number of connections coming from outbound hosts, so your throughput is limited by this. Thus the number of effective threads you could use would be reduced or made obsolete.
