Are .NET WSE client stubs thread-safe? - c#

Are client stubs generated from WSDL by .NET WSE thread-safe?
Of course, "thread-safe" isn't necessarily a rigorously defined term, so I'm at least interested in the following:
Are different instances of the same stub class accessible concurrently by different threads, with the same effective behavior as single-threaded execution?
Is a single instance of the same stub class accessible concurrently by different threads, with the same effective behavior as the same calls interleaved in some arbitrary way in single-threaded execution?
You may also wish to use the terminology described here (and originating here) to discuss this more precisely.

Well, the short answer to "is it thread-safe" is yes. The reason is that the server side of the service has more say over threading capabilities than the client connection does. The client is just a proxy that lays out the request in a form the server can understand; it knows nothing beyond that. It is a basic class with no outside access other than the connection to a server. So as long as the server allows multiple connections you will be fine, and there is no resource contention (except for the server being able to handle all your requests).
On the client side you can have multiple threads use the same class but different instances. This is probably the preferred scenario, since each transaction can then be atomic. With a shared instance, by contrast, you have to handle your own locking around access to the class itself, or you may run into a race condition on resources internal to your code.
There is also the ability to make an asynchronous call. The stubs generated by the wsdl tool include Begin/End invoke methods, so you can provide a callback method, submit your request, and continue your code without waiting for a reply. This would probably be the best fit for your second scenario with the single instance.
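As a rough sketch (the stub and operation names here are hypothetical; a real WSE-generated proxy exposes its own Begin/End pairs):
// Hypothetical WSDL-generated stub with a GetQuote operation.
StockServiceStub stub = new StockServiceStub();

// Begin the call and return immediately; the callback fires when the reply arrives.
stub.BeginGetQuote("MSFT", delegate(IAsyncResult ar)
{
    decimal quote = stub.EndGetQuote(ar);
    Console.WriteLine("Quote received: {0}", quote);
}, null);

// ... continue doing other work while the request is in flight ...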
However, it also depends on how the server component is coded. If it's a web service, you should be able to submit multiple requests simultaneously. If it's a socket-based service, you may need to do some additional coding on your end to handle multiple incoming connections, or even to create the sockets, for example.
So, in short: yes, different instances behave the same as single-threaded execution, within the limits of the server side being able to handle multiple concurrent connections.
As for the single instance: if you use the callback mechanism that is provided, you may be able to get what you are after without too much headache. It too, however, is restricted by the limits of the server-side code.
The reason I stress the server limits is that there are companies that build web services which restrict the number of connections from outbound hosts, so your throughput is capped by this. The number of threads you could use effectively would be reduced accordingly, or made moot.

Related

Why is a WCF service able to process more calls from different processes than from threads

Why would WCF services configured with per-call instancing and multiple concurrency perform differently when called from separate processes than when called from multiple threads within one process?
I have an application which distributes data across a number of threads and makes calls to a WCF service (I don't think any locking occurs in the code, but I will test that again). During testing I noticed that increasing the number of threads in the distributing app does not increase the overall throughput of the WCF processing service; the average stays around 800 mpm (messages processed per minute). BUT if you run a second copy of the application, the average throughput increases to ~1200 mpm.
What am I doing wrong? What have I missed? I can't understand this behavior.
UPDATE #1 (answers to questions in the comments)
Thanks for such quick responses.
Max connections is set to 1000 in the config (yes, in system.net).
Referring to this article on WCF instances and threading, max calls should be 16 x the number of cores, so I assume that if called from ~30 threads on 2 CPUs, the WCF service should accept almost all of those calls?
Does it have anything to do with shared memory? That's probably the only difference between multiple threads and multiple processes, I think.
I don't have the opportunity right now to test it with more CPUs, or a single one. Will do when I can.
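For reference, here is a sketch of raising the same client-side connection limit in code rather than in config (1000 is simply the value from my config file):
// Equivalent to the maxconnection setting under <system.net><connectionManagement>.
System.Net.ServicePointManager.DefaultConnectionLimit = 1000;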
So, to understand this behavior, you first need to understand how WCF processes calls with per-call instancing. The hint is in the name: Per Call.
Every call any client makes is serviced by a new instance of the service (the exception to this is reentrancy, but this is not important in your scenario).
So, configuring service concurrency makes no practical difference to the service behavior. Regardless of whether the calls are coming from a single, multithreaded client, or multiple clients, the service will behave the same: it will create a service instance per call.
Therefore, the difference in overall system performance must be due to something on the client side. If I had to take a wild guess, I would say that the one client is slower than two clients because of the cost associated with context switching, which is mitigated (via some unidentified mechanism) by running the client in two separate processes.
If I am correct then you should be able to get the highest performance per thread by running multiple single-threaded clients, which is a test you could do.
To implement this style of operation, the following attribute should be added to the service class:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)]
public class MyService : IMyService
{
}
You can read more here:
http://wcftutorial.net/Per-Call-Service.aspx

WCF callbacks, proxy and thread-safety

Given a WCF duplex service (NetTcpBinding) that is configured to create a new service instance for each new client (see the publish-subscribe pattern), you get a specific callback instance for each service instance. Since different instances are created, methods belonging to different callbacks can be invoked from different threads concurrently.
What happens if multiple threads try to invoke the same method on the same callback?
What happens if they try to invoke different methods but for the same callback?
Should we manage concurrent access to these methods from multiple threads? In both cases?
Consider now the client side that communicates with the service: to make sure that the client can use the service, you must instantiate a new proxy, and in order to invoke the methods defined in the service, you must invoke the corresponding methods of the proxy.
What happens if multiple threads try to invoke the same method on the same proxy instance?
What happens if they try to invoke different methods but for the same proxy instance?
Should we manage concurrent access to these methods from multiple threads? In both cases?
The answers to most of those questions depend on how you manage your service's concurrency. There is no definitive answer since it depends on what you set for your ConcurrencyMode and InstanceContextMode. WCF's concurrency management will enable you to fine tune your service's threading behavior and performance. A long and arduous (but very detailed) read on concurrency management is available on MSDN.
The InstanceContextMode allows you to define how your service should be instantiated. For a service performing a lot of heavy-duty work and handling lots of calls, the general idea is to use PerCall instancing, as with this setting each incoming client request is processed on a separate instance of the service.
ConcurrencyMode, the main player, allows you to define how many threads can access a service instance at a given time. With ConcurrencyMode=Single, only one thread can access the service instance at a time. This also depends on whether you have enabled the SynchronizationContext: if SynchronizationContext=true, client calls will be queued while your service is busy answering another request, so incoming service calls queue up until the preceding calls are dealt with. With ConcurrencyMode=Multiple, any number of threads are allowed access to a service instance, meaning your service can answer as many calls as it has threads available in the thread pool (directly related to CPU power). The catch with multiple concurrency is that your service may not be so reliable in the order in which it receives and responds to calls, since state will not be managed: the SynchronizationContext is set to false by default. A nice, short summary of concurrency modes and thread safety is available on MSDN.
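A minimal sketch of the two settings (the service and contract names are made up); with Multiple, the locking becomes your job:
// Single: WCF lets only one thread into the instance at a time.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
public class SingleThreadedService : IMyService { /* ... */ }

// Multiple: any number of threads may enter; you synchronize shared state yourself.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MultiThreadedService : IMyService
{
    private static readonly object _sync = new object();
    private static int _callCount;

    public void DoWork()
    {
        lock (_sync) { _callCount++; } // manual thread safety
    }
}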
These settings will affect your service performance when used in conjunction with the InstanceContext mode, see this pretty nice article which explores various concurrency modes and instance context settings and their effects on performance (though it seems that the results are only in a self hosted environment, probably not too representative of the timings you would get when hosting in IIS).
The way you manage your service's concurrency will affect its performance greatly. Ideally you want to make as many threads as possible available to your service (try increasing the ThreadPool's minimum thread count) and avoid having incoming service calls queue up as long as your service has computational resources at its disposal. But excessive use of multithreading sacrifices state management and the order in which you answer client requests.

How do I implement Redis pipelined requests with Booksleeve?

I'm a bit mixed up about the difference between a Redis transaction and pipeline and ultimately how to use pipelines with Booksleeve. I see that Booksleeve has support for the Redis transaction feature (MULTI/EXEC), but there is no mention in its API/tests about a pipelining feature. However, it's clear in other implementations that there is a distinction between pipelines and transactions, namely in atomicity, as evidenced in the redis-ruby version below, but in some places the terms seem to be used interchangeably.
redis-ruby implementation:
r.pipelined {
  # these commands will be pipelined
  r.get("insensitive_key")
}

r.multi {
  # these commands will be executed atomically
  r.set("sensitive_key", "value")
}
I'd just use MULTI/EXEC instead, but they seem to block all other users until the transaction has completed (not necessary in my case), so I worry about their performance. Has anyone used pipelines with Booksleeve, or have any ideas about how to implement them?
In BookSleeve, everything is always pipelined. There are no synchronous operations. Not a single one. As such, every operation returns some form of Task (could be a vanilla Task, could be a Task<string>, Task<long>, etc), which at some point in the future (i.e. when redis responds) will have a value. You can use Wait at your calling code to perform a synchronous wait, or ContinueWith / await (C# 5 language feature) to perform an asynchronous callback.
Transactions are no different; they are pipelined. The only subtle change with transactions is that they are additionally buffered at the call-site until complete (since it is a multiplexer, we can't start pipelining transaction-related messages until we have a complete unit-of-work, as it would adversely impact other callers on the same multiplexer).
So: the reason there is no explicit .pipelined is that everything is pipelined and asynchronous.
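A rough sketch of what that looks like in calling code (written from memory of the BookSleeve API; member names may differ slightly between versions):
using (var conn = new RedisConnection("localhost"))
{
    conn.Wait(conn.Open()); // even Open() returns a Task

    // Both of these are pipelined immediately; neither call blocks.
    conn.Strings.Set(0, "key1", "value1");
    var pending = conn.Strings.GetString(0, "key1");

    // Block only at the point where the value is actually needed.
    string value = conn.Wait(pending);
}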
Pipelining is a protocol-level communication strategy and has nothing to do with atomicity. It is entirely orthogonal to the notion of 'transactions'. (For example, you can use MULTI .. EXEC on a pipelined connection.)
What is pipelining?
The most basic connector to redis would be a synchronous client interacting in a request-reply manner: the client sends a request, then waits for the response from Redis before sending the next request.
In pipelining, the client keeps sending requests without pausing to see the Redis response to each one. Redis is, of course, a single-threaded server and a natural serialization point, so request order is preserved and reflected in the response order. This means the client can have one thread sending requests (typically by dequeuing them from a request queue) while another thread constantly processes responses from Redis. Note that you can still use pipelining with a single-threaded client, but you lose some of the efficiency; the two-threaded model allows full utilization of your local CPU and network bandwidth (e.g. saturation).
If you are following this so far, you must ask yourself: well, how are the requests and responses matched up on the client side? Good question! There are various ways to approach this. In JRedis, I wrap requests in a (Java) Future object to deal with the asynchrony of request/response processing. Every time a request is sent, a corresponding Future object is wrapped in a pending-response object and queued. The response listener simply dequeues one item at a time from this queue, parses the response (stream), and updates the Future object.
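That matching mechanism can be sketched in a few lines of C# (illustrative only, not taken from any particular client; WriteToSocket stands in for the real network write):
using System.Threading.Tasks;
using System.Collections.Concurrent;

class PipelinedClient
{
    // Pending responses, kept in the same order the requests were sent.
    private readonly ConcurrentQueue<TaskCompletionSource<string>> _pending =
        new ConcurrentQueue<TaskCompletionSource<string>>();

    // Sender side: enqueue a future, write the request, return without waiting.
    public Task<string> Send(string command)
    {
        var tcs = new TaskCompletionSource<string>();
        _pending.Enqueue(tcs);    // queue order matches wire order
        WriteToSocket(command);   // does not block on the reply
        return tcs.Task;
    }

    // Reader side: redis replies in request order, so dequeue one per reply.
    private void OnReply(string reply)
    {
        TaskCompletionSource<string> tcs;
        if (_pending.TryDequeue(out tcs))
            tcs.SetResult(reply);
    }

    private void WriteToSocket(string command) { /* omitted */ }
}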
Now the end user of the client can either be exposed to a synchronous or an asynchronous interface. If the interface is synchronous, the implementation naturally must block on the Future's response.
If you have followed so far, it should be clear that a single-threaded app using synchronous semantics with pipelining defeats the entire purpose of pipelining (since the app blocks on each response and is not busy feeding the client additional requests). But if the app is multithreaded, a synchronous interface to the pipeline allows you to use a single connection while servicing N client-app threads. (So here, it is an implementation strategy to help build a thread-safe connection.)
If the interface to the pipeline is asynchronous, then even a single-threaded client app can benefit: throughput increases by at least an order of magnitude.
(Caveats with pipelining: It is non-trivial to write a fault-tolerant pipelined client.)
Ideally I should use a diagram, but pay attention to what happens at the end of the clip:
http://www.youtube.com/watch?v=NeK5ZjtpO-M
Here is the link to Redis Transactions Documentation
Regarding BookSleeve, please refer to this post from Marc.
"CreateTransaction() creates a staging area to build commands (using exactly the same API) and capture future results. Then, when Execute() is called the buffered commands are assembled into a MULTI/EXEC unit and sent down in a contiguous block (the multiplexer will send all of these together, obviously)."
If you create your commands inside a transaction they will automatically be "pipelined".
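Based on Marc's description, usage would look roughly like this (a sketch; the transaction object exposes the same command API as the connection, and conn is assumed to be an open RedisConnection):
var tran = conn.CreateTransaction();

// Buffered at the call-site until Execute() is invoked.
tran.Strings.Set(0, "sensitive_key", "value");
var fetched = tran.Strings.GetString(0, "sensitive_key");

// Execute() assembles MULTI ... EXEC and sends it as one contiguous block.
conn.Wait(tran.Execute());
string value = conn.Wait(fetched);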

Are concurrency issues possible when using the WCF Service Behavior attribute set to ConcurrencyMode.Multiple and InstanceContextMode.PerCall?

We have a WCF service that makes a good deal of transactional NHibernate calls. Occasionally we were seeing SQL timeouts, even though the calls were updating different rows and the tables were set to row level locking.
After digging into the logs, it looks like different threads were entering the same point in the code (our transaction using block), and an update was hanging on commit. It didn't make sense, though, because we believed that the following service class attribute was forcing a unique execution thread per service call:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)]
We recently changed the concurrency mode to ConcurrencyMode.Single and haven't yet run into any issues, but the bug was very difficult to reproduce (if anyone has any thoughts on flushing a bug like that out, let me know!).
Anyway, that all brings me to my question: shouldn't an InstanceContextMode of PerCall enforce thread-safety within the service, even if the ConcurrencyMode is set to multiple? How would it be possible for two calls to be serviced by the same service instance?
Thanks!
The only way to have two different WCF clients, i.e., proxies, reference the same instance of your WCF service is to use InstanceContextMode=InstanceContextMode.Single. This is a poor choice if scaling is an issue, so you want to use PerCall if you can.
When you use PerCall, each CALL to the WCF service gets its own WCF service instance. There's no sharing of the service instance, but that doesn't mean they don't share the same back-end storage (e.g., database, memory, file, etc.). Just remember, PerCall allows multiple calls to access your WCF service simultaneously, each on its own instance.
The ConcurrencyMode setting controls the threading model of the service itself. A setting of Single restricts each WCF service instance to one thread of execution at a time. So if you have multiple clients connecting at the same time, the calls into any given instance will only be executed one at a time on the WCF service side. In this case, you leverage WCF to provide synchronization. It'll work fine, as you have seen, but think of this as having only macro-level control over synchronization: each WCF service call executes in its entirety before the next call on that instance can execute.
Setting ConcurrencyMode to Multiple, however, will allow all of the WCF service instances to execute simultaneously. In this case, you are responsible for providing the necessary synchronization. Think of this as having micro-level control over synchronization since you can synchronize only those portions of each call that need to be synchronized.
I hope I've explained this well enough, but here's a snippet of the MSDN documentation for ConcurrencyMode just in case:
Setting ConcurrencyMode to Single instructs the system to restrict instances of the service to one thread of execution at a time, which frees you from dealing with threading issues. A value of Multiple means that service objects can be executed by multiple threads at any one time. In this case, you must ensure thread safety.
EDIT
You asked
Is there any performance increase, then, using PerCall vs. Single when using ConcurrencyMode.Single? Or is the inverse true?
This will likely be service dependent.
With InstanceContextMode.PerCall, a new service instance is created for each and every call via the proxy, so you have the overhead of instance creation to deal with. Assuming your service constructor doesn't do much, this won't be a problem.
With InstanceContextMode.Single, only one service instance exists for the lifetime of the application, so there is practically no overhead associated with instance creation. However, this mode allows only one service instance to process every call that will ever be made. Thus, if you have multiple calls being made simultaneously, each call will have to wait for the other calls to finish before it can be executed.
For what it's worth, here's how I've done this. Use the PerCall instance context with Multiple concurrency. Inside your WCF service class, create static members to manage the back-end data store for you, and then synchronize access to these static members as necessary using the lock statement, volatile fields, etc. This allows your service to scale nicely while still maintaining thread safety.
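A minimal sketch of that approach (the contract and member names are made up):
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.PerCall)]
public class CounterService : ICounterService
{
    // Shared across all per-call instances, so it must be guarded.
    private static readonly object _sync = new object();
    private static int _total;

    public int Add(int amount)
    {
        // Synchronize only the shared-state access, not the whole call.
        lock (_sync)
        {
            _total += amount;
            return _total;
        }
    }
}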
I believe the answer lies in the fact that there are multiple threads (on the client side) utilizing the same proxy instance, thus potentially allowing multiple calls into the same service instance. This post has a more detailed explanation.
InstanceContextMode.PerCall and ConcurrencyMode.Single should be fine if you are not using two-way callbacks to the client from the server. In that case you will need to use ConcurrencyMode.Reentrant, or the callback will not be able to get access to the locked service instance and a deadlock will occur.
Since a new service instance is created per call, it is impossible for other threads or calls to get access to it. As stated in the article mentioned in other answers, such a combination can still be a problem if a session is created at the binding level AND you are using the same service proxy object.
So if you don't use the same proxy object, don't have a sessionful binding, and don't use two-way callbacks to the client (most likely they should be OneWay anyway), InstanceContextMode.PerCall with ConcurrencyMode.Single should be good.
I think it all depends on the requirements.
If we are going to call the same service many times, then it is better to use
InstanceContextMode.Single with ConcurrencyMode.Multiple.

ASP.NET Threading: should I use the pool for DB and Emails actions?

I’m looking for the best way of using threads considering scalability and performance.
In my site I have two scenarios that need threading:
UI trigger: for example, the user clicks a button and the server should read data from the DB and send some emails. Those actions take time and I don't want the user's request to be delayed. This scenario happens very frequently.
Background service: when the app starts, it triggers a thread that runs every 10 min, reads from the DB, and sends emails.
The solutions I found:
A. Use thread pool - BeginInvoke:
This is what I use today for both scenarios.
It works fine, but it uses the same threads that serve the pages, so I think I may run into scalability issues. Can this become a problem?
B. No use of the pool – ThreadStart:
I know starting a new thread takes more resources than using a thread pool.
Can this approach work better for my scenarios?
What is the best way to reuse the threads once they have been started?
C. Custom thread pool:
Because my scenarios occur frequently, maybe the best way is to start a custom thread pool?
Thanks.
I would personally put this into a different service. Make your UI action write to the database, and have a separate service which either polls the database or reacts to a trigger, and sends the emails at that point.
By separating it into a different service, you don't need to worry about AppDomain recycling etc - and you can put it on an entirely different server if and when you want to. I think it'll give you a more flexible solution.
I do this kind of thing by calling a webservice, which then calls a method using a delegate asynchronously. The original webservice call returns a Guid to allow tracking of the processing.
For the first scenario, use ASP.NET Asynchronous Pages. Async pages are a very good choice when it comes to scalability, because during async execution the HTTP request thread is released and can be re-used.
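A sketch of the pattern (handler names hypothetical; the page must also declare Async="true" in its @ Page directive):
// Wire up the async work in Page_Load.
protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, TimeoutWork, null));
}

private Action _work = () => { /* read from the DB, send emails */ };

IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
{
    // The request thread is released while the work runs.
    return _work.BeginInvoke(cb, state);
}

void EndWork(IAsyncResult ar) { _work.EndInvoke(ar); }
void TimeoutWork(IAsyncResult ar) { /* the task exceeded its time limit */ }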
I agree with Jon Skeet that for the second scenario you should use a separate service - a Windows service is a good choice here.
Out of your three solutions, don't use BeginInvoke. As you said, it will have a negative impact on scalability.
Between the other two, if the tasks are truly background and the user isn't waiting for a response, then a single, permanent thread should do the job. A thread pool makes more sense when you have multiple tasks that should be executing in parallel.
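For example, a single permanent thread draining a work queue might look like this (a sketch; names are made up, and BlockingCollection requires .NET 4):
using System.Collections.Concurrent;
using System.Threading;

static class EmailWorker
{
    private static readonly BlockingCollection<string> _queue =
        new BlockingCollection<string>();

    static EmailWorker()
    {
        // One long-lived background thread, started once at app startup.
        var t = new Thread(() =>
        {
            foreach (var address in _queue.GetConsumingEnumerable())
                SendEmail(address); // hypothetical helper
        });
        t.IsBackground = true;
        t.Start();
    }

    // Called from page code; returns immediately.
    public static void Enqueue(string address) { _queue.Add(address); }

    private static void SendEmail(string address) { /* omitted */ }
}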
However, keep in mind that web servers sometimes crash, AppPools recycle, etc. So if any of the queued work needs to be reliably executed, then moving it out of process is probably a better idea (such as into a Windows Service). One way of doing that, which preserves the order of requests and maintains persistence, is to use Service Broker. You write the request to a Service Broker queue from your web tier (with an async request), and then read those messages from a service running on the same machine or a different one. You can also scale nicely that way by simply adding more instances of the service (or more threads in it).
In case it helps, I walk through using both a background thread and Service Broker in detail in my book, including code examples: Ultra-Fast ASP.NET.
