I started using SmartThreadPool after I read that it is recommended as a replacement for Thread in cases where you want your threads to have their own pool.
I set the maximum number of threads to 5, but Task Manager still shows the process using 10-12 threads.
Is this a problem any of you have seen?
The threads used by the SmartThreadPool and the threads used by the whole application are different things. The thread pool is a collection of threads used to do some work, but a .NET app will have several other background threads (e.g. the garbage collector) running at the same time, which is what you're seeing in Task Manager.
As far as I understand, the .NET CLR creates a thread pool for each process, so each process has its own pool. Every thread pool has a certain number of threads available; this number may be increased or decreased as the framework deems necessary, but it starts with a predetermined number of threads for each process.
I wanted to find out how many threads it starts with for a simple WPF application. When I called System.Threading.ThreadPool.GetMaxThreads(out worker, out io) and System.Threading.ThreadPool.GetAvailableThreads(out worker, out io), both reported 2047 worker threads and 1000 IO threads. I assume this can't be right, so this is not the right way to find how many threads the thread pool currently has reserved.
So I looked at the thread count in Windows Task Manager, and it showed 10 threads for the application. That seemed sensible, and I concluded that the thread pool holds 9 threads, since one of the 10 is the main UI thread.
First of all, is my conclusion of 9 threads in the thread pool correct? Second, what is the right way of querying this from C#?
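For reference, a minimal sketch contrasting what those calls report with the OS-level count Task Manager shows might look like this (GetMaxThreads only gives the pool's ceiling, so the threads in use are the ceiling minus what GetAvailableThreads returns):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PoolCounts
    {
        static void Main()
        {
            int maxWorker, maxIo, availWorker, availIo;
            ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
            ThreadPool.GetAvailableThreads(out availWorker, out availIo);

            // GetMaxThreads is the pool's ceiling, not the number of live threads.
            // Busy pool threads = ceiling minus what is still available.
            Console.WriteLine("Busy worker threads: {0}", maxWorker - availWorker);
            Console.WriteLine("Busy IO threads:     {0}", maxIo - availIo);

            // OS-level view: every native thread in the process (what Task Manager shows).
            Console.WriteLine("OS threads in process: {0}",
                Process.GetCurrentProcess().Threads.Count);
        }
    }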
I have a .NET application which I would expect to have 5 long-running threads operating including the main thread. I can see that indeed 4 threads are newed up across the codebase, and I believe there is no direct (e.g. work item queuing / tasks) or indirect (e.g. Timers) usage of the ThreadPool anywhere. At least none I can find.
Running the app under Performance Monitor shows that the number of recognized threads stays constant at 5 (as I would expect) but the number of physical threads fluctuates between 70 and 120 over the course of about an hour!
Does anyone know why there are so many unused (as far as I can tell) physical threads? And why this number fluctuates?
I can't find any documentation that would explain this behavior so my best guess is that the ThreadPool balances itself to accommodate changing environmental factors such as free memory and resource contention but the numbers here seem excessive.
Update
A senior support engineer at Microsoft confirmed that the physical thread counter in use definitely only reports threads for the current process, despite the odd wording in MSDN. If an answer suggests this is not the case it will need to point to a definitive source.
Both the thread pools and the GC create threads. There is a normal (or "worker") thread pool and an IO thread pool. The worker thread pool allocates new threads as it feels it needs to in order to stay responsive: it should create one thread per CPU right away, and then roughly one thread per second after that, up to the minimum number of threads. See ThreadPool.GetMinThreads for the minimum number of worker threads the pool will create, and ThreadPool.GetAvailableThreads for how many worker threads are currently free (the maximum minus those in use). If long-running work occupies worker pool threads, the pool considers those threads in use and allocates more to service future requests.
There is also a maximum number of threads in the pool, so as threads recycle back to it, the pool may kill some off to get back down to whatever number it decides is best.
There is also a finalizer thread.
There are likely others that are undocumented or are a result of a library you're using.
Update:
I think part of the problem is confusion between "recognized threads", "physical threads", and "unused threads".
Recognized threads are documented as (emphasis mine)
These threads are associated with a corresponding managed thread object. The runtime does not create these threads, but they have run inside the runtime at least once.
Physical threads are documented as (emphasis mine)
native operating system threads created and owned by the common language runtime to act as underlying threads for managed thread objects
I'm guessing that the term "unused threads" used by @JRoughan refers to "physical threads" that aren't "recognized". That doesn't really mean they're unused; they're just not counted by the recognized counter. As the documentation points out, "physical threads" are created by the runtime, and I don't believe you can tell from either of those counters whether a thread is "used" or "unused", whatever @JRoughan means by "unused".
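For anyone who wants to read those two counters from code rather than perfmon, a sketch along these lines should work; the category and counter names below are the ones listed under ".NET CLR LocksAndThreads", but the exact spelling is worth verifying in perfmon first:

    using System;
    using System.Diagnostics;

    class ClrThreadCounters
    {
        static void Main()
        {
            string instance = Process.GetCurrentProcess().ProcessName;

            // Counter names as listed under ".NET CLR LocksAndThreads" -- verify in perfmon.
            var recognized = new PerformanceCounter(
                ".NET CLR LocksAndThreads", "# of current recognized threads", instance);
            var physical = new PerformanceCounter(
                ".NET CLR LocksAndThreads", "# of current physical Threads", instance);

            Console.WriteLine("Recognized: {0}", recognized.NextValue());
            Console.WriteLine("Physical:   {0}", physical.NextValue());
        }
    }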
Things like this do not have a simple answer. You need to investigate either under a debugger or using ETW traces.
With ETW traces, you can get events for each thread creation/destruction, optionally with call stack.
The CLR itself can create threads for its own use (e.g. GC threads, background GC threads, the multicore JIT thread), plus thread pool threads, IO threads, and the timer thread. There is one more kind: the gate thread.
Normally you can tell usage from the symbolic name of thread proc once symbols are resolved.
For ETW analysis, use PerfView from Microsoft.
Is the application you are testing in Performance Monitor a standalone .NET application or an application hosted under IIS? If it is a standalone application, you have probably added some extra library or code in order to use Performance Monitor, and that may create threads of its own.
You can use Sysinternals' Process Explorer to watch threads in your process. You can see which method in which module started the threads.
We can only speculate, of course. My own bet would be on in-process COM servers. Those, and their associated threads, may be created when you use classes that wrap COM interfaces, such as the ones for Directory Services or WMI. Since those threads are created by native code (even though it is wrapped by .NET code), they are not recognized as managed threads.
I'm working on a network-bound application which is supposed to run a lot (hundreds, maybe thousands) of operations in parallel.
I'm looking for the best way to implement it.
When I tried setting
ThreadPool.SetMaxThreads(int.MaxValue, int.MaxValue);
and then creating 1000 threads and making them do work in parallel, the application's execution became really jumpy.
I've heard somewhere that delegate.BeginInvoke is somehow better than new Thread(...), so I tried it, then opened the app in the debugger, and what I saw was, again, a pile of parallel threads.
If I have to create lots and lots of threads, what is the best way to ensure that the application is going to run smoothly?
Have you tried the new await / async pattern in C# 5 / .NET 4.5?
I haven't got sources to hand about how this operates under the hood, but one of the most common use cases for this new feature is waiting on IO-bound work.
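As a rough sketch of the idea (the URL and the count of 1000 are just placeholders): with async/await the pool thread is handed back while the IO is in flight, so a large number of concurrent operations does not translate into a large number of threads.

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AsyncSketch
    {
        // IO-bound work: no thread is blocked while the request is in flight.
        static async Task<string> FetchAsync(HttpClient client, string url)
        {
            return await client.GetStringAsync(url);
        }

        static void Main()
        {
            var client = new HttpClient();

            // A thousand concurrent operations without a thousand threads.
            var tasks = Enumerable.Range(0, 1000)
                .Select(i => FetchAsync(client, "http://example.com/?i=" + i));

            Task.WhenAll(tasks).Wait();
        }
    }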
Threads are not lightweight objects. They are expensive to create and to context-switch to and from; hence the thread pool, which keeps pre-created threads and recycles them. Most common solutions that involve networking or other IO utilise lower-level IO completion ports (there is a managed library here) to "wait" on a port while the thread carries on executing as normal.
BeginInvoke will utilise a thread pool thread, so it will be better than creating your own thread only if a pool thread is available. Used too heavily, this approach can quickly result in thread starvation.
Setting such a high thread pool count is not going to work in the long run as threads are too heavy for what it appears you want to do.
Axum, a former Microsoft Research language, used to achieve massive parallelism that would have been suitable for this task. It operated similarly to Stackless Python or Erlang. Lots of concepts from Axum made their way into the parallelism drive into C# 5 and .NET 4.5.
Calling ThreadPool.SetMaxThreads only affects how many threads the thread pool itself has; it makes no difference to threads you create yourself with new Thread().
Go async (model, not keyword) as suggested by many.
You should follow the advice mentioned in the other answers and comments. As fsimonazzi says, creating new threads directly has nothing to do with the ThreadPool. For a quick test, lower the max worker and completion-port thread counts and use the ThreadPool.QueueUserWorkItem method. The ThreadPool will decide what your system can handle, queue your tasks, and reuse threads whenever it can.
If your tasks are not compute-bound then you should also use asynchronous I/O. You do not want your worker threads to wait for I/O completion; you need them to return to the pool as quickly as possible, not block on I/O requests.
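A rough sketch of that quick test might look like the following; the limits and counts are arbitrary, and note that SetMaxThreads will refuse values below the processor count:

    using System;
    using System.Threading;

    class PoolExperiment
    {
        static void Main()
        {
            // Deliberately small limits for the test; rejected if below the CPU count.
            ThreadPool.SetMaxThreads(4, 4);

            for (int i = 0; i < 20; i++)
            {
                int id = i;
                ThreadPool.QueueUserWorkItem(state =>
                {
                    Console.WriteLine("item {0} on pool thread {1}",
                        id, Thread.CurrentThread.ManagedThreadId);
                    Thread.Sleep(100);   // stand-in for real work
                });
            }

            Console.ReadLine();   // keep the process alive while the queue drains
        }
    }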
I have a C# Windows Service that starts up various objects (Class libraries). Each of these objects has its own "processing" logic that start up multiple long running processing threads by using the ThreadPool. I have one example, just like this:
System.Threading.ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(WorkerThread_Processing));
This works great. My app works with no issues, and my threads work well.
Now, for regression testing, I am starting those same objects up, but from a C# Console app rather than a Windows Service. It calls the same exact code (because it is invoking the same objects), however the WorkerThread_Processing method delays for up to 20 seconds before starting.
I have gone in and switched from the ThreadPool to a Thread, and the issue goes away. What could be happening here? I know that I am not over the MaxThreads count (I am starting 20 threads max).
The ThreadPool is specifically not intended for long-running items (more specifically, you aren't even necessarily starting up new threads when you use the ThreadPool, as its purpose is to spread the tasks over a limited number of threads).
If your task is long running, you should either break it up into logical sections that are put on the ThreadPool (or use the new Task framework), or spin up your own Thread object.
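For example, assuming WorkerThread_Processing has the void (object) signature implied by the WaitCallback in the question, a dedicated-thread version might look something like this:

    // Long-running work on a dedicated thread instead of a pool thread.
    var worker = new System.Threading.Thread(WorkerThread_Processing);
    worker.IsBackground = true;   // don't keep the process alive on shutdown
    worker.Start(null);

    // Or, with the Task framework, hint that the work is long-running so the
    // default scheduler gives it its own thread rather than a pool thread:
    System.Threading.Tasks.Task.Factory.StartNew(
        () => WorkerThread_Processing(null),
        System.Threading.Tasks.TaskCreationOptions.LongRunning);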
As to why you're experiencing the delay, the MSDN Documentation for the ThreadPool class says the following:
As part of its thread management strategy, the thread pool delays before creating threads. Therefore, when a number of tasks are queued in a short period of time, there can be a significant delay before all the tasks are started.
You only know that the ThreadPool hasn't reached its maximum thread count, not how many threads (if any) it actually has sitting idle.
The thread pool's maximum number of threads value is the maximum number that it can create. It is not the maximum number that are already created. The thread pool has logic that prevents it from spinning up a whole bunch of threads instantly.
If you call ThreadPool.QueueUserWorkItem 10 times in quick succession, the thread pool will not create 10 threads immediately. It will start a thread, delay, start another, etc.
I seem to recall that the delay was 500 milliseconds, but I can't find the documentation to verify that.
Here it is: The Managed Thread Pool:
The thread pool has a built-in delay (half a second in the .NET Framework version 2.0) before starting new idle threads. If your application periodically starts many tasks in a short time, a small increase in the number of idle threads can produce a significant increase in throughput. Setting the number of idle threads too high consumes system resources needlessly.
You can control the number of idle threads maintained by the thread pool by using the GetMinThreads and SetMinThreads methods.
Note that this quote is taken from the .NET 3.5 version of the documentation. The .NET 4.0 version does not mention a delay.
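A small sketch of raising that floor (the number 20 here is purely illustrative, not a recommendation):

    using System;
    using System.Threading;

    class WarmUpPool
    {
        static void Main()
        {
            int workers, io;
            ThreadPool.GetMinThreads(out workers, out io);

            // Keep more idle threads around so a burst of queued items doesn't
            // pay the per-thread ramp-up delay described above.
            ThreadPool.SetMinThreads(Math.Max(workers, 20), io);
        }
    }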
I need to optimize a WCF service... it's quite a complex thing. My problem this time has to do with tasks (Task Parallel Library, .NET 4.0). What happens is that I launch several tasks when the service is invoked (using Task.Factory.StartNew) and then wait for them to finish:
Task.WaitAll(task1, task2, task3, task4, task5, task6);
Ok... what I see, and don't like, is that on the first call (sometimes the first 2-3 calls, if made quickly one after another), the final task starts much later than the others (I am looking at a case where it started 0.5 seconds after the others). I tried calling
ThreadPool.SetMinThreads(12*Environment.ProcessorCount, 20);
at the beginning of my service, but it doesn't seem to help.
The tasks are all database-related: I'm reading from multiple databases and it has to take as little time as possible.
Any idea why the last task is taking so long? Is there something I can do about it?
Alternatively, should I use the thread pool directly? As it happens, in one case I'm looking at, one task had already ended before the last one started - I would have saved 0.2 seconds if I had reused that thread instead of waiting for a new one to be created. However, I cannot be sure that task will always finish so quickly, so I can't put both requests in the same task.
[Edit] The OS is Windows Server 2003, so there should be no connection limit. Also, it is hosted in IIS - I don't know if I should create regular threads or using the thread pool - which is the preferred version?
[Edit] I've also tried using Task.Factory.StartNew(action, TaskCreationOptions.LongRunning); - it doesn't help, the last task still starts much later (around half a second later) than the rest.
[Edit] MSDN says:
The thread pool has a built-in delay (half a second in the .NET Framework version 2.0) before starting new idle threads. If your application periodically starts many tasks in a short time, a small increase in the number of idle threads can produce a significant increase in throughput. Setting the number of idle threads too high consumes system resources needlessly.
However, as I said, I'm already calling SetMinThreads and it doesn't help.
I have had problems myself with delays in thread startup when using the (.NET 4.0) Task object, so for time-critical work I now use dedicated threads (... again, as that is what I was doing before .NET 4.0).
The purpose of a thread pool is to avoid the operating-system cost of starting and stopping threads; the threads are simply reused. This is a common model, found for example in internet servers. The advantage is that they can respond more quickly.
I've written many applications where I implement my own thread pool by having dedicated threads pick up tasks from a task queue. Note, however, that this most often requires locking, which can cause delays and bottlenecks. It depends on your design: if the tasks are small, there will be a lot of locking, and it might be faster to trade some CPU for less locking: http://www.boyet.com/Articles/LockfreeStack.html
SmartThreadPool is a replacement/extension of the .NET thread pool. As you can see in this link, it has a nice GUI for doing some testing: http://www.codeproject.com/KB/threads/smartthreadpool.aspx
In the end it depends on what you need, but for high performance I recommend implementing your own thread pool. If you see a lot of thread idling, it could be beneficial to increase the number of threads (beyond the recommended cpucount*2). This is actually how Hyper-Threading works inside the CPU - using "idle" time during one operation to perform another.
Note that .NET has a built-in limit of 25 threads per process (i.e. for all WCF calls you receive simultaneously). This limit is independent of, and overrides, the ThreadPool setting. It can be increased, but it requires some magic: http://www.csharpfriends.com/Articles/getArticle.aspx?articleID=201
Following from my prior question (yep, should have been a Q against original message - apologies):
Why do you feel that creating 12 threads for each processor core in your machine will somehow speed up your server's ability to create worker threads? All you're doing is slowing your server down!
As per the MSDN docs: "You can use the SetMinThreads method to increase the minimum number of threads. However, unnecessarily increasing these values can cause performance problems. If too many tasks start at the same time, all of them might appear to be slow. In most cases, the thread pool will perform better with its own algorithm for allocating threads. Reducing the minimum to less than the number of processors can also hurt performance."
Issues like this are usually caused by bumping into limits or contention on a shared resource.
In your case, I am guessing that your last task(s) is/are blocking while they wait for a connection to the DB server to come available or for the DB to respond. Remember - if your invocation kicks off 5-6 other tasks then your machine is going to have to create and open numerous DB connections and is going to kick the DB with, potentially, a lot of work. If your WCF server and/or your DB server are cold, then your first few invocations are going to be slower until the machine's caches etc., are populated.
Have you tried adding a little tracing/logging, using a Stopwatch to time how long it takes your tasks to connect to the DB server and then execute their operations?
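Something along these lines would do it; TimedTask and the QueryDb1/QueryDb2 names in the usage comment are made up for illustration:

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    static class Timing
    {
        // Wrap each piece of DB work so the slow task identifies itself in the log.
        public static Task TimedTask(string name, Action dbWork)
        {
            return Task.Factory.StartNew(() =>
            {
                var sw = Stopwatch.StartNew();
                dbWork();
                Console.WriteLine("{0} took {1} ms", name, sw.ElapsedMilliseconds);
            });
        }
    }

    // Usage: Task.WaitAll(Timing.TimedTask("db1", QueryDb1), Timing.TimedTask("db2", QueryDb2));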
You may find that reducing the number of concurrent tasks you kick off actually speeds things up. Try spawning 3 tasks at a time, waiting for them to complete and then spawn the next 3.
When you call Task.Factory.StartNew, it uses a TaskScheduler to map those tasks into actual work items.
In your case, it sounds like one of your Tasks is delaying occasionally while the OS spins up a new Thread for the work item. You could, potentially, build a custom TaskScheduler which already contained six threads in a wait state, and explicitly used them for these six tasks. This would allow you to have complete control over how those initial tasks were created and started.
That being said, I suspect there is something else at play here... You mentioned that using TaskCreationOptions.LongRunning demonstrates the same behavior. This suggests that there is some other factor at play causing this half second delay. The reason I suspect this is due to the nature of TaskCreationOptions.LongRunning - when using the default TaskScheduler (LongRunning is a hint used by the TaskScheduler class), starting a task with TaskCreationOptions.LongRunning actually creates an entirely new (non-ThreadPool) thread for that Task. If creating 6 tasks, all with TaskCreationOptions.LongRunning, demonstrates the same behavior, you've pretty much guaranteed that the problem is NOT the default TaskScheduler, since this is going to always spin up 6 threads manually.
I'd recommend running your code through a performance profiler, and potentially the Concurrency Visualizer in VS 2010. This should help you determine exactly what is causing the half second delay.
What is the OS? If you are not running the server versions of windows, there is a connection limit. Your many threads are probably being serialized because of the connection limit.
Also, I have not used the task parallel library yet, but my limited experience is that new threads are cheap to make in the context of networking.
These articles might explain the problem you're having:
http://blogs.msdn.com/b/wenlong/archive/2010/02/11/why-are-wcf-responses-slow-and-setminthreads-does-not-work.aspx
http://blogs.msdn.com/b/wenlong/archive/2010/02/11/why-does-wcf-become-slow-after-being-idle-for-15-seconds.aspx
Seeing as you're using .NET 4, the first article probably doesn't apply, but as the second article points out, the ThreadPool terminates idle threads after 15 seconds, which might explain the problem you're having; it also offers a simple (though slightly hacky) way to work around it.
Whether or not you use the ThreadPool directly shouldn't make any difference, as I suspect the Task library is using it underneath anyway.
One third-party library we have been using for a while might help you here - Smart Thread Pool. You still get the same benefits as with the task libraries, in that you can get return values from the threads and any exception information from them too.
Also, you can instantiate thread pools, which is handy when multiple places each need their own pool (so that a low-priority process doesn't start eating into the quota of some high-priority process). And yes, you can set the priority of the threads in the pool too, which you can't do with the standard ThreadPool, where all the threads are background threads.
You can find plenty of info on the codeplex page, I've also got a post which highlights some of the key differences:
http://theburningmonk.com/2010/03/threading-introducing-smartthreadpool/
Just as a side note: for tasks like the one you've mentioned, which might take some time to return, you probably shouldn't be using the thread pool anyway. It's recommended that we avoid using the thread pool for blocking tasks like that, because it hogs threads that the framework classes use for all sorts of things - handling timer events, incoming WCF requests, and so on. I feel like I'm spamming here, but here's some of the info I've gathered on using the thread pool, with some useful links at the bottom:
http://theburningmonk.com/2010/03/threading-using-the-threadpool-vs-creating-your-own-threads/
Well, hope this helps!