Possible Duplicate:
Will Multi threading increase the speed of the calculation on Single Processor
Closed 12 years ago.
I am reposting my question about multithreading on a single-core processor. The original question is:
Will Multi threading increase the speed of the calculation on Single Processor
I have been asked a question: at any given time, only one thread is allowed to run on a single core. If so, why do people use multithreading in an application? Let's say you are running a console application; it is entirely possible to write the application to run on the main thread alone. But still people go for multithreading.
It may not be faster for "pure" CPU work, but many tasks involve things that are not on the CPU (e.g. accessing file systems, networks, interacting with the user, etc). Even on a single core system, using multiple threads allows you to have one thread waiting on a file system access, one waiting on a network operation, one waiting for the user to respond and so on.
So while using multiple threads won't make a CPU-intensive process faster, it can make your application more responsive (that is, it can respond to user interaction "faster" because it's not blocked waiting for a network operation to complete, say).
Note that technically asynchronous operations will be even faster than using multiple blocking threads (because you don't have the overhead of context switching), but the multiple blocking threads paradigm is usually simpler to understand than asynchronous programming.
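As a rough illustration (using Task.Delay to stand in for a network or file wait; the 100 ms figure is arbitrary), three asynchronous waits can overlap without costing one blocked thread each:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class OverlappedWaits
{
    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Three simulated I/O waits started together. While they are in
        // flight, no thread is blocked; await hands the thread back.
        await Task.WhenAll(Task.Delay(100), Task.Delay(100), Task.Delay(100));

        // Run sequentially these would take ~300 ms; overlapped, ~100 ms.
        Console.WriteLine($"overlapped: {sw.ElapsedMilliseconds < 250}");
    }
}
```

The same overlap is what lets a single-core machine keep a UI responsive while a slow operation is in progress.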
This question already has answers here:
What is the difference between task and thread? (8 answers)
Task vs Thread differences [duplicate] (3 answers)
Closed 4 years ago.
The following post leads one to believe that a .NET Task executes without native OS threads being involved. Is this true?
Difference between Task (System.Threading.Task) and Thread
EDIT
In reviewing the duplicate questions, I couldn't find an answer directly addressing whether instantiating a .NET Task will ultimately execute on a native OS thread. They refer to threads, but either don't distinguish between managed and native OS threads, or discuss only managed threads. The only thing that could be a duplicate is my own answer to one of those questions.
But from my own digging, it seems there is no "magic" in .NET that avoids native OS threads. There are no changes to the Windows kernel to allow this, which is consistent with my own OS experience from a couple of decades ago. In short, there is no application code anyone can write that does not run on a native Windows OS thread.
About Processes and Threads
Managed Threading
Also:
Windows Kernel Internals Process Architecture
Architecture of the Windows Kernel
Evolution of the Windows Kernel Architecture
The answer is: it depends.
Tasks that involve computational work run on a thread, normally one from the thread pool.
Long-running tasks, i.e. those created with TaskCreationOptions.LongRunning, run on a dedicated thread that is created for them.
I/O tasks, like await stream.ReadAsync(), do not have a thread at all. The operation is sent to the I/O device, and the CPU is free to do whatever it pleases. Only when the device is ready with the requested data does it interrupt the CPU; some low-level processing happens, and ultimately the OS gets a thread from the thread pool to complete the task and make the result available to your program. More details here.
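A small sketch of the first two cases (the exact behavior is scheduler-dependent, but these defaults hold for the standard TaskScheduler):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskThreads
{
    static void Main()
    {
        // A plain Task.Run lands on a thread-pool thread.
        bool onPool = Task.Run(() => Thread.CurrentThread.IsThreadPoolThread).Result;
        Console.WriteLine($"Task.Run uses a pool thread: {onPool}");

        // LongRunning hints that the work will hog its thread, so the
        // default scheduler gives it a dedicated (non-pool) thread instead.
        bool dedicated = Task.Factory.StartNew(
            () => !Thread.CurrentThread.IsThreadPoolThread,
            TaskCreationOptions.LongRunning).Result;
        Console.WriteLine($"LongRunning uses a dedicated thread: {dedicated}");
    }
}
```

Either way, both tasks ultimately execute on a native OS thread; only the awaited I/O case has no thread during the wait itself.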
Closed 7 years ago. This question needs to be more focused. It is not currently accepting answers.
When creating a MultiClient Server in C#, I can think of several ways to divide the work between the threads.
For this question, assume that the server accepts incoming TCP connections from clients, and each client sends a file to the server to store on the hard drive.
Work Division 1:
Thread per Client:
The server will instantiate a new thread for each new client that connects, and that thread will be responsible for that client.
(Those threads are in addition to one "server thread".)
Work Division 2:
Thread per Resource:
There will be one thread for handling the communication, and one thread for writing to the hard drive.
A client object will be passed between these resource-responsible threads, and each resource-responsible thread will have its own queue so it knows what it should do.
(Again, those threads are in addition to one "server thread".)
Work Division 3:
Thread per Sub-Task of the Main Task:
Let's call the work that we need to do for each connecting client "the main task".
We'll break this main task into several sub-tasks and create a thread for each sub-task; again, each thread will have a queue that holds the client items it should process.
(This sounds similar to division 2, but in another project with different work, rather than a file-receiving server, this type of division might be quite different from division 2.)
My question is:
Are there other ways that are recommended for dividing the work between the threads?
The answer is that you do not work with threads at all unless you have few connecting clients. The reason is that threading comes with overhead, and the threads will be idle a large part of their time since you work with slow resources (I/O).
Instead you should look at asynchronous programming. In dotnet you have three models:
APM (Asynchronous programming model)
Event-based Asynchronous Pattern (EAP)
Task-based Asynchronous Pattern (TAP)
https://msdn.microsoft.com/en-us/library/jj152938(v=vs.110).aspx
APM is the oldest one. I only recommend it if your dotnet version doesn't support EAP or TAP. But in your case, you need to use APM (.NET 2.0), and you can read more about it here: https://msdn.microsoft.com/en-us/library/ms228963(v=vs.110).aspx
When you're using async programming you do not have to worry about threads anymore. .NET and the OS will manage the threads. Your application will "awake" when something has completed in the I/O operations that you've ordered (like sending something over a socket or reading from a database).
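A minimal TAP sketch of the file-receiving server. To keep it self-contained it accepts one loopback client in-process and writes the bytes to disk; port 0 asks the OS for a free port, and "upload.bin" is an arbitrary file name:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncFileServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0); // port 0 = any free port
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Server side: accept a client and stream its bytes to disk.
        // No thread is tied up during the network or disk waits.
        Task server = Task.Run(async () =>
        {
            using (TcpClient client = await listener.AcceptTcpClientAsync())
            using (NetworkStream net = client.GetStream())
            using (FileStream file = File.Create("upload.bin"))
            {
                await net.CopyToAsync(file);
            }
        });

        // Client side: connect, send a small payload, disconnect.
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(IPAddress.Loopback, port);
            byte[] payload = { 1, 2, 3, 4 };
            await client.GetStream().WriteAsync(payload, 0, payload.Length);
        }

        await server;
        Console.WriteLine($"stored {new FileInfo("upload.bin").Length} bytes");
    }
}
```

A real server would loop on AcceptTcpClientAsync and handle each client as a separate task; note there is no per-client thread anywhere.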
I would like to implement whichever solution is chosen using the Thread class, and not using other classes/tools from the .NET framework.
Only #1 is viable. Writing to disk will be faster than receiving the file over the network. So there is really no reason to let the resources own the threads.
Using #1 will also reduce the complexity and make the code easier to read. You will still need a service to make sure that two clients don't work with the same file.
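A sketch of division #1 with only the Thread class, as requested. To keep it self-contained it serves a single loopback client so the program terminates; a real server would loop on AcceptTcpClient, and "received.bin" is an arbitrary file name:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ThreadPerClientServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0); // port 0 = any free port
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // The "server thread": accepts a client, then hands it off
        // to its own dedicated worker thread.
        var serverThread = new Thread(() =>
        {
            TcpClient client = listener.AcceptTcpClient();
            var worker = new Thread(() =>
            {
                using (client)
                using (NetworkStream net = client.GetStream())
                using (FileStream file = File.Create("received.bin"))
                {
                    net.CopyTo(file); // blocking receive-and-store loop, one per client
                }
            });
            worker.Start();
            worker.Join();
        });
        serverThread.Start();

        // A test client sends 4 bytes and disconnects.
        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, port);
            client.GetStream().Write(new byte[] { 1, 2, 3, 4 }, 0, 4);
        }
        serverThread.Join();
        Console.WriteLine($"received {new FileInfo("received.bin").Length} bytes");
    }
}
```

Note how each worker thread spends most of its life blocked in CopyTo, which is exactly the idle-thread cost the asynchronous answer above avoids.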
Closed 8 years ago. This question is opinion-based. It is not currently accepting answers.
Joe Albahari provides a great explanation of the .NET Thread Pool's automatic thread management, and why it works the way it does, in his Threading in C# e-book.
From what I understand, by default, after occupying all of a processor's cores, the thread pool delays the creation of new threads, because if all processor cores are busy doing computations, creating new threads can no longer improve the overall throughput (tasks completed per second) of the application, and new threads are just a waste of system resources.
However, if a task sits in the thread pool queue for too long, the thread pool assumes that a pooled thread is idling or blocked in some way, and tries to take advantage of the downtime by running the task concurrently.
Rather than this "delay" algorithm, wouldn't it make more sense, in many situations, to employ a technique whereby thread pool threads have a special property that signals a "waiting" state? It might look something like this:
System.Threading.Thread.CurrentThread.IsWaiting = true;
The thread pool would create new threads instantly for queued tasks until all processor cores are occupied with non-waiting threads. Then, tasks are held in the queue until a thread either finishes, OR signals a waiting state.
This would have a couple of benefits. First, if a processor core is idle, tasks are always started the instant they are queued to the pool, with no delay. Second, in an application that runs a lot of computationally intensive tasks that take more than half a second to complete, the thread pool won't continue to burden the system with unnecessary extra threads.
Of course, there may be some situations in which an application needs to finish tasks within a strict deadline, and can't wait for other tasks to finish first. This algorithm may not work for those applications. Otherwise, I imagine that it will only improve efficiency of multithreaded applications.
What do you think?
We have this information available in the Thread.ThreadState property. But it would not be a good idea for a thread pool to use this information. To use it, we would need communication between threads (the ones in the thread pool, and another gathering the information). That would mean some need for synchronization, or at least volatile access. Both are really expensive. Thus, we would impose a runtime burden on all applications using the ThreadPool, whereas only a few would benefit.
As a programmer, you have to consider how your thread pool is used. If the standard behavior is not suitable for you, you can tweak the pool, e.g. using ThreadPool.SetMinThreads if you know you have a lot of waiting threads. It would not be as automatic as you wish. But your automation would also not be perfect, since we could easily end up with too many threads running when some of the waiting threads wake up simultaneously.
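For instance (the value 100 is an arbitrary floor; a real choice depends on how many threads you expect to sit blocked):

```csharp
using System;
using System.Threading;

class PoolTuning
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int worker, out int io);
        Console.WriteLine($"default minimum worker threads: {worker}");

        // Raise the floor so the pool injects threads immediately, instead
        // of ramping up slowly while pooled threads sit blocked on I/O.
        bool ok = ThreadPool.SetMinThreads(100, io);
        ThreadPool.GetMinThreads(out worker, out io);
        Console.WriteLine($"set ok: {ok}, new minimum worker threads: {worker}");
    }
}
```

Below this minimum, queued work gets a new thread on demand; only above it does the pool's delay heuristic kick in.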
Note that other thread pools do not have this quite clever expansion heuristic, which is built into the C# variant, at all. Normally you have a fixed number of running threads, and you will never have more than this number running.
Closed 8 years ago. This question needs to be more focused. It is not currently accepting answers.
I have a process that connects to a host and infinitely checks if there is new data to process.
My application has close to 500 threads, and each thread runs in an infinite loop.
Here's the code :
for (int i = 1; i <= 500; i++)
{
    Thread instanceCaller = new Thread(new ThreadStart(infiniteProcess));
    instanceCaller.Start();
}
Is there a better way to write this code using C# async? Also, will there be any performance improvement if we use async instead of ThreadStart?
I want to clarify why I would like to create 500 threads and having a thread pool doesn't work for me.
Basically, each of these threads opens a dedicated socket connection to the host. The host sends a message to this socket, which is then routed to the appropriate destination (configured in the DB). The destination could be a hardware device (a printer, etc.) or some other device.
We cannot use a thread pool because each of these threads is very active, continuously receiving messages from the host and processing them. The overhead of loading and unloading threads from the thread pool would be inefficient.
My original application, created using threads, works well. But I would like to see if there is any way we can improve the efficiency by taking advantage of new features in C# 5.0.
When you get into the hundreds of threads you should consider replacing blocking by asynchronous IO and asynchronous waiting. Many threads cause high memory usage and OS scheduling overhead.
Try to remove the calls that block the longest first. Apply the 80-20 principle. You don't have to go all async.
That said, the overheads associated with this many threads are generally overestimated. Your code will not suddenly become 2x faster if you go all async. Async really only changes the way an operation is started and ended. The operation itself (the I/O or the wait) is not accelerated at all.
Also, async does not add capacity. Your CPUs don't become faster, the database cannot handle more operations and the network has a fixed throughput as well. Async IO is really about saving memory and scheduling overhead.
You will not hit OS limits with 500 threads. I recently created 100,000 threads on my machine using testlimits.exe without much trouble. The limits are really high.
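To make the memory and scheduling point concrete, here is a sketch in which 500 concurrent loops share a handful of pool threads (Task.Delay stands in for the blocking socket read; the counts of 500 loops and 3 rounds are arbitrary):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class AsyncCallers
{
    static void Main()
    {
        var threadIds = new ConcurrentDictionary<int, bool>();

        // 500 concurrent "connection loops". During each await the thread
        // is returned to the pool instead of sitting blocked.
        Task[] loops = Enumerable.Range(0, 500).Select(_ => Task.Run(async () =>
        {
            for (int round = 0; round < 3; round++)
            {
                await Task.Delay(20); // simulated wait for data from the host
                threadIds.TryAdd(Environment.CurrentManagedThreadId, true);
            }
        })).ToArray();

        Task.WaitAll(loops);

        // All 500 loops ran, but only a handful of pool threads were needed.
        Console.WriteLine(
            $"loops completed: {loops.Length}, " +
            $"ran on fewer than 500 threads: {threadIds.Count < 500}");
    }
}
```

The waits themselves are no faster than in the thread version; what disappears is the 500 blocked stacks and their context switches.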
Closed 8 years ago. This question does not meet Stack Overflow guidelines: questions concerning problems with code you've written must describe the specific problem, and include valid code to reproduce it, in the question itself. See SSCCE.org for guidance. It is not currently accepting answers.
I am developing a WPF program which performs long running tasks in the background, using TPL's Parallel.ForEach.
I have an image control, and when I am trying to show an image, it seems that the image rendering happens on the thread pool; since I use the pool intensively (via TPL's Parallel.ForEach), the rendering is extremely slow.
Is there a way to perform the image rendering at a higher priority, or outside the thread pool?
Parallel.ForEach is designed to use as much of your CPU as it can get hold of. If you don't want it to do that, then either don't use it (just run the long-running task on a single thread), or control the amount of CPU that Parallel.ForEach uses by passing in a ParallelOptions object to limit its appetite for CPU.
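A small sketch of that cap (the limit of 2 and the Sleep are arbitrary stand-ins for your long-running bodies):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class LimitedParallel
{
    static void Main()
    {
        int current = 0, peak = 0;
        var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };

        Parallel.ForEach(Enumerable.Range(0, 20), options, _ =>
        {
            // Track the highest number of bodies running at once,
            // using a lock-free compare-and-swap update of `peak`.
            int now = Interlocked.Increment(ref current);
            int seen;
            do { seen = peak; } while (now > seen &&
                Interlocked.CompareExchange(ref peak, now, seen) != seen);

            Thread.Sleep(10); // simulated work
            Interlocked.Decrement(ref current);
        });

        Console.WriteLine($"peak concurrency within limit of 2: {peak <= 2}");
    }
}
```

With the degree of parallelism capped below the core count, the remaining cores stay free for things like WPF's rendering work.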
You also say you're running "long running" tasks using Parallel.ForEach. Bear in mind that it's designed for getting CPU-bound tasks finished quickly, not for backgrounding things which are long-running because they're waiting for I/O (for example). From the symptoms, though, it does sound like you're using it for the right thing; it's just that it's using more CPU than you want.
As far as trying to avoid the thread pool goes, I think you're barking up the wrong tree. The pool is just a collection of threads which already exist, designed to avoid the overhead of creating and destroying threads all the time; it's not a priority mechanism. What your two activities are fighting over is access to the CPU, not access to the thread pool.