A CPU-bound problem is one whose completion time is limited by how fast the CPU can do the calculations.
An I/O-bound problem is one that spends most of its time waiting on the network, disk, or other input.
A single API request is I/O-bound.
The question is: when I make 100 API requests using a for loop, are those requests I/O-bound, CPU-bound, or both?
Generally, for I/O-bound work we use multithreading, or on a single thread we can use async/await; whereas for a CPU-bound process we use parallel programming, multiprocessing, or async/await with Task.Run.
For my example of 100 API requests in a for loop, is async/await better than multithreading, async/await + Task.Run, or the TPL?
If you have 100 I/O-bound operations, then the 100 operations as a whole are still I/O-bound.
CPU-bound is reserved for things that take a non-trivial amount of CPU time. Yes, technically incrementing a counter and starting the next I/O operation does execute CPU opcodes, but the loop would not be considered "CPU-bound" because the amount of time spent doing I/O is vastly higher than the amount of time doing CPU work.
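To make that concrete, here is a minimal sketch of issuing the 100 requests with async/await and HttpClient (the https://example.com URL and item numbering are placeholders, not from the question); no Task.Run is needed, because the CPU work of starting each request is trivial and the elapsed time is dominated by waiting on the network:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ApiClient
{
    private static readonly HttpClient http = new HttpClient();

    static async Task Main()
    {
        // Hypothetical endpoint; the point is that all 100 requests are
        // started without blocking a thread while the network does the work.
        var urls = Enumerable.Range(1, 100)
            .Select(i => $"https://example.com/api/items/{i}");

        // Start all requests, then await them together; the CPU work here
        // (building URLs, starting requests) is negligible next to the I/O.
        Task<string>[] pending = urls.Select(u => http.GetStringAsync(u)).ToArray();
        string[] responses = await Task.WhenAll(pending);

        Console.WriteLine($"Received {responses.Length} responses");
    }
}

In practice you may also want to cap how many requests are in flight at once (for example with a SemaphoreSlim), but the work remains I/O-bound either way.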
Reading about how the OS handles executables, I couldn't figure out whether it is better to use one executable with many threads inside, or many independent executables. The same task is performed either way, but I need to process many requests. Is there a per-executable limit on simultaneous threads? Does it even matter, or does every OS task go into the same CPU queue regardless of which executable it came from?
Which is better: one executable with many threads, or many executables? If you could give an explanation or share some documentation, I would be grateful.
As far as the OS scheduling them to CPUs, there's no difference; modern mainstream OSes (Windows / Linux / MacOS) use a 1:1 thread model so every user-space thread is a separately schedulable OS task, not "green threads".
Solaris did or does have an N:M thread model, where user-space threads can be scheduled onto a "thread pool" of OS-level threads, with user-space context switching in some cases, but most other OSes don't.
So either way can take full advantage of all the CPU cores in the system; which is better depends on the use-case. Threads of a single process share memory (and file descriptors) with each other, and are cheaper to create than new processes. But still not that cheap; often you want to have a pool of worker threads that you wake up or queue work for, not start a new thread or process when a new request comes in.
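As a rough illustration of that last point, here is a minimal sketch (the request strings are made up for the demo) of a pool of worker threads sharing one in-process queue, so incoming requests are handled without spawning a new thread or process each time:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class WorkerPoolSketch
{
    // Incoming requests are queued here instead of starting a new thread/process per request.
    static readonly BlockingCollection<string> queue = new BlockingCollection<string>();

    static void Main()
    {
        // One worker per core; the threads are created once and then reused.
        var workers = new List<Thread>();
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            var worker = new Thread(() =>
            {
                foreach (var request in queue.GetConsumingEnumerable())
                    Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId} handled {request}");
            });
            workers.Add(worker);
            worker.Start();
        }

        for (int r = 1; r <= 20; r++)
            queue.Add($"request {r}");   // simulate incoming requests

        queue.CompleteAdding();          // no more work; workers exit their loops
        workers.ForEach(w => w.Join());
    }
}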
My C# brute-force program uses 20% of the CPU at runtime. I'm only using one BackgroundWorker for it. If I split the task into several parts and ran them on 7-8 BackgroundWorkers, would that be faster and use more of the CPU?
The short answer is maybe.
The long answer is that it depends upon multiple factors:
Is your task CPU-bound? If it's not the CPU holding up the task, then multithreading probably won't help.
How many cores does your processor have, and does it have hyperthreading enabled? If it only provides one hardware thread, trying to multithread will actually slow things down; if it has more, you can use as many threads in your program as the OS makes available. (I suggest using Environment.ProcessorCount to determine the number of threads you start.)
How much cross-thread synchronisation has to occur? If your threads spend ages waiting to write into locked shared variables, or passing data between each other, that will likely slow your application down.
My main suggestion would be to test it!
You can time the execution of a segment of code fairly accurately using the Stopwatch class.
Finally, you might want to try using a Thread[] rather than BackgroundWorkers; in general these have lower overheads, so they are slightly quicker.
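For example, a minimal sketch of splitting the search across Environment.ProcessorCount plain threads and timing the whole run with Stopwatch might look like this (the SearchSlice body is a placeholder, not your actual brute-force logic):

using System;
using System.Diagnostics;
using System.Threading;

class BruteForceTiming
{
    // Placeholder for one slice of the search space; the real brute-force
    // candidate testing from the question would go here.
    static void SearchSlice(int sliceIndex, int sliceCount)
    {
        for (long i = sliceIndex; i < 50_000_000; i += sliceCount) { /* test candidate i */ }
    }

    static void Main()
    {
        int threads = Environment.ProcessorCount;   // one worker per core
        var sw = Stopwatch.StartNew();

        var workers = new Thread[threads];
        for (int t = 0; t < threads; t++)
        {
            int slice = t;                           // capture the loop variable
            workers[t] = new Thread(() => SearchSlice(slice, threads));
            workers[t].Start();
        }
        foreach (var w in workers) w.Join();

        sw.Stop();
        Console.WriteLine($"{threads} threads took {sw.Elapsed}");
    }
}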
I tested the word "stack", once with only 1 BackgroundWorker and a second time with 5. BackgroundWorker 1 searches for a word of length 1, BackgroundWorker 2 for a word of length 2, and so on.
Word: stack
BackgroundWorkers    Time
1                    1 min 8 sec
5                    35 sec
My question is the following: is it better to use more timers with fewer tasks each, or to define fewer timers with more tasks each? Which solution gives better performance?
Thank you!
In your specific scenario, you need to consider two things:
1. Each timer callback will run on a different thread.
2. Do you need more threads than tasks, or not?
Best practices can be as follows:
1. Use the Quartz scheduler, so that you don't need to set the frequency of each timer separately.
2. Define your tasks as jobs and schedule them using cron expressions (see the sketch after this list).
3. Use the TPL for async operations. The TPL will automatically create as many threads as you need (if your task is heavy). You can also use async/await to move a task onto a separate thread without blocking your main thread.
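As a rough sketch of points 1 and 2, assuming the Quartz.NET 3.x API and a made-up PollJob, one job plus one cron trigger replaces a dedicated timer for that task:

using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// One job class per kind of work; Quartz decides when to run it.
public class PollJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"Polling at {DateTime.Now:T}");
        return Task.CompletedTask;
    }
}

public class Program
{
    public static async Task Main()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<PollJob>()
            .WithIdentity("pollJob")
            .Build();

        // Cron expression: every 10 seconds, instead of configuring a
        // dedicated timer for this one task.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("pollTrigger")
            .WithCronSchedule("0/10 * * * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
        await Task.Delay(TimeSpan.FromMinutes(1));   // keep the demo alive
        await scheduler.Shutdown();
    }
}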
I have a process that connects to a host and checks indefinitely whether there is new data to process.
My application has close to 500 threads, and each thread runs in an infinite loop.
Here's the code:
for (int i = 1; i <= 500; i++)
{
    Thread instanceCaller = new Thread(new ThreadStart(infiniteProcess));
    instanceCaller.Start();
}
Is there a better way to write this code using C# async? Also, will there be any performance improvement if we use async instead of ThreadStart?
I want to clarify why I create 500 threads and why a thread pool doesn't work for me.
Basically, each of these threads opens a dedicated socket connection to the host. The host sends a message to that socket, which is then routed to the appropriate destination (configured in the DB). The destination could be a hardware device (a printer, etc.) or some other device.
We cannot use a thread pool because each of these threads is very active and continuously receives messages from the host and processes them. The overhead of loading and unloading threads from the thread pool would be inefficient.
My original application created using threads works well, but I would like to see if there is any way to improve its efficiency by taking advantage of the new features in C# 5.0.
When you get into the hundreds of threads you should consider replacing blocking by asynchronous IO and asynchronous waiting. Many threads cause high memory usage and OS scheduling overhead.
Try to remove the calls that block the longest first. Apply the 80-20 principle. You don't have to go all async.
That said, the overheads associated with this many threads are generally overestimated. Your code will not suddenly become 2x faster if you go all async. Async really only changes the way an operation is started and ended. The operation itself (the IO or the wait) is not accelerated at all.
Also, async does not add capacity. Your CPUs don't become faster, the database cannot handle more operations and the network has a fixed throughput as well. Async IO is really about saving memory and scheduling overhead.
You will not hit OS limits with 500 threads. I recently created 100,000 threads on my machine using testlimits.exe without much trouble. The limits are really high.
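If you do want to try the async route, here is a minimal sketch of what each connection could look like with async socket I/O; the host name, port, InfiniteProcessAsync name, and Route helper are all made up for illustration, not taken from the question:

using System;
using System.Linq;
using System.Net.Sockets;
using System.Threading.Tasks;

class Listener
{
    // Hypothetical async replacement for infiniteProcess: one task per
    // connection, with no dedicated thread blocked on the socket.
    static async Task InfiniteProcessAsync(string host, int port)
    {
        using var client = new TcpClient();
        await client.ConnectAsync(host, port);
        var stream = client.GetStream();
        var buffer = new byte[4096];

        while (true)
        {
            // Awaiting the read releases the thread back to the pool
            // until data actually arrives from the host.
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            if (read == 0) break;          // host closed the connection
            Route(buffer, read);           // forward to the configured destination
        }
    }

    static void Route(byte[] data, int length) { /* deliver to printer, device, ... */ }

    static async Task Main()
    {
        // 500 logical "connections" now share a small pool of threads.
        var tasks = Enumerable.Range(1, 500)
            .Select(_ => InfiniteProcessAsync("host.example", 9000));
        await Task.WhenAll(tasks);
    }
}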
I am just finding my way around parallel programming in C# and have understood the significance of cores for true parallel programming.
But I still have a question:
Say I have a long-running task. Does that mean it will be executed on threads from the thread pool, and on different cores, for true parallel execution?
Or does it depend on the actual delegate that is passed to the task?
I hope my question is clear.
The delegate itself makes no difference. It is the TaskScheduler that matters.
The default TaskScheduler will run tasks via the ThreadPool. To have them run on the current synchronization context instead (for example the UI thread), you would pass in a TaskScheduler tied to that context, such as the one returned by the static TaskScheduler.FromCurrentSynchronizationContext() method.
True parallelism requires multiple cores, since threads must execute on separate cores to truly run in parallel. On a single-core system you can only achieve apparent parallelism, because the different threads must share the one core through allocated time slots: the other threads wait while the current thread runs.
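To illustrate, here is a small made-up sketch (not from the question) showing that a delegate goes to the default scheduler's ThreadPool threads, and that TaskCreationOptions.LongRunning hints that a lengthy task should get a dedicated thread instead:

using System;
using System.Threading;
using System.Threading.Tasks;

class SchedulerDemo
{
    static void Main()
    {
        // Default scheduler: the delegate runs on a ThreadPool thread, so on a
        // multi-core machine several such tasks can run in parallel.
        Task onPool = Task.Factory.StartNew(
            () => Console.WriteLine($"Pool thread: {Thread.CurrentThread.ManagedThreadId}"));

        // LongRunning hints that the work is lengthy, so the default scheduler
        // typically gives it a dedicated thread rather than tying up the pool.
        Task longRunning = Task.Factory.StartNew(
            () => Console.WriteLine($"Long-running thread: {Thread.CurrentThread.ManagedThreadId}"),
            TaskCreationOptions.LongRunning);

        Task.WaitAll(onPool, longRunning);
    }
}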