Differences between using one executable with threads or many executables [closed] - c#

Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 months ago.
Reading about how the OS handles executables, I couldn't figure out whether it is better to use one executable with many threads inside, or many independent executables. The same task is performed either way, but I need to process many requests. Is there a per-executable limit on running simultaneous threads? Does it matter, or does every OS task go into the same CPU queue regardless of which executable it came from?
Which is better: one executable with many threads, or many executables? If you could give an explanation or share some documentation, I would be grateful.

As far as the OS scheduling them to CPUs, there's no difference; modern mainstream OSes (Windows / Linux / MacOS) use a 1:1 thread model so every user-space thread is a separately schedulable OS task, not "green threads".
Solaris did or does have an N:M thread model, where user-space threads can be scheduled onto a "thread pool" of OS-level threads, with user-space context switching in some cases, but most other OSes don't.
So either way can take full advantage of all the CPU cores in the system; which is better depends on the use case. Threads of a single process share memory (and file descriptors) with each other, and are cheaper to create than new processes. But still not that cheap; often you want a pool of worker threads that you wake up or queue work for, rather than starting a new thread or process every time a request comes in.
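A minimal sketch of that worker-pool pattern in C#, assuming a simple string "request" type; the queue and the handler body are illustrative stand-ins, not anything from the question:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class WorkerPoolSketch
{
    static void Main()
    {
        // A blocking queue that worker threads sleep on until work arrives.
        var requests = new BlockingCollection<string>();

        // Start one worker per core once, instead of one thread per request.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                // GetConsumingEnumerable blocks until an item is queued,
                // and the loop ends once CompleteAdding has been called
                // and the queue is drained.
                foreach (var request in requests.GetConsumingEnumerable())
                    Console.WriteLine(
                        $"Handled {request} on thread {Environment.CurrentManagedThreadId}");
            }) { IsBackground = true }.Start();
        }

        for (int i = 0; i < 5; i++)
            requests.Add($"request-{i}");

        requests.CompleteAdding();
        Thread.Sleep(500); // crude wait so the workers can drain the queue
    }
}
```

The point of the pattern is that thread creation cost is paid once at startup; each incoming request only pays the cost of an enqueue and a wakeup.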

Related

Is opening a thread in C# related to a CPU thread? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
I am recording from multiple USB cameras using a 3rd-party library. For that, I record each camera's data on a separate thread in C#. The problem is that the application sometimes fails to fetch all the data.
Therefore I wonder whether opening C# threads might block my CPU threads, as my CPU is 4 cores / 4 threads. Are CPU cores/threads related to the threads we initialize in C#?
Well, it depends on how you're going to accomplish this task. Recording camera video probably comes as a feature of some 3rd-party library, and that library's API may already need your UI (main) thread to do its work. If you're implementing your own low-level recording API and want to receive data from it, you may want to run the data fetching on a separate thread, simply using:
Task.Run(() =>
{
    // runs on a thread-pool thread: your data-fetching code here
});
This way, your main thread won't be blocked, and awaiting the returned task will yield the results from your camera API.
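For completeness, a hedged sketch of what awaiting that work looks like end to end; FetchFrameAsync and the byte-array result are hypothetical stand-ins for whatever the camera library actually returns:

```csharp
using System;
using System.Threading.Tasks;

class CameraSketch
{
    // Hypothetical wrapper: runs a blocking fetch on a pool thread
    // so the caller's thread stays free.
    static Task<byte[]> FetchFrameAsync()
    {
        return Task.Run(() =>
        {
            // the blocking call into the camera library would go here
            return new byte[] { 1, 2, 3 };
        });
    }

    static async Task Main()
    {
        // The main/UI thread is released during this await.
        byte[] frame = await FetchFrameAsync();
        Console.WriteLine($"Got {frame.Length} bytes");
    }
}
```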
That totally depends on the way you are using the thread. I can think of at least three different scenarios:
1. Your camera thread is defined as high priority and, even with time slicing, takes 99% of the CPU time.
2. Your camera thread runs on the task thread pool, and so it blocks, and is blocked by, other threads at random (resource contention).
3. Your camera recording happens at low priority in the background.

Brute force on several BackgroundWorkers faster? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
My C# brute-force program uses 20% of the CPU at runtime. I'm only using 1 BackgroundWorker for it. If I spread this task into several parts across 7-8 BackgroundWorkers, would that be faster and use more CPU?
The short answer is maybe.
The long answer is that it depends upon multiple factors:
Is your task CPU-bound? If it's not the CPU holding up the task, then multithreading probably won't help.
How many cores does your processor have, and does it have hyperthreading enabled? If it only has one hardware thread, trying to multithread will actually slow the task down; if it has more, you can use as many threads in your program as the OS makes available. (I suggest you use the Environment.ProcessorCount value to determine the number of threads to start.)
How much cross-thread synchronisation will have to occur? If your threads spend ages waiting to write into locked shared variables, or passing data between each other, that will likely slow your application down.
My main suggestion would be to test it!
You can easily time the execution time of a segment of code fairly accurately using the intuitive Stopwatch class.
Finally, you might want to try using a Thread[] rather than BackgroundWorkers; in general, these have lower overhead and so are slightly quicker.
I tested the word "stack": once with only 1 BackgroundWorker, and a second time with 5. BackgroundWorker 1 searches for a word of length 1, BackgroundWorker 2 for a word of length 2, and so on.
Word: stack

BackgroundWorkers   Time
1                   1 min 8 sec
5                   35 sec

Why does the Thread Pool manage threads this way? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
Joe Albahari provides a great explanation of the .NET Thread Pool's automatic thread management, and why it works the way it does, in his Threading in C# e-book.
From what I understand, by default, after occupying all of a processor's cores, the thread pool delays the creation of new threads, because if all processor cores are busy doing computations, creating new threads can no longer improve the overall throughput (tasks completed per second) of the application, and new threads are just a waste of system resources.
However, if a task sits in the thread pool queue for too long, the thread pool assumes that a pooled thread is idling or blocked in some way, and tries to take advantage of the downtime by running the task concurrently.
Rather than this "delay" algorithm, wouldn't it make more sense, in many situations, to employ a technique whereby thread pool threads have a special property that signals a "waiting" state? It might look something like this:
System.Threading.Thread.CurrentThread.IsWaiting = true;
The thread pool would create new threads instantly for queued tasks until all processor cores are occupied with non-waiting threads. Then, tasks are held in the queue until a thread either finishes, OR signals a waiting state.
This would have a couple of benefits. First, if a processor core is idle, tasks are started the instant they are queued to the pool, with no delay. Second, in an application that runs a lot of computationally intensive tasks taking more than half a second to complete, the thread pool won't continue to burden the system with unnecessary extra threads.
Of course, there may be some situations in which an application needs to finish tasks within a strict deadline, and can't wait for other tasks to finish first. This algorithm may not work for those applications. Otherwise, I imagine that it will only improve efficiency of multithreaded applications.
What do you think?
We have this information available in the Thread.ThreadState property. But it would not be a good idea for a thread pool to use it. To use it, we would need communication between threads (the ones in the thread pool, and another one gathering the information). That would mean some synchronization, or at least volatile access. Both are really expensive. Thus, we would impose a runtime burden on all applications using the ThreadPool, whereas only a few would benefit.
As a programmer, you have to consider how your thread pool is used. If the standard behavior is not suitable for you, you can tweak the pool, e.g. using ThreadPool.SetMinThreads if you know you have a lot of waiting threads. It would not be as automatic as you wish. But your automation would also not be perfect, since we could easily end up with too many threads running when some of the waiting threads wake up simultaneously.
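A sketch of that tweak; the value 32 is purely illustrative, and you would pick it based on how many of your work items actually spend their time blocked:

```csharp
using System;
using System.Threading;

class MinThreadsSketch
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int worker, out int io);
        Console.WriteLine($"Default minimums: worker={worker}, io={io}");

        // If many queued work items spend most of their time waiting,
        // raising the minimum lets the pool inject threads immediately
        // instead of applying its usual thread-creation delay.
        ThreadPool.SetMinThreads(32, io);

        ThreadPool.GetMinThreads(out worker, out io);
        Console.WriteLine($"New minimums: worker={worker}, io={io}");
    }
}
```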
Note that other thread pools do not have this rather clever growth heuristic at all, which is built into the .NET variant. Normally, you have a fixed number of running threads, and you will never have more than that number running.

C# async for infinite loops [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
I have a process that connects to a host and infinitely checks if there is new data to process.
My application has close to 500 threads, and each thread runs in an infinite loop.
Here's the code :
for (int i = 1; i <= 500; i++)
{
    Thread instanceCaller = new Thread(new ThreadStart(infiniteProcess));
    instanceCaller.Start();
}
Is there a better way to write this code using C# async? Also, will there be any performance improvements if we use async instead of ThreadStart?
I want to clarify why I would like to create 500 threads, and why a thread pool doesn't work for me.
Basically, each of these threads opens a dedicated socket connection to the host. The host sends a message to this socket, which is then routed to the appropriate destination (configured in the DB). The destination could be a hardware device (a printer, etc.) or some other device.
We cannot use a thread pool because each of these threads is very active, continuously receiving messages from the host and processing them. The overhead of loading and unloading threads from the thread pool is inefficient.
My original application created using threads works well, but I would like to see if there is any way to improve the efficiency by taking advantage of new features in C# 5.0.
When you get into the hundreds of threads you should consider replacing blocking by asynchronous IO and asynchronous waiting. Many threads cause high memory usage and OS scheduling overhead.
Try to remove the calls that block the longest first. Apply the 80-20 principle. You don't have to go all async.
That said, the overheads associated with this many threads are generally overestimated. Your code will not suddenly become 2x faster if you go all async. Async really only changes the way an operation is started and ended; the operation itself (the IO or the wait) is not accelerated at all.
Also, async does not add capacity. Your CPUs don't become faster, the database cannot handle more operations and the network has a fixed throughput as well. Async IO is really about saving memory and scheduling overhead.
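A hedged sketch of what one of those 500 loops might look like with async IO instead of a dedicated thread; the host/port, the buffer size, and the Route method are placeholders for the asker's DB-configured routing, not real details from the question:

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncLoopSketch
{
    // One connection's infinite receive loop. Hundreds of these can
    // share a handful of pool threads instead of needing one thread each.
    static async Task ReceiveLoopAsync(string host, int port)
    {
        using var client = new TcpClient();
        await client.ConnectAsync(host, port);
        NetworkStream stream = client.GetStream();
        var buffer = new byte[4096];

        while (true)
        {
            // The thread is released back to the pool during this await;
            // no thread is blocked while waiting for the host to send.
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            if (read == 0)
                break; // host closed the connection
            Route(buffer, read); // placeholder for the routing step
        }
    }

    static void Route(byte[] data, int length)
    {
        // send to the configured destination (printer, device, ...)
    }
}
```

Starting all 500 loops is then just calling ReceiveLoopAsync 500 times and keeping the returned tasks, rather than creating 500 Thread objects; the memory saving per connection is roughly the thread's stack.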
You will not hit OS limits with 500 threads. I recently created 100,000 threads on my machine using testlimits.exe without much trouble. The limits are really high.

Slow image rendering while performing long task on TPL [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 8 years ago.
I am developing a WPF program which performs long running tasks in the background, using TPL's Parallel.ForEach.
I have an image control, and when I try to show an image, it seems that the image rendering happens on the thread pool; since I use the pool intensively (via TPL ForEach), the rendering is extremely slow.
Is there a way to perform the image rendering at a higher priority, or not on the thread pool?
Parallel.ForEach is designed to use as much of your CPU as it can get hold of. If you don't want it to do that, then either don't use it (just run the long-running task on a single dedicated thread), or control the amount of CPU that Parallel.ForEach uses by passing in a ParallelOptions object to limit its appetite for CPU.
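A sketch of capping that appetite with ParallelOptions, leaving a core free for rendering; the items collection and DoHeavyWork are made-up placeholders for the asker's actual workload:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelCapSketch
{
    static void Main()
    {
        var items = Enumerable.Range(0, 100);

        var options = new ParallelOptions
        {
            // Leave at least one core free for the UI/render thread.
            MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
        };

        Parallel.ForEach(items, options, item =>
        {
            DoHeavyWork(item); // placeholder for the long-running body
        });
    }

    static void DoHeavyWork(int item)
    {
        double x = 0;
        for (int i = 0; i < 100_000; i++)
            x += Math.Sqrt(i + item);
    }
}
```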
You also say you're running 'long running' tasks using Parallel.ForEach. Bear in mind that it's designed for getting CPU-bound tasks finished quickly, not for backgrounding things that are long-running because they're waiting for I/O (for example). From the symptoms, though, it does sound like you're using it for the right thing; it's just using more CPU than you want.
As for trying to avoid the thread pool, I think you're barking up the wrong tree. The pool is just a collection of threads that already exist, designed to avoid the overhead of creating and destroying threads all the time; it's not a priority mechanism. What your two activities are fighting over is access to the CPU, not access to the thread pool.
