I create three threads in Main, each with a different method:
Thread t1 = new Thread(FirstThread);
Thread t2 = new Thread(SecondThread);
Thread t3 = new Thread(ThirdThread);
I need to measure the time taken by each thread and find the smallest of them.
The time is measured with Stopwatch; I didn't find another way (in C/C++ I know the GetThreadTimes function).
static void FirstThread()
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    try
    {
        Enumerable.Range(1, 10000).Select(x => x).Sum();
    }
    finally
    {
        stopwatch.Stop();
        Console.WriteLine(stopwatch.Elapsed);
    }
}
Then, according to the assignment, I have to change a thread's priority dynamically and run the method again, for example:
t2.Priority = ThreadPriority.Highest;
I don't understand how I could start the threads again with the changed priority, because that seems IMPOSSIBLE: a thread can only be started once.
The program should output the time taken by all threads at normal and at lowered priority.
P.S. I think the Stopwatch value has to be compared as a double; I don't see any other way:
double timeInMillisecondsPerN = stopwatch.Elapsed.TotalMilliseconds;
The main question is how to observe the behaviour of threads with different priorities while the program is running.
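Restating the idea as I understand it: since a Thread can only be started once, running the same work again with a different priority would mean constructing a new Thread each time. A rough sketch of that (the helper name is just illustrative):

// Hypothetical helper: run the work on a brand-new thread at the given priority
// and print how long the whole run took. A Thread object cannot be restarted.
static void RunWithPriority(ThreadStart work, ThreadPriority priority)
{
    Thread t = new Thread(work);
    t.Priority = priority;              // must be set before Start
    Stopwatch sw = Stopwatch.StartNew();
    t.Start();
    t.Join();                           // wait so the measurement covers the whole run
    sw.Stop();
    Console.WriteLine("{0}: {1}", priority, sw.Elapsed);
}

// usage
RunWithPriority(FirstThread, ThreadPriority.Normal);
RunWithPriority(FirstThread, ThreadPriority.Lowest);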
Thread priority will not necessarily make a task faster or accomplish 'more work'.
If you set all 3 threads to Highest priority, they will all still compete for CPU cycles at the same rate, probably the same speed as if they were all BelowNormal.
All it means is that when your application reaches the point of saturating the CPU and the threads are competing for cycles, the scheduler knows which thread to satisfy first.
This feature loses its benefit if you set all threads to the same high value. It only has value if you reserve the higher priority settings for threads that genuinely need to be satisfied before all others within your application.
E.g. important computational tasks with low IO usage (encryption, cryptocurrency, hashing, etc.).
If your application never reaches the point of contending for 100% of a CPU core, the priority feature will never kick in.
The Microsoft website on Thread.Priority demonstrates the effect of priority well.
https://msdn.microsoft.com/en-us/library/system.threading.thread.priority(v=vs.110).aspx
using System;
using System.Threading;
using Timers = System.Timers;

class Test
{
    static void Main()
    {
        PriorityTest priorityTest = new PriorityTest();

        Thread thread1 = new Thread(priorityTest.ThreadMethod);
        thread1.Name = "ThreadOne";

        Thread thread2 = new Thread(priorityTest.ThreadMethod);
        thread2.Name = "ThreadTwo";
        thread2.Priority = ThreadPriority.BelowNormal;

        Thread thread3 = new Thread(priorityTest.ThreadMethod);
        thread3.Name = "ThreadThree";
        thread3.Priority = ThreadPriority.AboveNormal;

        thread1.Start();
        thread2.Start();
        thread3.Start();

        // Allow counting for 10 seconds.
        Thread.Sleep(10000);
        priorityTest.LoopSwitch = false;
    }
}

class PriorityTest
{
    static bool loopSwitch;
    [ThreadStatic] static long threadCount = 0;

    public PriorityTest()
    {
        loopSwitch = true;
    }

    public bool LoopSwitch
    {
        set { loopSwitch = value; }
    }

    public void ThreadMethod()
    {
        while (loopSwitch)
        {
            threadCount++;
        }
        Console.WriteLine("{0,-11} with {1,11} priority " +
                          "has a count = {2,13}", Thread.CurrentThread.Name,
                          Thread.CurrentThread.Priority.ToString(),
                          threadCount.ToString("N0"));
    }
}
// The example displays output like the following:
// ThreadOne with Normal priority has a count = 755,897,581
// ThreadThree with AboveNormal priority has a count = 778,099,094
// ThreadTwo with BelowNormal priority has a count = 7,840,984
I am trying to understand some code (for performance reasons) that is processing tasks from a queue. The code is C# on .NET Framework 4.8 (and I didn't write this stuff).
I have this code creating a timer that, from what I can tell, should use a new thread every 10 seconds:
_myTimer = new Timer(new TimerCallback(OnTimerGo), null, 0, 10000 );
Inside OnTimerGo it calls DoTask(); inside DoTask() it grabs a task off a queue and then does this:
System.Threading.Tasks.Task.Factory.StartNew(ProcessTask, task).ContinueWith(c => DoTask());
My reading of this is that a new thread should start running OnTimerGo every 10 seconds, and that thread should in parallel run ProcessTask on tasks as fast as it can get them from the queue.
I inserted some code to call ThreadPool.GetMaxThreads and ThreadPool.GetAvailableThreads to figure out how many threads were in use. Then I queued up 10,000 things for it to do and let it loose.
I never see more than 4 threads in use at a time. This is running on a c4.4xlarge EC2 instance, so 16 vCPUs and 30 GB of memory. GetMaxThreads and GetAvailableThreads both return over 2k, so I would expect more threads. By looking at the logging I can see that a total of 50ish different threads (by thread id) end up doing the work over the course of 20 minutes. Since the timer fires every 10 seconds, I would expect 100 threads to be doing the work (or for it to finish sooner).
Looking at the code, the only time a running thread should stop is if it asks for a task from the queue and doesn't get one. Some other logging shows that there are never more than 2 tasks running on a thread, probably because the work is pretty fast. So the threads shouldn't be exiting, and I can even see from the logs that many of them end up doing as many as 500 tasks over the 20 minutes.
So... what am I missing here? Are ThreadPool.GetMaxThreads and ThreadPool.GetAvailableThreads not accurate if run from inside a thread? Is something shutting down some of the threads while letting others keep going?
EDIT: adding more code
public static void StartScheduler()
{
    lock (TimerLock)
    {
        if (_timerShutdown == false)
        {
            _myTimer = new Timer(new TimerCallback(OnTimerGo), null, 0, 10);
            const int numberOfSecondsPerMinute = 60;
            const int margin = 1;
            var pollEventsPerMinute = (numberOfSecondsPerMinute / SystemPreferences.TaskPollingIntervalSeconds);
            _numberOfTimerCallsForHeartbeat = pollEventsPerMinute - margin;
        }
    }
}

private static void OnTimerGo(object state)
{
    try
    {
        _lastTimer = DateTime.UtcNow;
        var currentTickCount = Interlocked.Increment(ref _timerCallCount);
        if (currentTickCount == _numberOfTimerCallsForHeartbeat)
        {
            Interlocked.Exchange(ref _timerCallCount, 0);
            MonitoringTools.SendHeartbeatMetric(Heartbeat);
        }
        CheckForTasks();
    }
    catch (Exception e)
    {
        Log.Warn("Scheduler: OnTimerGo exception", e);
    }
}

public static void CheckForTasks()
{
    try
    {
        if (DoTask())
            _lastStart = DateTime.UtcNow;
        _lastStartOrCheck = DateTime.UtcNow;
    }
    catch (Exception e)
    {
        Log.Error("Unexpected exception checking for tasks", e);
    }
}

private static bool DoTask()
{
    Func<DataContext, bool> a = db =>
    {
        var mtid = Thread.CurrentThread.ManagedThreadId;
        int totalThreads = Process.GetCurrentProcess().Threads.Count;
        int maxWorkerThreads;
        int maxPortThreads;
        ThreadPool.GetMaxThreads(out maxWorkerThreads, out maxPortThreads);
        int AvailableWorkerThreads;
        int AvailablePortThreads;
        ThreadPool.GetAvailableThreads(out AvailableWorkerThreads, out AvailablePortThreads);
        int usedWorkerThreads = maxWorkerThreads - AvailableWorkerThreads;
        string usedThreadMessage = $"Thread {mtid}: Threads in Use count: {usedWorkerThreads}";
        Log.Info(usedThreadMessage);
        var taskTypeAndTasks = GetTaskListTypeAndTasks();
        var task = GetNextTask(db, taskTypeAndTasks.Key, taskTypeAndTasks.Value);
        if (_timerShutdown)
        {
            Log.Debug("Task processing stopped.");
            return false;
        }
        if (task == null)
        {
            Log.DebugFormat("DoTask: Idle in thread {0} ({1} tasks running)", mtid, _processingTaskLock);
            return false;
        }
        Log.DebugFormat("DoTask: starting task {2}:{0} on thread {1}", task.Id, mtid, task.Class);
        System.Threading.Tasks.Task.Factory.StartNew(ProcessTask, task).ContinueWith(c => DoTask());
        Log.DebugFormat("DoTask: done ({0})", mtid);
        return true;
    };
    return DbExtensions.WithDbWrite(ctx => a(ctx));
}
The Task.Factory.StartNew by default doesn't create a new thread. It borrows a thread from the ThreadPool instead.
The ThreadPool is intended as a small pool of reusable threads, to help amortize the cost of running frequent and lightweight operations like callbacks, continuations, event handlers etc. Depleting the ThreadPool of available workers by scheduling too much work on it results in a situation called saturation or starvation. And as you've already figured out, it's not a happy situation to be in.
You can prevent the saturation of the ThreadPool by running your long-running work on dedicated threads instead of ThreadPool threads. This can be done by passing TaskCreationOptions.LongRunning as an argument to Task.Factory.StartNew:
_ = Task.Factory.StartNew(ProcessTask, task, CancellationToken.None,
TaskCreationOptions.LongRunning,
TaskScheduler.Default).ContinueWith(t => DoTask(), CancellationToken.None,
TaskContinuationOptions.ExecuteSynchronously,
TaskScheduler.Default);
The above code schedules the ProcessTask(task) on a new thread, and after the invocation is completed either successfully or unsuccessfully, the DoTask will be invoked on the same thread. Finally the thread will be terminated. The discard _ signifies that the continuation Task (the task returned by the ContinueWith) is fire-and-forget. Which, to put it mildly, is architecturally suspicious. 😃
In case you are wondering why I pass the TaskScheduler.Default explicitly as argument to StartNew and ContinueWith, check out this link.
My reading of this is that a new thread should start running OnTimerGo every 10 seconds, and that thread should in parallel run ProcessTask on tasks as fast as it can get them from the queue.
Well, that is definitely not what's happening. There's a lot of uncertainty about your code, but it's clear that another DoTask only starts AFTER ProcessTask completes, and that is not parallel execution. Your line of code is this:
System.Threading.Tasks.Task.Factory.StartNew(ProcessTask, task).ContinueWith(c => DoTask());
I suggest you start another DoTask right there instead, like this:
System.Threading.Tasks.Task.Factory.StartNew(ProcessTask, task);
DoTask();
Make sure your code is ready for parallel execution, though.
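For instance, the _lastStart / _lastStartOrCheck updates in CheckForTasks would then be written from several threads at once. A minimal sketch of one way to guard them (the _stateLock object is an assumption, not part of the original code):

private static readonly object _stateLock = new object();   // hypothetical guard

public static void CheckForTasks()
{
    try
    {
        bool started = DoTask();
        lock (_stateLock)            // writes may now come from several threads
        {
            if (started)
                _lastStart = DateTime.UtcNow;
            _lastStartOrCheck = DateTime.UtcNow;
        }
    }
    catch (Exception e)
    {
        Log.Error("Unexpected exception checking for tasks", e);
    }
}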
I am teaching myself C# from my usual C++ programming and now I'm doing threads.
The following simple code compiles fine and should output beeps on a loop via threads for 30 seconds.
using System;
using System.Runtime.InteropServices;
using System.Threading;

class BeepSample
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool Beep(uint dwFreq, uint dwDuration);

    static void Main()
    {
        Console.WriteLine("Testing PC speaker...");
        for (uint i = 100; i <= 20000; i++)
        {
            var BeepThread = new Thread(delegate() { Beep(i, 30000); });
        }
        Console.WriteLine("Testing complete.");
        Console.ReadLine();
    }
}
Only problem is the threads don't seem to work.
I know I am missing something basic.
Any ideas?
You forgot to start the thread (see Thread.Start on MSDN):
for (uint i = 100; i <= 20000; i++)
{
    var BeepThread = new Thread(delegate() { Beep(i, 30000); });
    BeepThread.Start();
}
However, that looks suspicious. Why would you need roughly 19,900 threads? Probably you want a single thread with a loop inside that pauses for short periods and outputs a different frequency through the beeper each time, as sketched below.
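A rough sketch of that single-thread idea (the frequency step, beep duration, and pause are just placeholder values):

// One thread sweeps the frequencies instead of ~20,000 threads.
var sweep = new Thread(() =>
{
    for (uint freq = 100; freq <= 20000; freq += 100)   // placeholder step
    {
        Beep(freq, 200);      // short beep per frequency (placeholder duration)
        Thread.Sleep(50);     // brief pause between tones
    }
});
sweep.Start();
sweep.Join();                 // wait for the sweep to finish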
Only problem is the threads don't seem to work.
That is because you never started the threads, so they have nothing to do.
The code has several other issues:
Closure issue: the loop variable is captured by the delegate, so it needs a per-iteration copy:
for (uint i = 100; i <= 20000; i++)
{
    uint icopy = i;
    var BeepThread = new Thread(delegate() { Beep(icopy, 30000); });
    BeepThread.Start();
}
The Thread class creates foreground threads, and each logical processor core can process only one thread at a time. Each Thread is also a very costly resource in computation and memory, as it needs a Thread Environment Block, kernel stack memory and user stack memory. Even if the current code runs, it will overwhelm your system and you will most likely have to kill the process to get out of it.
Console.ReadLine(); only blocks the main/UI thread; the other threads, being foreground threads, carry on even if the main/UI thread exits. The ideal way to block is to call Join on each Thread object, which makes the main thread wait until that thread completes (see the sketch below).
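A minimal sketch of the Join idea (collecting the threads in a list first is an assumption about how you would structure it, and it requires using System.Collections.Generic;). It still spins up far too many threads; it only demonstrates the Join pattern:

var threads = new List<Thread>();
for (uint i = 100; i <= 20000; i++)
{
    uint icopy = i;
    var t = new Thread(delegate() { Beep(icopy, 30000); });
    t.Start();
    threads.Add(t);
}
foreach (var t in threads)
{
    t.Join();   // the main thread waits here until each beep thread has finished
}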
One of the preferred ways to rewrite the same code is to use the Task Parallel Library:
Parallel.For(100, 20000,
    new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    i =>
    {
        uint icopy = (uint)i;
        Beep(icopy, 30000);
    });
Benefits:
The code doesn't create thousands of threads and overwhelm the system.
It works on the thread pool (background threads); only the required number of threads is used, and the maximum never exceeds the processor count of the system (it will usually be far lower, since threads are reused and there is no long-running computation here).
It automatically blocks the calling (main/UI) thread until the loop completes.
Ok, thanks guys. Had gone for lunch.
I implemented...
for (uint i = 500; i <= 550; i++)
{
    uint icopy = i;
    var BeepThread = new Thread(delegate() { Beep(icopy, 30000); });
    BeepThread.Start();
}
Which worked great.
As predicted, the threads did not terminate after the main thread finished, but it does what I want, which is awesome.
Bless y'all.
Consider this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace Multithreading
{
    class Program
    {
        static int result = 0;

        static void changeResult1()
        {
            result = 1;
        }

        static void changeResult2()
        {
            result = 2;
        }

        static void changeResult3()
        {
            result = 3;
        }

        static void Main(string[] args)
        {
            Thread t1 = new Thread(new ThreadStart(changeResult1));
            Thread t2 = new Thread(new ThreadStart(changeResult2));
            Thread t3 = new Thread(new ThreadStart(changeResult3));

            t1.Start();
            t2.Start();
            t3.Start();

            Console.WriteLine(result);
        }
    }
}
I'm pretty sure that this code is NOT synchronized, meaning that result should be different on each execution of the code (0, 1, 2, 3). From my point of view the result could even be 0, if the Main thread finishes before any of the threads has even started.
However, I'm repeatedly getting 2 as the result on screen.
Why?
Is this code correctly synchronized?
No.
meaning that result should be different on each execution of the code (0, 1, 2, 3).
Why should that be true? You provide no justification whatsoever for this claim.
The code is incorrectly synchronized which means that there are many possible results. You are taking a fact -- the lack of correct synchronization means I don't know what will happen -- and deducing from that a completely unsupported conclusion -- the observations will be different on every execution. The correct deduction from "I don't know what will happen" is that you don't know what will happen on any execution; in particular you do not know that the behaviour over a large set of runs will have any particular distribution.
Why?
Why what? You noted that 2 was a possible result, you are getting a possible result. You do the same thing twice and the same result happens; that's not surprising. Just because the runtime is permitted to produce many different results does not imply that it must produce many different results. Doing the same thing twice generally results in pretty similar outcomes.
It seems to me that the observed outcome is quite reasonable.
Starting threads is an expensive operation. I would expect that the start-up time of each of the threads to vastly outweigh the time taken to actually run the code assigned to each thread.
So once thread 1 is started, the main thread moves on to starting thread 2 while thread 1 executes. Thread 1 finishes long before thread 2 is ready to run.
The same with thread 2 & thread 3.
So once thread 3 is started, thread 2 has well and truly finished, and the main thread moves immediately to the Console.WriteLine(result);. This is before thread 3 has actually begun, and a long, long, long time after thread 2 has completed.
So, of course, the result is almost always 2.
To support my unscientific analysis I thought I might add some timing code to see if I could be right.
void Main()
{
    times[0] = sw.Elapsed.TotalMilliseconds;
    Thread t1 = new Thread(new ThreadStart(changeResult1));
    times[1] = sw.Elapsed.TotalMilliseconds;
    Thread t2 = new Thread(new ThreadStart(changeResult2));
    times[2] = sw.Elapsed.TotalMilliseconds;
    Thread t3 = new Thread(new ThreadStart(changeResult3));
    times[3] = sw.Elapsed.TotalMilliseconds;
    t1.Start();
    times[4] = sw.Elapsed.TotalMilliseconds;
    t2.Start();
    times[5] = sw.Elapsed.TotalMilliseconds;
    t3.Start();
    times[6] = sw.Elapsed.TotalMilliseconds;
    var r = result.ToString();
    times[7] = sw.Elapsed.TotalMilliseconds;
    Console.WriteLine(result);
    times[8] = sw.Elapsed.TotalMilliseconds;
    Thread.Sleep(1000);
    times
        .Select((x, n) => new { t = (x - times[0]).ToString("0.000"), n })
        .OrderBy(x => x.t)
        .Dump();
}

static Stopwatch sw = Stopwatch.StartNew();
static double[] times = new double[15];
static int result = 0;

static void changeResult1()
{
    times[9] = sw.Elapsed.TotalMilliseconds;
    result = 1;
    times[10] = sw.Elapsed.TotalMilliseconds;
}

static void changeResult2()
{
    times[11] = sw.Elapsed.TotalMilliseconds;
    result = 2;
    times[12] = sw.Elapsed.TotalMilliseconds;
}

static void changeResult3()
{
    times[13] = sw.Elapsed.TotalMilliseconds;
    result = 3;
    times[14] = sw.Elapsed.TotalMilliseconds;
}
You have to follow the bouncing ball a little, but one such run of this code produced this output:
This is quite clearly showing that the code that executes after t3.Start(); is the var r = result.ToString(); and not the result = 3;. More importantly result = 2; has happened a long time before thread 3 has started.
I'm trying to implement an algorithm that should run in parallel using threads or tasks. The difficulty is that I want the threads/tasks to share their best results from time to time with all other threads.
The basic idea is this:
//Accessible from each thread
IProducerConsumerCollection<MyObject> _bestObjects;

//Executed in each thread
DoSomeWork(int n)
{
    MyObject localObject;
    for (var i = 0; i < n; i++)
    {
        //Do some calculations and store results in localObject
        if ((i / n) % 0.5 == 0)
        {
            //store localObject in _bestObjects
            //wait until each thread has stored its result in _bestObjects
            //get the best result from _bestObjects and go on
        }
    }
}
How can this be achieved using System.Threading or System.Threading.Tasks and is it true that tasks should not be used for long running operations?
Update: Clarification
My problem is not having a thread-safe collection, but making the threads stop, publish their result, wait until all other threads have published their results too, and then go on again. All threads will run simultaneously.
Cutting a long story short:
What's better for long-running operations: Task, Thread, or something else?
How can threads/tasks communicate so each of them knows about the state of all the others, assuming the number of threads is set at runtime (depending on available cores)?
Best Regards
Jay
Look at the following example.
public class Worker
{
    public SharedData state;

    public void Work(SharedData someData)
    {
        this.state = someData;
        while (true) ;
    }
}

public class SharedData
{
    X myX;
    public getX() { ... }
    public setX(anX) { ... }
}

public class Sharing
{
    public static void Main()
    {
        SharedData data = new SharedData();
        Worker work1 = new Worker(data);
        Worker work2 = new Worker(data);
        Thread thread = new Thread(new ThreadStart(work1.Work));
        thread.Start();
        Thread thread2 = new Thread(new ThreadStart(work2.Work));
        thread2.Start();
    }
}
bomslang's response is not accurate. You cannot instantiate a new thread with a ThreadStart and pass in the Work method, because that method requires a parameter in the above example. ParameterizedThreadStart would be more suitable. The sample code for the Main method would look more like this:
public class Sharing
{
    public static void Main()
    {
        SharedData data = new SharedData();
        Worker work1 = new Worker(data);
        Worker work2 = new Worker(data);
        Thread thread = new Thread(new ParameterizedThreadStart(work1.Work));
        thread.Start(data);
        Thread thread2 = new Thread(new ParameterizedThreadStart(work2.Work));
        thread2.Start(data);
    }
}
Note that work1.Work is being passed into the ParameterizedThreadStart as the method for the new thread to execute, and the data required by the Work method is being passed in the call to Start. The data must be passed as an object, so the Work method will need to cast it back to the appropriate datatype as well. Lastly, there is also another approach to passing data to a new thread: anonymous methods (see the sketch below).
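A minimal sketch of that anonymous-method approach, reusing the Worker and SharedData types from above (assuming Work keeps its strongly-typed SharedData parameter):

SharedData data = new SharedData();
Worker work1 = new Worker();

// The lambda closes over 'data', so no ParameterizedThreadStart and no cast is needed.
Thread thread = new Thread(() => work1.Work(data));
thread.Start();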
I've got an application which has two main tasks: encoding and processing video.
These tasks are independent.
I would like to run each task with a configurable number of threads.
For a single task I usually use ThreadPool and SetMaxThreads. But now I've got two tasks and would like a separate, configurable thread pool for each task.
Well, ThreadPool is a static class. So how can I implement my strategy (an easily configurable number of threads for each task)?
Thanks
You will probably want your own thread pool. If you are using .NET 4.0 then it is actually fairly easy to roll your own if you use the BlockingCollection class.
public class CustomThreadPool
{
    private BlockingCollection<Action> m_WorkItems = new BlockingCollection<Action>();

    public CustomThreadPool(int numberOfThreads)
    {
        for (int i = 0; i < numberOfThreads; i++)
        {
            var thread = new Thread(
                () =>
                {
                    while (true)
                    {
                        Action action = m_WorkItems.Take();
                        action();
                    }
                });
            thread.IsBackground = true;
            thread.Start();
        }
    }

    public void QueueUserWorkItem(Action action)
    {
        m_WorkItems.Add(action);
    }
}
That is really all there is to it. You would create a CustomThreadPool for each actual pool you want to control. I posted the minimum amount of code to get a crude thread pool going. Naturally, you might want to tweak and expand this implementation to suit your specific need.
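For example, a usage sketch matching the encoding/processing scenario above (the thread counts and the EncodeVideo/ProcessVideo method names are placeholders):

// Two independent pools, each with its own configurable thread count.
var encodingPool = new CustomThreadPool(4);     // placeholder: 4 encoding threads
var processingPool = new CustomThreadPool(2);   // placeholder: 2 processing threads

encodingPool.QueueUserWorkItem(() => EncodeVideo("clip1.mp4"));     // hypothetical methods
processingPool.QueueUserWorkItem(() => ProcessVideo("clip1.mp4"));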