I am wondering a bit at the moment. I was just reading about threads and landed on Task vs Thread differences [duplicate] here on Stack Overflow, from Jacek (sorry, I can't create a link because I can only make 2 with reputation < 10),
and the first comment from MoonKnight led me to: albahari.com/threading
I have taken the code and changed it a little to make it more readable, so it is clearer what is happening. Here is my changed code:
static void Main()
{
Thread t = new Thread(WriteY); // Kick off a new thread
t.Start(); // running WriteY()
// Simultaneously, do something on the main thread.
for (int i = 0; i < 10; i++) { System.Threading.Thread.Sleep(1); Console.Write(i); };
Console.ReadLine();
}
static void WriteY()
{
for (int y = 0; y < 10; y++) { System.Threading.Thread.Sleep(1); Console.Write(y); };
Console.ReadLine();
}
What I expected to happen (and what happens most of the time) was this:
Good thread:
But here is the thing I am wondering about (it's absolutely random, and I promise it's the same code):
Miracle thread:
My questions:
1. How can it happen that the numbers differ? The threads should always run at the same time, shouldn't they?
2. All of this gets crazier the lower the sleep time gets; if you remove it completely, it feels absolutely random.
When you execute the first loop on the main thread and start WriteY() on a separate thread, there is absolutely no way to predict the sequence in which events in one thread will happen relative to events in the other thread.
I've written a few tests to demonstrate this. Here's one. And here's another.
What characterizes both of these examples is that very often they will run in the "expected" sequence, but once in a while they won't.
That tells us a few things about multithreaded operations:
Concurrent or parallel execution is beneficial when we want to distribute work across threads, but not when events must occur in a predictable sequence.
It requires extra caution, because if we do it wrong it might seem to work anyway, and then once in a while it won't. Those occasions when it doesn't work will be extremely difficult to debug, one reason being that you won't be able to reproduce the behavior when you want to.
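If the sequence actually matters, the usual fix is to remove the concurrency for that part, for example by joining the thread before the main loop runs. Here is a minimal sketch based on the question's code (I dropped the sleeps and ReadLine calls to keep it short):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread t = new Thread(WriteY);
        t.Start();
        t.Join();   // block until WriteY has finished -> deterministic order
        for (int i = 0; i < 10; i++) Console.Write(i);
    }

    static void WriteY()
    {
        for (int y = 0; y < 10; y++) Console.Write(y);
    }
}
```

Of course, joining immediately after starting gives up the parallelism entirely; the point is that ordering guarantees and concurrency pull in opposite directions.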
Related
I tried running some code that uses the ThreadStatic attribute, and for some reason different results are being displayed.
[ThreadStatic]
public static int _field;
public static void Main(string[] args)
{
new Thread(() =>
{
for(int x = 0; x < 10; x++)
{
_field++;
Console.WriteLine("Thread A: {0}", _field);
}
}).Start();
new Thread(() =>
{
for(int x = 0; x < 10; x++)
{
_field++;
Console.WriteLine("Thread B: {0}", _field);
}
}).Start();
Console.ReadKey();
}
Result 1:
Result 2:
Can anyone explain to me why? Thank you!
When you execute code on multiple threads the order of execution becomes somewhat unpredictable. You might get the exact same result over and over, but then it will do something different.
That inconsistent behavior is fine as long as you don't depend on consistent behavior. Think of it like two people painting a building - one starts on the back and one starts on the front because it's faster and because it's not critical that one finish before the other.
This DotNetFiddle demonstrates. It puts a bunch of consecutive numbers in a ConcurrentQueue, and then uses multiple threads to move them first-in-first-out into another queue. You might expect that they would always arrive in the second queue in the same order, and more often than not they do. But once in a while they don't.
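A rough sketch of that kind of demo (the structure here is mine, not the fiddle's exact code): several threads drain one ConcurrentQueue into another, and while every item always arrives, the arrival order is not guaranteed.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class Demo
{
    static void Main()
    {
        var source = new ConcurrentQueue<int>();
        var target = new ConcurrentQueue<int>();
        for (int i = 0; i < 1000; i++) source.Enqueue(i);

        // Several threads drain source into target concurrently.
        var threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                while (source.TryDequeue(out int n)) target.Enqueue(n);
            });
            threads[t].Start();
        }
        foreach (var t in threads) t.Join();

        // target holds all 1000 items, but not necessarily in 0..999 order.
        Console.WriteLine(target.Count);
    }
}
```

Run it a few times and compare the contents of `target`: most runs look ordered, and occasionally they don't, which is exactly the trap described above.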
It's very important to be aware of this behavior. Otherwise we can write multithreaded code, we test it and it seems to work one way, then later we get unpredictable results that happen once in a while but we can't figure out why and we can't repeat it when debugging. If that happens then the problem can be very difficult to find. But that's only a problem if we depend on behavior that isn't predictable.
Because you have no control over when a thread gets a CPU timeslice, the order will be different on each run.
Just for fun I created a mandelbrot program. I'm now trying to make it multithreaded by splitting the image into two left/right parts to be handled by two threads. However, the application crashes as soon as it's launched (although based on my console output, the first thread continues after the crash but the second thread never starts) and I'm unsure what to do.
The crash is at the line this.output[x][y] = this.calculate_pixel_rgb(x, y); and says I'm missing an object reference, which I don't understand because it works for the first thread.
public void compute_all()
{
this.setcolors(); this.zoom_multiplier = (4 / this.zoom / this.resolution);
Thread thread1 = new Thread(new ParameterizedThreadStart(computation_thread)); thread1.Start(new double[] { 0, 0.5 });
Thread thread2 = new Thread(new ParameterizedThreadStart(computation_thread)); thread2.Start(new double[] { 0.5, 1 });
thread1.Join(); thread2.Join();
}
public void computation_thread(object threadinfo)
{
double[] parameters = (double[])threadinfo;
this.output = new int[this.resolution][][];
for (int x = (int)(this.resolution * parameters[0]); x < (int)(this.resolution * parameters[1]); x++)
{
this.output[x] = new int[this.resolution][];
for (int y = 0; y < this.resolution; y++)
{
this.output[x][y] = this.calculate_pixel_rgb(x, y);
this.pixels_completed++;
}
}
}
Your two threads are manipulating the same output buffer, overwriting each other. Don't share memory between threads if you can possibly avoid it; all it causes is grief.
If the point of this exercise is to learn how to manipulate raw threads then take a step back and study why sharing memory across two threads is a bad idea.
If the point of this exercise is to parallelize the computation of a fractal then forget about manipulating raw threads. You will do much better to learn how to use the Task Parallel Library.
Threads are logically workers, and who wants to manage a bunch of workers? The TPL encourages you to see parallelization as the manipulation of tasks that can be done in parallel. Let the TPL take care of figuring out how many workers to assign to your tasks.
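As a sketch of what that looks like for the fractal: let Parallel.For partition the rows across workers instead of splitting the image by hand. The pixel function below is a trivial stand-in I made up, not the question's actual Mandelbrot computation.

```csharp
using System;
using System.Threading.Tasks;

class Fractal
{
    const int Resolution = 256;
    static readonly int[][] Output = new int[Resolution][];

    // Stand-in for the real per-pixel Mandelbrot calculation.
    static int CalculatePixel(int x, int y) => (x * y) % 256;

    static void Main()
    {
        // Allocate the whole buffer up front, on one thread.
        for (int x = 0; x < Resolution; x++) Output[x] = new int[Resolution];

        // The TPL decides how many workers to use and how to partition the rows.
        Parallel.For(0, Resolution, x =>
        {
            for (int y = 0; y < Resolution; y++)
                Output[x][y] = CalculatePixel(x, y);
        });

        Console.WriteLine(Output[10][10]); // 100
    }
}
```

Each row is written by exactly one worker, so no locking is needed; the shared buffer is partitioned, not contended.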
The problem in your code is initializing this.output multiple times (once for each thread).
Both of your threads use the same this, and when the first thread has initialized this.output's columns, the second thread re-initializes it, so the first thread loses its allocated memory.
So this.output[x] no longer exists for the first thread (hence the missing object reference exception).
This also explains why your code runs flawlessly with just one thread.
The easy solution is to initialize the whole array at the very beginning.
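Concretely, that fix looks something like this. It keeps the question's method names but substitutes a trivial stand-in for the real pixel computation; the key change is that the buffer is allocated once, on the main thread, before either worker starts.

```csharp
using System;
using System.Threading;

class Fractal
{
    readonly int resolution = 64;
    int[][][] output;

    // Stand-in for the real calculate_pixel_rgb.
    int[] calculate_pixel_rgb(int x, int y) => new[] { x % 256, y % 256, 0 };

    public void compute_all()
    {
        // Allocate the whole buffer once, before any thread starts.
        output = new int[resolution][][];
        for (int x = 0; x < resolution; x++)
            output[x] = new int[resolution][];

        Thread thread1 = new Thread(computation_thread);
        thread1.Start(new double[] { 0, 0.5 });
        Thread thread2 = new Thread(computation_thread);
        thread2.Start(new double[] { 0.5, 1 });
        thread1.Join();
        thread2.Join();
    }

    void computation_thread(object threadinfo)
    {
        double[] parameters = (double[])threadinfo;
        // Each thread only writes its own half of the rows; no allocation here.
        for (int x = (int)(resolution * parameters[0]); x < (int)(resolution * parameters[1]); x++)
            for (int y = 0; y < resolution; y++)
                output[x][y] = calculate_pixel_rgb(x, y);
    }

    static void Main()
    {
        var f = new Fractal();
        f.compute_all();
        Console.WriteLine(f.output[63][1][0]); // row 63 was filled by thread2
    }
}
```

Because the two threads now write disjoint rows of a buffer that already exists, there is no longer any shared allocation to stomp on.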
Your logic seems fishy.
But if you believe it is correct and resetting this.output in each thread really is necessary, then do the following:
Make a temporary array
Change [x][y] to [x,y]
Change [][] to [,]
Apply a lock before writing to this.output
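A minimal illustration of those steps, with names of my own invention: each thread computes into a private temporary 2-D array, and only the final copy into the shared buffer is serialized with a lock.

```csharp
using System;
using System.Threading;

class LockedWrite
{
    const int Size = 4;
    static readonly int[,] Output = new int[Size, Size];
    static readonly object OutputLock = new object();

    static void Fill(int start, int end)
    {
        // Compute into a private temporary first (no sharing, no lock needed).
        int[,] temp = new int[Size, Size];
        for (int x = start; x < end; x++)
            for (int y = 0; y < Size; y++)
                temp[x, y] = x * Size + y;

        // Only the copy into shared state is serialized.
        lock (OutputLock)
        {
            for (int x = start; x < end; x++)
                for (int y = 0; y < Size; y++)
                    Output[x, y] = temp[x, y];
        }
    }

    static void Main()
    {
        var t1 = new Thread(() => Fill(0, Size / 2));
        var t2 = new Thread(() => Fill(Size / 2, Size));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(Output[3, 3]); // 3 * 4 + 3 = 15
    }
}
```

The lock here is belt-and-braces, since the two threads happen to write disjoint rows; it becomes essential the moment the regions can overlap.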
So I have been playing around with threads for the last couple of months, and while my output is as expected, I have a feeling I'm not doing this the best way. I can't seem to get a straight answer from anyone I work with on what is best practice, so I thought I would ask you guys.
Question: I'm going to try to make this simple, so bear with me. Say I have a form that has a start and a stop button. The start button fires an event that starts a thread. Inside this thread's DoWork it is going to call 3 methods. Method1() prints "A\n" to the console 10 times with a pause of 10 seconds in between. Method2() and Method3() are exactly the same, just with a different letter and different pause times between the Console.WriteLine calls. Now when you press the stop button you want the response to be immediate; I don't want to have to wait for the methods to complete. How do I go about this?
The way I have been doing this is passing my BackgroundWorker to each method and checking worker.CancellationPending, like so:
public void Method1(BackgroundWorker worker)
{
for(int i = 0; i < 10 && !worker.CancellationPending; ++i)
{
Console.WriteLine("A");
for(int j = 0; j < 100 && !worker.CancellationPending; ++j)
{
Thread.Sleep(100);
}
}
}
Like I said, this gives me the desired result; however, imagine that Method1 becomes a lot more complex. Let's say it is using a DLL to write, one that has a key down and a key up. If I just abort the thread I could possibly leave myself in an undesired state as well. I find myself littering my code with !worker.CancellationPending; practically every code block checks CancellationPending. I look at a lot of examples online and I rarely see people passing a worker around like I am. What is best practice here?
Consider using iterators (yield return) to break up the steps.
public void Method1(BackgroundWorker worker)
{
    foreach (var discard in Method1Steps())
    {
        if (worker.CancellationPending)
            return;
    }
}
private IEnumerable<object> Method1Steps()
{
    for (int i = 0; i < 10; ++i)
    {
        yield return null;
        Console.WriteLine("A");
        for (int j = 0; j < 100; ++j)
        {
            Thread.Sleep(100);
            yield return null;
        }
    }
}
This solution may be harder to implement if you have a bunch of try/catch/finally blocks or a bunch of method calls that also need to know about cancellation.
Yes, you are doing it correctly. It may seem awkward at first, but it really is the best option. It is definitely far better than aborting a thread. Loop iterations, as you have discovered, are ideal candidates for checking CancellationPending. This is because a loop iteration often isolates a logical unit of work and thus easily delineates a safe point. Safe points are markers in the execution of a thread where termination can be easily accomplished without corrupting any data.
The trick is to poll CancellationPending at safe points frequently enough to provide timely feedback to the caller that cancellation completed successfully, but not so frequently as to negatively affect performance or "litter the code".
In your specific case, the inner loop is the best place to poll CancellationPending. I would omit the check on the outer loop, because the inner loop is where most of the time is spent. The check on the outer loop would be pointless since the outer loop does very little actual work except to get the inner loop going.
Now, on the GUI side you might want to grey out the stop button to let the user know that the cancellation request was accepted. You could display a message like "cancellation pending" or similar to make it clear. Once you get the feedback that cancellation is complete, you can remove the message.
Well, if you are in the situation where you have to abort a CPU-intensive thread, then you are somewhat stuck with testing an 'Abort' boolean (or cancellation token) in one loop or another (maybe not the innermost one; it depends on how long that takes). AFAIK you can just 'return' from the inner loop, thus exiting the method, so there is no need to check at every level. To minimize the overhead of this, try to make it a local-ish boolean, i.e. try not to dereference it through half a dozen classes every time.
Maybe inherit classes from 'Stoppable', which has an 'Abort' method and a 'Stop' boolean? Your example thread above spends most of its time sleeping, so you get 50 ms average latency before you get to check anything. In such a case, you could wait on some event with a timeout instead of sleeping. Override 'Abort' to set the event as well as calling the inherited Abort, thereby terminating the wait early. You could also set the event in the cancellation token delegate/callback, should you implement the new functionality described by Dan.
There are actually very few Windows APIs that are not easily 'unstickable' or that don't have asynchronous 'Ex' versions, so it's, err.. 'nearly' always possible to cancel one way or another, e.g. closing a socket to force a socket read to throw, or writing a temporary file to force a folder change notification to return.
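A sketch of the wait-on-an-event-instead-of-sleeping idea (the 'Stoppable' shape and all names here are illustrative, not a standard API): the worker waits on a ManualResetEvent with a timeout, so an abort wakes it immediately instead of after the full sleep.

```csharp
using System;
using System.Threading;

class Stoppable
{
    private readonly ManualResetEvent abortEvent = new ManualResetEvent(false);

    public void Abort() => abortEvent.Set();

    public void Work()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine("A");
            // Waits up to 100 ms, but returns true immediately if Abort() is called.
            if (abortEvent.WaitOne(100))
                return; // abort requested -> exit promptly
        }
    }

    static void Main()
    {
        var s = new Stoppable();
        var worker = new Thread(s.Work);
        worker.Start();
        Thread.Sleep(250);
        s.Abort();      // worker wakes from its current wait and returns at once
        worker.Join();
    }
}
```

The worst-case cancellation latency is now one timeout period at most, and in practice near zero, because WaitOne unblocks the moment the event is set.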
Rgds,
Martin
I'm studying C# right now and currently learning threading.
Here is a simple example of adding 1 to a variable multiple times from different threads.
The book suggested I can use Interlocked.Increment(ref number) to replace number += 1 within the AddOne method, so the value will be locked until it's updated within the thread, and the output will be 1000, 2000, ..., 10000 as expected. But my output is still 999, 1999, 2999, ..., 9999.
Only after I uncomment the Thread.Sleep(1000) line is the output correct, and even then it works without Interlocked being used.
Can anyone explain what's happening here?
static void Main(string[] args)
{
myNum n = new myNum();
for (int i = 0;i<10; Interlocked.Increment(ref i))
{
for(int a =1;a<=1000; Interlocked.Increment(ref a))
{
Thread t = new Thread( new ThreadStart( n.AddOne));
t.Start();
}
//Thread.Sleep(1000);
Console.WriteLine(n.number);
}
}
class myNum
{
public int number = 0;
public void AddOne()
{
//number += 1;
Interlocked.Increment(ref number);
}
}
You are printing out the value before all of the threads have finished executing. You need to join all of the threads before printing.
for(int a = 0; a < 1000; a++)
{
t[a].Join();
}
You'll need to store the threads in an array or list. Also, you don't need the interlocked instruction in any of the for loops. They all run in only one thread (the main thread). Only the code in AddOne runs in multiple threads and hence needs to by synchronized.
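Putting that together with the question's class, a corrected sketch might look like this (one batch of 1000 threads, joined before the result is read):

```csharp
using System;
using System.Threading;

class MyNum
{
    public int Number;
    public void AddOne() => Interlocked.Increment(ref Number);
}

class Program
{
    static void Main()
    {
        var n = new MyNum();
        var threads = new Thread[1000];
        for (int a = 0; a < threads.Length; a++)
        {
            threads[a] = new Thread(n.AddOne);
            threads[a].Start();
        }
        // Wait for every thread to finish before reading the result.
        foreach (var t in threads) t.Join();
        Console.WriteLine(n.Number); // always 1000
    }
}
```

With the joins in place the printed value is deterministic; without them you are racing the print against threads that haven't run yet, which is exactly the 999/1999/... symptom in the question.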
It's a bit strange to me what you are trying to achieve with this code. You are using Interlocked.Increment everywhere without any explicit need for it.
Interlocked.Increment is required for values which can be accessed from different threads. In your code that is only number, so you don't need it for i and a; just use i++ and a++ as usual.
The actual problem is that you just don't wait for all the threads you started to complete their job. Take a look at the Thread.Join() method. You have to wait until all of the threads you started have completed their work.
In this simple test, Thread.Sleep(1000); performs a similar wait, but it's not correct to assume that all threads complete within 1000 ms, so just use Thread.Join() instead.
If you modify your AddOne() method so that it executes longer (e.g. add Thread.Sleep(1000) to it), you'll notice that Thread.Sleep(1000); doesn't help any more.
I suggest reading more about ThreadPool vs Threads. Also take a look at Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4.
I am trying to use ThreadPool.RegisterWaitForSingleObject to add a timer to a set of threads. I create 9 threads and am trying to give each of them an equal chance of operation as at the moment there seems to be a little starvation going on if I just add them to the thread pool. I am also trying to implement a manual reset event as I want all 9 threads to exit before continuing.
What is the best way to ensure that each thread in the threadpool gets an equal chance at running as the function that I am calling has a loop and it seems that each thread (or whichever one runs first) gets stuck in it and the others don't get a chance to run.
resetEvents = new ManualResetEvent[table_seats];
//Spawn 9 threads
for (int i = 0; i < table_seats; i++)
{
resetEvents[i] = new ManualResetEvent(false);
//AutoResetEvent ev = new AutoResetEvent(false);
RegisteredWaitHandle handle = ThreadPool.RegisterWaitForSingleObject(autoEvent, ObserveSeat, (object)i, 100, false);
}
//wait for threads to exit
WaitHandle.WaitAll(resetEvents);
However, it doesn't matter whether I use resetEvents[] or ev; neither seems to work properly. Am I able to implement this, or am I (probably) misunderstanding how they should work?
Thanks, R.
I would not use the RegisterWaitForSingleObject for this purpose. The patterns I am going to describe here require the Reactive Extensions download since you are using .NET v3.5.
First, to wait for all work items from the ThreadPool to complete use the CountdownEvent class. This is a lot more elegant and scalable than using multiple ManualResetEvent instances. Plus, the WaitHandle.WaitAll method is limited to 64 handles.
var finished = new CountdownEvent(1);
for (int i = 0; i < table_seats; i++)
{
finished.AddCount();
ThreadPool.QueueUserWorkItem(
(state) =>
{
try
{
ObserveSeat(state);
}
finally
{
finished.Signal();
}
}, i);
}
finished.Signal();
finished.Wait();
Second, you could try calling Thread.Sleep(0) after several iterations of the loop to force a context switch so that the current thread yields to another. If you want a considerably more complex coordination strategy then use the Barrier class. Add another parameter to your ObserveSeat function which accepts this synchronization mechanism. You could supply it by capturing it in the lambda expression in the code above.
public void ObserveSeat(object state, Barrier barrier)
{
barrier.AddParticipant();
try
{
for (int i = 0; i < NUM_ITERATIONS; i++)
{
if (i % AMOUNT == 0)
{
// Let the other threads know we are done with this phase and wait
// for them to catch up.
barrier.SignalAndWait();
}
// Perform your work here.
}
}
finally
{
barrier.RemoveParticipant();
}
}
Note that although this approach would certainly prevent the starvation issue it might limit the throughput of the threads. Calling SignalAndWait too much might cause a lot of unnecessary context switching, but calling it too little might cause a lot of unnecessary waiting. You would probably have to tune AMOUNT to get the optimal balance of throughput and starvation. I suspect there might be a simple way to do the tuning dynamically.