I am new to threading and am using it for the first time in my application. One part of my application processes multiple pieces of data; without threading it takes approximately 2 minutes, while with threading it takes just 25 seconds. However, I want a notification when all the threads have finished their work.
private int z = 0;
startfunction()
{
    z = 250;
    start1step(z);
}
private void start1step(int i)
{
    if (i < 0)
        return;
    else
    {
        Thread thread = new Thread(new ThreadStart(WorkThreadFunction));
        thread.Start();
        start1step(--i);
    }
}
public void WorkThreadFunction()
{
    try
    {
        int x = z;
        z--;
        // do some background work
        if (x == 0)
            MessageBox.Show("All Thread finished");
    }
    catch
    {
        //
    }
}
The sample code above works perfectly except for the notification part: I want a notification when all the threads have finished their background work. There is one last step left, which sums up the work finished by all these threads.
Please help
There are lots of ways to do this. I would say the two most convenient methods that stay closest to your current implementation are using a CountdownEvent or switching to the Task class for your threading. A third option is the Parallel.ForEach() method, and that might actually suit your specific scenario better.
CountdownEvent looks like this:
CountdownEvent countDown = new CountdownEvent(250);
startfunction()
{
    countDown.Reset();
    start1step(250);
    countDown.Wait();
}
private void start1step(int i)
{
    while (i-- > 0)
    {
        new Thread(WorkThreadFunction).Start(i);
    }
}
public void WorkThreadFunction(object o)
{
    int x = (int)o;
    try
    {
        // do some background work
    }
    catch
    {
        //
    }
    finally
    {
        countDown.Signal();
    }
}
Task looks like this:
startfunction()
{
Task[] tasks = new Task[250];
start1step(tasks);
Task.WaitAll(tasks);
}
private void start1step(Task[] tasks)
{
for (int i = 0; i < tasks.Length; i++)
{
int taskParam = i;
tasks[i] = Task.Run(() => WorkThreadFunction(taskParam));
}
}
public void WorkThreadFunction(int x)
{
try
{
// do some background work
}
catch
{
//
}
}
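Since your last step sums up the work finished by all the threads, the Task version extends naturally to Task<int> (a minimal sketch, assuming WorkThreadFunction is changed to return its partial result; Sum needs System.Linq):
Task<int>[] tasks = new Task<int>[250];
for (int i = 0; i < tasks.Length; i++)
{
    int taskParam = i;
    tasks[i] = Task.Run(() => WorkThreadFunction(taskParam)); // WorkThreadFunction now returns int
}
Task.WaitAll(tasks);
int total = tasks.Sum(t => t.Result); // the final "sum up" step, after every task has finished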
Parallel.ForEach() looks like this:
startfunction()
{
Parallel.ForEach(Enumerable.Range(0, 250), i => WorkThreadFunction(i));
}
public void WorkThreadFunction(int x)
{
try
{
// do some background work
}
catch
{
//
}
}
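The same final sum also works with Parallel.ForEach by accumulating into a shared total with Interlocked.Add (again just a sketch, assuming each work item returns its partial result):
int total = 0;
Parallel.ForEach(Enumerable.Range(0, 250), i =>
{
    int partial = WorkThreadFunction(i); // assumes the work returns an int
    Interlocked.Add(ref total, partial); // thread-safe accumulation across workers
});
// Parallel.ForEach only returns once all iterations are done, so total is final here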
In your main thread you can do something like this...
List<Thread> threads = new List<Thread>();
// CREATE THREADS and add them to the list of threads
while (threads.Any(x => x.IsAlive))
{
Thread.Sleep(500);
}
Console.WriteLine("done");
Essentially, keep track of the threads and check their status every now and then, sleeping between checks so the wait doesn't burn CPU.
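If you'd rather not block the main thread at all, one option (just a sketch, reusing the threads list from above) is to poll from a System.Threading.Timer callback and raise your "all done" notification there:
List<Thread> threads = new List<Thread>();
// CREATE THREADS and add them to the list of threads
System.Threading.Timer monitor = null;
monitor = new System.Threading.Timer(_ =>
{
    if (!threads.Any(x => x.IsAlive))
    {
        monitor.Dispose();         // stop polling once everything has finished
        Console.WriteLine("done"); // or raise an event / marshal back to the UI here
    }
}, null, 500, 500);                // first check after 500 ms, then every 500 ms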
I'm looking for a fast way to let many worker threads wait for an event to continue, and to block the main thread until all worker threads are finished. I first tried TPL and AutoResetEvent, but since my calculation isn't that expensive, the overhead was way too much.
I found a pretty interesting article concerning this problem and got great results (using only one worker thread) with the last synchronization solution (Interlocked.CompareExchange). But I don't know how to utilize it for a scenario where many threads wait for one main thread repeatedly.
Here is an example using single thread, CompareExchange, and Barrier:
static void Main(string[] args)
{
int cnt = 1000000;
var stopwatch = new Stopwatch();
stopwatch.Start();
for (int i = 0; i < cnt; i++) { }
Console.WriteLine($"Single thread: {stopwatch.Elapsed.TotalSeconds}s");
var run = true;
Task task;
stopwatch.Restart();
int interlock = 0;
task = Task.Run(() =>
{
while (run)
{
while (Interlocked.CompareExchange(ref interlock, 0, 1) != 1) { Thread.Sleep(0); }
interlock = 2;
}
Console.WriteLine($"CompareExchange synced: {stopwatch.Elapsed.TotalSeconds}s");
});
for (int i = 0; i < cnt; i++)
{
interlock = 1;
while (Interlocked.CompareExchange(ref interlock, 0, 2) != 2) { Thread.Sleep(0); }
}
run = false;
interlock = 1;
task.Wait();
run = true;
var barrier = new Barrier(2);
stopwatch.Restart();
task = Task.Run(() =>
{
while (run) { barrier.SignalAndWait(); }
Console.WriteLine($"Barrier synced: {stopwatch.Elapsed.TotalSeconds}s");
});
for (int i = 0; i < cnt; i++) { barrier.SignalAndWait(); }
Thread.Sleep(0);
run = false;
if (barrier.ParticipantsRemaining == 1) { barrier.SignalAndWait(); }
task.Wait();
Console.ReadKey();
}
Average results (in seconds) are:
Single thread: 0,002
CompareExchange: 0,4
Barrier: 1,7
As you can see, the Barrier's overhead seems to be around 4 times higher! If someone can rebuild the CompareExchange scenario to work with multiple worker threads, that would surely help, too!
Sure, 1 second of overhead for a million calculations is pretty small! It's mostly just a matter of interest for me.
Edit:
System.Threading.Barrier seems to be the fastest solution for this scenario. To avoid blocking twice (once for "all workers ready for work" and once for "all workers finished"), I used the following code for the best results:
while (work)
{
    while (barrier.ParticipantsRemaining > 1) { Thread.Sleep(0); }
    // Set work package
    barrier.SignalAndWait();
}
It seems like you might want to use a Barrier to synchronise a number of workers with a main thread.
Here's a compilable example. Have a play with it, paying attention to when the output tells you that you can "Press <Return> to signal the workers to start".
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace Demo
{
static class Program
{
static void Main()
{
print("Main thread is starting the workers.");
int numWorkers = 10;
var barrier = new Barrier(numWorkers + 1); // Workers + main (controlling) thread.
for (int i = 0; i < numWorkers; ++i)
{
int n = i; // Prevent modified closure.
Task.Run(() => worker(barrier, n));
}
while (true)
{
print("***************** Press <RETURN> to signal the workers to start");
Console.ReadLine();
print("Main thread is signalling all the workers to start.");
// This will wait for all the workers to issue their call to
// barrier.SignalAndWait() before it returns:
barrier.SignalAndWait();
// At this point, all workers AND the main thread are at the same point.
}
}
static void worker(Barrier barrier, int workerNumber)
{
int iter = 0;
while (true)
{
print($"Worker {workerNumber} on iteration {iter} is waiting for barrier.");
// This will wait for all the other workers AND the main thread
// to issue their call to barrier.SignalAndWait() before it returns:
barrier.SignalAndWait();
// At this point, all workers AND the main thread are at the same point.
int delay = randomDelayMilliseconds();
print($"Worker {workerNumber} got barrier, now sleeping for {delay}");
Thread.Sleep(delay);
print($"Worker {workerNumber} finished work for iteration {iter}.");
}
}
static void print(string message)
{
Console.WriteLine($"[{sw.ElapsedMilliseconds:00000}] {message}");
}
static int randomDelayMilliseconds()
{
lock (rng)
{
return rng.Next(10000) + 5000;
}
}
static Random rng = new Random();
static Stopwatch sw = Stopwatch.StartNew();
}
}
This class uses lock and Interlocked.
Both increaseCount.with_lock.Run(); and increaseCount.with_interlock.Run(); print between 96 and 100.
I am expecting both of them to always print 100. What mistake did I make?
public static class increaseCount {
public static int counter = 0;
public static readonly object myLock = new object();
public static class with_lock {
public static void Run() {
List<Thread> pool = new List<Thread>();
for(int i = 0; i < 100; i++) {
pool.Add(new Thread(f));
}
Parallel.ForEach(pool, x => x.Start());
Console.WriteLine(counter); //should print 100
}
static void f() {
lock(myLock) {
counter++;
}
}
}
public static class with_interlock {
public static void Run() {
List<Thread> pool = new List<Thread>();
for(int i = 0; i < 100; i++) {
pool.Add(new Thread(f));
}
Parallel.ForEach(pool, x => x.Start());
Console.WriteLine(counter);//should print 100
}
static void f() {
Interlocked.Add(ref counter, 1);
}
}
}
In both cases, you start your threads but you don't wait for them to complete, so the counter hasn't necessarily reached 100 before you print the result and the app closes.
If, after starting all the threads, you wait for them all to complete with Thread.Join, you will always get the correct result:
List<Thread> pool = new List<Thread>();
for (int i = 0; i < 100; i++)
{
pool.Add(new Thread(f));
}
Parallel.ForEach(pool, x => x.Start());
foreach (var thread in pool)
{
thread.Join();
}
Console.WriteLine(counter);
Note: This seems like a test of some kind, but you should know that blocking multiple threads on a single lock is a huge waste of resources.
I believe it's because your Parallel.ForEach call simply calls Start on all the threads in pool, but they haven't necessarily completed by the time the loop has finished and Console.WriteLine is called. If you were to insert a Thread.Sleep(5000); // 5s sleep or similar before the Console.WriteLine, it would likely always print what you expect.
Your code is fine. The only problem is your expectation: not all 100 threads get to run before the counter is displayed. Try putting a Thread.Sleep(1000) before the Console.WriteLine(counter) and you shall see what I mean.
Edit: wrongly posted as a comment the first time.
So according to MSDN, and many other places I've read, they use a semaphore and block within the individual threads, like so:
private static Semaphore _pool;
public static void Main()
{
_pool = new Semaphore(3, 3); // 3 slots available from the start
for(int i = 1; i <= 1000; i++)
{
Thread t = new Thread(new ParameterizedThreadStart(Worker));
t.Start(i);
}
}
private static void Worker(object num)
{
try
{
_pool.WaitOne();
// do a long process here
}
finally
{
_pool.Release();
}
}
Wouldn't it make more sense to block in the loop, so that you don't create potentially thousands of threads all at once, depending on the number of iterations in Main()? For example:
private static Semaphore _pool;
public static void Main()
{
_pool = new Semaphore(3, 3); // 3 slots available from the start
for(int i = 1; i <= 1000; i++)
{
_pool.WaitOne(); // wait for semaphore release here
Thread t = new Thread(new ParameterizedThreadStart(Worker));
t.Start(i);
}
}
private static void Worker(object num)
{
try
{
// do a long process here
}
finally
{
_pool.Release();
}
}
Maybe neither way is wrong and it depends on the situation? Or is there a better way to do this when there are a lot of iterations?
Edit: This is a windows service, so I'm not blocking the UI thread.
The reason you would normally do it inside the thread is that you want to keep the exclusive section as small as possible. You don't need the entire thread synchronized, only the part where the thread accesses the shared resource.
So a more realistic version of Worker is:
private static void Worker(object num)
{
//Do a bunch of work that can happen in parallel
try
{
_pool.WaitOne();
// do a small amount of work that can only happen in 3 threads at once
}
finally
{
_pool.Release();
}
//Do a bunch more work that can happen in parallel
}
(P.S. If you are doing something that uses 1000 threads, you are doing something wrong. You should probably be using the ThreadPool or Tasks for many short-lived work items, or make each thread do more work.)
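For example, if you do move to Tasks as the P.S. suggests, the same throttling can be expressed with a SemaphoreSlim instead of 1000 dedicated threads. This is only a sketch under that assumption (usual System.Threading / System.Threading.Tasks / System.Collections.Generic usings), not the original code:
private static SemaphoreSlim _slots = new SemaphoreSlim(3); // at most 3 work items run at once
public static void Main()
{
    var tasks = new List<Task>();
    for (int i = 1; i <= 1000; i++)
    {
        int num = i; // copy the loop variable for the closure
        tasks.Add(Task.Run(async () =>
        {
            await _slots.WaitAsync(); // asynchronous wait, so no thread is blocked here
            try
            {
                // do a long process here, using num
            }
            finally
            {
                _slots.Release();
            }
        }));
    }
    Task.WaitAll(tasks.ToArray()); // block until every work item has finished
}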
Here is how to do it with Parallel.ForEach
private static BlockingCollection<int> _pool;
public static void Main()
{
_pool = new BlockingCollection<int>();
Task.Run(() => //This is run in another thread so it shows data is being taken out and put in at the same time
{
for(int i = 1; i <= 1000; i++)
{
_pool.Add(i);
}
_pool.CompleteAdding(); //Lets the foreach know no new items will be showing up.
});
//This will work on the items in _pool; if there are no items in the collection it will block until CompleteAdding() is called.
Parallel.ForEach(_pool.GetConsumingEnumerable(), new ParallelOptions {MaxDegreeOfParallelism = 3}, Worker);
}
private static void Worker(int num)
{
// do a long process here
}
I have code that creates 5 threads. I need to wait until all the threads have finished their work, and then return a value. How can I do this?
public static int num=-1;
public int GetValue()
{
Thread t=null;
for (int i = 0; i <=5; i++)
{
t = new Thread(() => PasswdThread(i));
t.Start();
}
//how do I wait for all the threads, and then return the value?
return num;
}
public void PasswdThread(int i)
{
Thread.Sleep(1000);
Random r=new Random();
int n=r.Next(10);
if (n==5)
{
num=r.Next(1000);
}
}
Of course this is not a real code. The actual code is much more complicated, so I simplified it.
P.S. Look carefully: I am not using Task, so I can't use the Wait() or WaitAll() methods. I also can't use Join(), because Join waits for one thread; if I start waiting on a thread that has already finished its work, it will wait forever.
Keep the threads in a list as below, start them all, and then Join each one. (Join returns immediately for a thread that has already finished, so the order you join them in doesn't matter.)
List<Thread> threads = new List<Thread>();
for (int i = 0; i <= 5; i++)
{
    int n = i; // capture a copy of the loop variable for the lambda
    Thread t = new Thread(() => PasswdThread(n));
    t.Start();
    threads.Add(t);
}
// wait for all the threads, then return the value
foreach (Thread t in threads)
{
    t.Join();
}
return num;
Create a ManualResetEvent handle for each of your threads, and then call WaitHandle.WaitAll(handles) in your main thread.
static WaitHandle[] handles = new WaitHandle[5];
public void PasswdThread(int i)
{
handles[i] = new ManualResetEvent(false);
Thread.Sleep(1000);
Random r=new Random();
int n=r.Next(10);
if (n==5)
{
num=r.Next(1000);
}
handles[i].Set();
}
Get more information on http://msdn.microsoft.com/en-us/library/z6w25xa6.aspx
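Note that with the code above the main thread could reach WaitHandle.WaitAll before every worker has assigned its slot in handles, so it may see null entries. A safer variant of the same idea (only a sketch; RunAll stands in for wherever you actually start the threads) creates the events up front and only sets them from the workers:
static WaitHandle[] handles;
public void RunAll()
{
    handles = new WaitHandle[5];
    for (int i = 0; i < handles.Length; i++)
    {
        handles[i] = new ManualResetEvent(false); // created before the thread starts
        int n = i;                                // copy the loop variable for the closure
        new Thread(() => PasswdThread(n)).Start();
    }
    WaitHandle.WaitAll(handles); // blocks until every worker has called Set()
}
public void PasswdThread(int i)
{
    try
    {
        // ... the original work goes here ...
    }
    finally
    {
        ((ManualResetEvent)handles[i]).Set(); // signal even if the work throws
    }
}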
You can wait for all the threads by keeping them in an array or list and calling Join on each one (there is no Thread.WaitAll), or in the simplest case you can just use Thread.Sleep(100).
In Thread.Sleep, 100 is the number of milliseconds, so in this case the thread would sleep for 100 milliseconds.
With the Join approach, the collection holds the threads you want to wait for, and joining each one blocks until they have all finished.
As this question is effectively a duplicate, please see this answer (code copied below; all credit to Reed Copsey).
class Program
{
static void Main(string[] args)
{
int numThreads = 10;
ManualResetEvent resetEvent = new ManualResetEvent(false);
int toProcess = numThreads;
// Start workers.
for (int i = 0; i < numThreads; i++)
{
new Thread(delegate()
{
Console.WriteLine(Thread.CurrentThread.ManagedThreadId);
// If we're the last thread, signal
if (Interlocked.Decrement(ref toProcess) == 0)
resetEvent.Set();
}).Start();
}
// Wait for workers.
resetEvent.WaitOne();
Console.WriteLine("Finished.");
}
}
Aside
Also note that your PasswdThread code will not produce well-distributed random numbers: each call creates a new Random seeded from the current time, so threads created in the same time slice get identical sequences. The Random object should be declared statically, outside of your method, and access to it should be synchronized.
Additionally you never use the int i parameter of that method.
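For example, a single shared Random guarded by a lock, along the lines of the rng field in the Barrier example earlier (a small sketch; the NextRandom helper name is just illustrative):
static readonly Random rng = new Random();
static readonly object rngLock = new object();
static int NextRandom(int max)
{
    lock (rngLock) // Random instances are not thread-safe, so serialise access
    {
        return rng.Next(max);
    }
}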
I would use the TPL for this; IMO it's the most up-to-date technique for handling this sort of synchronization. Given that the real-life code is probably more complex, I'll rework the example slightly:
public int GetValue()
{
List<Task<int>> tasks = new List<Task<int>>();
for (int i = 0; i <=5; i++)
{
tasks.Add(PasswdThread(i));
}
Task.WaitAll(tasks.ToArray()); // WaitAll takes an array, not a List<Task<int>>
// You can now query all the tasks:
foreach (int result in tasks.Select(t => t.Result))
{
if (result == 100) // Do something to pick the desired result...
{
return result;
}
}
return -1;
}
public Task<int> PasswdThread(int i)
{
return Task.Factory.StartNew(() => {
Thread.Sleep(1000);
Random r=new Random();
int n=r.Next(10);
if (n==5)
{
return r.Next(1000);
}
return 0;
});
}
Thread t=null;
List<Thread> lst = new List<Thread>();
for (int i = 0; i <=5; i++)
{
t = new Thread(() => PasswdThread(i));
lst.Add(t);
t.Start();
}
//wait for all the threads to finish, then return the value
foreach(var item in lst)
{
while(item.IsAlive)
{
Thread.Sleep(5);
}
}
return num;
I recently started a new job and there is a Windows service here that consumes messages from a private Windows queue. This service consumes messages only from 9am to 6pm, so from 7pm to 8:59am it accumulates a lot of messages in the queue. When it starts processing at 9am, the CPU usage of the service goes very high (98-99 percent), hurting the server's performance.
This service uses threads to process the messages in the queue, but as I have never worked with threads before, I am a little lost.
Here's the part of the code where I am sure this is happening:
private Thread[] th;
//in the constructor of the class, the variable th is initialized like this:
this.th = new Thread[4];
//this method is called every 1 second; CPU usage only goes high when there are a lot of messages in the queue
public void Exec()
{
try
{
AutoResetEvent autoEvent = new AutoResetEvent(false);
int vQtd = queue.GetAllMessages().Length;
while (vQtd > 0)
{
for (int y = 0; y < th.Length; y++)
{
if (this.th[y] == null || !this.th[y].IsAlive)
{
this.th[y] = new Thread(new ParameterizedThreadStart(ProcessMessage));
this.th[y].Name = string.Format("Thread_{0}", y);
this.th[y].Start(new Controller(queue.Receive(), autoEvent));
vQtd--;
}
}
}
}
catch (Exception ex)
{
ExceptionPolicy.HandleException(ex, "RECOVERABLE");
}
}
EDIT: I am trying the second approach posted by Brian Gideon, but I'll be honest: I'm deeply confused by the code and I don't have a clue what it's doing.
I haven't changed the way the 4 threads are created or the other code I showed; I've just changed my Exec method (Exec is the method that is called every second between 9am and 6pm) to this:
public void Exec()
{
try
{
AutoResetEvent autoEvent = new AutoResetEvent(false);
int vQtd = queue.GetAllMessages().Length;
while (vQtd > 0)
{
for (int i = 0; i < 4; i++)
{
var thread = new Thread(
(ProcessMessage) =>
{
while (true)
{
Message message = queue.Receive();
Controller controller = new Controller(message, autoEvent);
//what am I supposed to do with the controller?
}
});
thread.IsBackground = true;
thread.Start();
}
vQtd--;
}
}
catch (Exception ex)
{
ExceptionPolicy.HandleException(ex, "RECOVERABLE");
}
}
Ouch. I have to be honest. That is not a very good design. It could very well be spinning around that while loop waiting for previous threads to finish processing. Here is a much better way of doing it. Notice that the 4 threads are only created once and hang around forever. The code below uses the BlockingCollection from the .NET 4.0 BCL. If you are using an earlier version you can replace it with Stephen Toub's BlockingQueue.
Note: Further refactoring may be warranted in your case. This code tries to preserve some common elements from the original.
public class Example
{
private BlockingCollection<Controller> m_Queue = new BlockingCollection<Controller>();
public Example()
{
for (int i = 0; i < 4; i++)
{
var thread = new Thread(
() =>
{
while (true)
{
Controller controller = m_Queue.Take();
// Do whatever you need to with the Controller here.
}
});
thread.IsBackground = true;
thread.Start();
}
}
public void Exec()
{
try
{
AutoResetEvent autoEvent = new AutoResetEvent(false);
int vQtd = queue.GetAllMessages().Length;
while (vQtd > 0)
{
    m_Queue.Add(new Controller(queue.Receive(), autoEvent));
    vQtd--;
}
}
catch (Exception ex)
{
ExceptionPolicy.HandleException(ex, "RECOVERABLE");
}
}
}
Edit:
Or better yet, since MessageQueue is thread-safe:
public class Example
{
public Example()
{
for (int i = 0; i < 4; i++)
{
var thread = new Thread(
() =>
{
while (true)
{
if (/* between 9am and 6pm */)
{
Message message = queue.Receive();
Controller controller = new Controller(message, /* AutoResetEvent? */);
// Do whatever you need to with the Controller here.
// Is the AutoResetEvent really needed?
}
}
});
thread.IsBackground = true;
thread.Start();
}
}
}
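The /* between 9am and 6pm */ placeholder could be a simple clock check along these lines (only a sketch; adjust the boundaries to whatever window the service really uses). When it returns false, the worker should also sleep briefly so the loop doesn't spin:
// Illustrative helper: true from 09:00 (inclusive) to 18:00 (exclusive).
static bool IsWithinBusinessHours()
{
    TimeSpan now = DateTime.Now.TimeOfDay;
    return now >= TimeSpan.FromHours(9) && now < TimeSpan.FromHours(18);
}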
The method you show runs in a tight loop when all threads are busy. Try something like this:
while (vQtd > 0)
{
bool full = true;
for (int y = 0; y < th.Length; y++)
{
if (this.th[y] == null || !this.th[y].IsAlive)
{
this.th[y] = new Thread(new ParameterizedThreadStart(ProcessMessage));
this.th[y].Name = string.Format("Thread_{0}", y);
this.th[y].Start(new Controller(queue.Receive(), autoEvent));
vQtd--;
full = false;
}
}
if (full)
{
Thread.Sleep(500); // Or whatever it may take for a thread to become free.
}
}
You have two options: either insert a delay after each message with Thread.Sleep(), or lower the priority of the worker threads. If you lower the thread priority, the CPU usage will still be high, but it should not affect the server's performance as much.
Edit: or you can lower the number of threads from 4 to 3 to leave one core for other processing (assuming you have a quad core). This of course reduces your dequeuing throughput.
Edit2: or you could rewrite the whole thing with the Task Parallel Library if you are running .NET 4. Look for Parallel.ForEach(); that should save you some of the footwork if you are not familiar with threads.
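A rough sketch of what that might look like for this queue (assuming the same queue field, Controller type and autoEvent from the code above; the degree of parallelism is capped at 3 so one core stays free, and Enumerable.Range needs System.Linq):
// Drain whatever is in the queue right now, using at most 3 concurrent workers.
int count = queue.GetAllMessages().Length; // how many messages are waiting
Parallel.ForEach(Enumerable.Range(0, count),
    new ParallelOptions { MaxDegreeOfParallelism = 3 },
    _ =>
    {
        var controller = new Controller(queue.Receive(), autoEvent); // Receive() dequeues one message
        // process the controller here, as ProcessMessage did
    });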