I recently started a new job and there is a Windows service here that consumes messages from a private Windows queue. The service only consumes messages from 9am to 6pm, so from 7pm to 8:59am a lot of messages accumulate on the queue. When it starts processing at 9am, the CPU usage of the service jumps to 98-99 percent, hurting the server's performance.
This service uses threads to process the messages on the queue, but since I had never worked with threads before, I am a little lost.
Here's the part of the code where I am sure this is happening:
private Thread[] th;

// In the constructor of the class, th is initialized like this:
this.th = new Thread[4];

// This method is called every second; CPU usage only spikes when there are a lot of messages in the queue.
public void Exec()
{
    try
    {
        AutoResetEvent autoEvent = new AutoResetEvent(false);
        int vQtd = queue.GetAllMessages().Length;
        while (vQtd > 0)
        {
            for (int y = 0; y < th.Length; y++)
            {
                if (this.th[y] == null || !this.th[y].IsAlive)
                {
                    this.th[y] = new Thread(new ParameterizedThreadStart(ProcessMessage));
                    this.th[y].Name = string.Format("Thread_{0}", y);
                    this.th[y].Start(new Controller(queue.Receive(), autoEvent));
                    vQtd--;
                }
            }
        }
    }
    catch (Exception ex)
    {
        ExceptionPolicy.HandleException(ex, "RECOVERABLE");
    }
}
EDIT: I am trying the second approach posted by Brian Gideon, but I'll be honest: I'm deeply confused by the code and I don't have a clue what it's doing.
I haven't changed the way the 4 threads are created or the other code I showed; I only changed my Exec method (the method called every second between 9am and 6pm) to this:
public void Exec()
{
    try
    {
        AutoResetEvent autoEvent = new AutoResetEvent(false);
        int vQtd = queue.GetAllMessages().Length;
        while (vQtd > 0)
        {
            for (int i = 0; i < 4; i++)
            {
                var thread = new Thread(
                    (ProcessMessage) =>
                    {
                        while (true)
                        {
                            Message message = queue.Receive();
                            Controller controller = new Controller(message, autoEvent);
                            // What am I supposed to do with the controller?
                        }
                    });
                thread.IsBackground = true;
                thread.Start();
            }
            vQtd--;
        }
    }
    catch (Exception ex)
    {
        ExceptionPolicy.HandleException(ex, "RECOVERABLE");
    }
}
Ouch. I have to be honest. That is not a very good design. It could very well be spinning around that while loop waiting for previous threads to finish processing. Here is a much better way of doing it. Notice that the 4 threads are only created once and hang around forever. The code below uses the BlockingCollection from the .NET 4.0 BCL. If you are using an earlier version you can replace it with Stephen Toub's BlockingQueue.
Note: Further refactoring may be warranted in your case. This code tries to preserve some common elements from the original.
public class Example
{
    private BlockingCollection<Controller> m_Queue = new BlockingCollection<Controller>();

    public Example()
    {
        for (int i = 0; i < 4; i++)
        {
            var thread = new Thread(
                () =>
                {
                    while (true)
                    {
                        Controller controller = m_Queue.Take();
                        // Do whatever you need to with the Controller here.
                    }
                });
            thread.IsBackground = true;
            thread.Start();
        }
    }

    public void Exec()
    {
        try
        {
            AutoResetEvent autoEvent = new AutoResetEvent(false);
            int vQtd = queue.GetAllMessages().Length;
            while (vQtd > 0)
            {
                m_Queue.Add(new Controller(queue.Receive(), autoEvent));
                vQtd--;
            }
        }
        catch (Exception ex)
        {
            ExceptionPolicy.HandleException(ex, "RECOVERABLE");
        }
    }
}
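If you are stuck on .NET 2.0 or 3.5 and cannot use BlockingCollection, a minimal blocking queue along these lines would work as a drop-in for m_Queue above. This is a simplified sketch of the idea, not Stephen Toub's implementation:

// Minimal blocking queue sketch for pre-.NET 4.0 (needs System.Collections.Generic and System.Threading).
// A simplified stand-in for BlockingCollection<T>; not production-hardened.
public class SimpleBlockingQueue<T>
{
    private readonly Queue<T> m_Items = new Queue<T>();
    private readonly object m_Lock = new object();

    public void Add(T item)
    {
        lock (m_Lock)
        {
            m_Items.Enqueue(item);
            Monitor.Pulse(m_Lock); // Wake one waiting consumer.
        }
    }

    public T Take()
    {
        lock (m_Lock)
        {
            while (m_Items.Count == 0)
            {
                Monitor.Wait(m_Lock); // Block until a producer adds an item.
            }
            return m_Items.Dequeue();
        }
    }
}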
Edit:
Or better yet since MessageQueue is thread-safe:
public class Example
{
    public Example()
    {
        for (int i = 0; i < 4; i++)
        {
            var thread = new Thread(
                () =>
                {
                    while (true)
                    {
                        if (/* between 9am and 6pm */)
                        {
                            Message message = queue.Receive();
                            Controller controller = new Controller(message, /* AutoResetEvent? */);
                            // Do whatever you need to with the Controller here.
                            // Is the AutoResetEvent really needed?
                        }
                    }
                });
            thread.IsBackground = true;
            thread.Start();
        }
    }
}
The method you show runs in a tight loop when all threads are busy. Try something like this:
while (vQtd > 0)
{
    bool full = true;
    for (int y = 0; y < th.Length; y++)
    {
        if (this.th[y] == null || !this.th[y].IsAlive)
        {
            this.th[y] = new Thread(new ParameterizedThreadStart(ProcessMessage));
            this.th[y].Name = string.Format("Thread_{0}", y);
            this.th[y].Start(new Controller(queue.Receive(), autoEvent));
            vQtd--;
            full = false;
        }
    }
    if (full)
    {
        Thread.Sleep(500); // Or whatever it may take for a thread to become free.
    }
}
You have two options: either insert a delay after each message with Thread.Sleep(), or lower the priority of the polling threads. If you lower the thread priority, CPU usage will still be high, but it should not affect the rest of the server as much.
Edit: or you can lower the number of threads from 4 to 3 to leave one core for other processing (assuming you have a quad core). This of course reduces your dequeuing throughput.
Edit2: or you could rewrite the whole thing with the Task Parallel Library if you are running .NET 4. Look at Parallel.ForEach(). That should save you some of the footwork if you are not familiar with threads.
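If you go the thread-priority route, a minimal sketch against the existing code from the question (same four-thread array, same ProcessMessage, Controller, queue and autoEvent) would be:

// Sketch: lower the priority of the dequeuing threads so other work on the box wins CPU contention.
for (int y = 0; y < th.Length; y++)
{
    this.th[y] = new Thread(new ParameterizedThreadStart(ProcessMessage));
    this.th[y].Name = string.Format("Thread_{0}", y);
    this.th[y].Priority = ThreadPriority.BelowNormal; // CPU may still run hot, but the service yields to other processes.
    this.th[y].Start(new Controller(queue.Receive(), autoEvent));
}

Note that BelowNormal only changes who wins the CPU under contention; it does not reduce the total amount of work done.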
I am new to the threading concept and am using threading for the first time in my application. The application processes multiple pieces of data: without threading it takes approximately 2 minutes, while with threading it takes just 25 seconds. But I want a notification when all the threads have finished their work.
private int z = 0;

startfunction()
{
    z = 250;
    start1step(z);
}

private void start1step(int i)
{
    if (i < 0)
        return;
    else
    {
        Thread thread = new Thread(new ThreadStart(WorkThreadFunction));
        thread.Start();
        start1step(--i);
    }
}

public void WorkThreadFunction()
{
    try
    {
        int x = z;
        z--;
        // do some background work
        if (x == 0)
            MessageBox.Show("All Thread finished");
    }
    catch
    {
        //
    }
}
The sample code above works perfectly except for the notification part: I want a notification when all threads have finished their background work. There is one last step left, which sums up the work finished by all these threads.
Please help
There are lots of ways to do this. I would say the two most convenient methods that remain similar to your current implementation involve using a CountdownEvent or switching to the Task class for your threading. A third method involves using the Parallel.ForEach() method, and that might actually suit your specific scenario better.
CountdownEvent looks like this:

CountdownEvent countDown = new CountdownEvent(250);

startfunction()
{
    countDown.Reset();
    start1step(250);
    countDown.Wait();
}

private void start1step(int i)
{
    while (i-- > 0)
    {
        new Thread(WorkThreadFunction).Start(i);
    }
}

public void WorkThreadFunction(object o)
{
    int x = (int)o;
    try
    {
        // do some background work
    }
    catch
    {
        //
    }
    finally
    {
        countDown.Signal();
    }
}
Task looks like this:
startfunction()
{
    Task[] tasks = new Task[250];
    start1step(tasks);
    Task.WaitAll(tasks);
}

private void start1step(Task[] tasks)
{
    for (int i = 0; i < tasks.Length; i++)
    {
        int taskParam = i;
        tasks[i] = Task.Run(() => WorkThreadFunction(taskParam));
    }
}

public void WorkThreadFunction(int x)
{
    try
    {
        // do some background work
    }
    catch
    {
        //
    }
}
Parallel.ForEach() looks like this:
startfunction()
{
    Parallel.ForEach(Enumerable.Range(0, 250), i => WorkThreadFunction(i));
}

public void WorkThreadFunction(int x)
{
    try
    {
        // do some background work
    }
    catch
    {
        //
    }
}
In your main thread you can do something like this...
List<Thread> threads = new List<Thread>();

// CREATE THREADS and add them to the list of threads

while (threads.Any(x => x.IsAlive))
{
    Thread.Sleep(500);
}
Console.WriteLine("done");
Essentially, keep track of the threads and poll their status every now and then, rather than blocking indefinitely on any single thread.
So according to MSDN, and many other places I've read, they use a semaphore and block within the individual threads, like so:
private static Semaphore _pool;

public static void Main()
{
    _pool = new Semaphore(3, 3); // 3 slots available up front, maximum of 3
    for (int i = 1; i <= 1000; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(Worker));
        t.Start(i);
    }
}

private static void Worker(object num)
{
    try
    {
        _pool.WaitOne();
        // do a long process here
    }
    finally
    {
        _pool.Release();
    }
}
Wouldn't it make more sense to block in the loop that creates the threads, so that you don't end up with potentially thousands of threads all at once, depending on the number of iterations in Main()? For example:
private static Semaphore _pool;

public static void Main()
{
    _pool = new Semaphore(3, 3); // 3 slots available up front, maximum of 3
    for (int i = 1; i <= 1000; i++)
    {
        _pool.WaitOne(); // wait for a semaphore slot here, before creating the thread
        Thread t = new Thread(new ParameterizedThreadStart(Worker));
        t.Start(i);
    }
}

private static void Worker(object num)
{
    try
    {
        // do a long process here
    }
    finally
    {
        _pool.Release();
    }
}
Maybe both ways are valid and it depends on the situation? Or is there a better way to do this when there are a lot of iterations?
Edit: This is a windows service, so I'm not blocking the UI thread.
The reason you would normally do it inside the thread is that you want to keep the exclusive section as small as possible. You don't need the entire thread synchronized, only the part where the thread accesses the shared resource.
So a more realistic version of Worker is:
private static void Worker(object num)
{
    // Do a bunch of work that can happen in parallel
    try
    {
        _pool.WaitOne();
        // do a small amount of work that can only happen in 3 threads at once
    }
    finally
    {
        _pool.Release();
    }
    // Do a bunch more work that can happen in parallel
}
(P.S. If you are doing something that uses 1000 threads, you are doing something wrong. You should probably be using the ThreadPool or Tasks for many short-lived workloads, or make each thread do more work.)
Here is how to do it with Parallel.ForEach:

private static BlockingCollection<int> _pool;

public static void Main()
{
    _pool = new BlockingCollection<int>();

    Task.Run(() => // Run in another thread so items are taken out while new ones are still being added
    {
        for (int i = 1; i <= 1000; i++)
        {
            _pool.Add(i);
        }
        _pool.CompleteAdding(); // Lets the foreach know no new items will be showing up.
    });

    // Work on the items in _pool; when the collection is empty this blocks until
    // more items are added or CompleteAdding() has been called.
    Parallel.ForEach(_pool.GetConsumingEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = 3 }, Worker);
}

private static void Worker(int num)
{
    // do a long process here
}
I have code that creates 5 threads. I need to wait until all the threads have finished their work, and then return a value. How can I do this?
public static int num = -1;

public int GetValue()
{
    Thread t = null;
    for (int i = 0; i <= 5; i++)
    {
        t = new Thread(() => PasswdThread(i));
        t.Start();
    }
    // How do I wait for all the threads, and then return the value?
    return num;
}

public void PasswdThread(int i)
{
    Thread.Sleep(1000);
    Random r = new Random();
    int n = r.Next(10);
    if (n == 5)
    {
        num = r.Next(1000);
    }
}
Of course this is not the real code; the actual code is much more complicated, so I simplified it.
P.S. Look carefully: I am not using Task, so I can't use the Wait() or WaitAll() methods. I also can't use Join(), because Join waits on a single thread, and if I start waiting on a thread that has already finished its work, it will wait forever.
Make a list of the threads like below, and Join each one before returning:

List<Thread> threads = new List<Thread>();
for (int i = 0; i <= 5; i++)
{
    Thread t = new Thread(() => PasswdThread(i));
    t.Start();
    threads.Add(t);
}

// Wait for all the threads, then return the value.
foreach (Thread t in threads)
{
    t.Join(); // Returns immediately if the thread has already finished.
}
return num;
Create a ManualResetEvent handle for each of your threads, and then call WaitHandle.WaitAll(handles) in your main thread.

static ManualResetEvent[] handles = new ManualResetEvent[5];

public void PasswdThread(int i)
{
    // Note: ideally create the handle before starting the thread, otherwise
    // WaitAll in the main thread can run while some entries are still null.
    handles[i] = new ManualResetEvent(false);
    Thread.Sleep(1000);
    Random r = new Random();
    int n = r.Next(10);
    if (n == 5)
    {
        num = r.Next(1000);
    }
    handles[i].Set(); // Signal that this thread is done.
}
Get more information on http://msdn.microsoft.com/en-us/library/z6w25xa6.aspx
You can use WaitHandle.WaitAll(handle_array) (there is no Thread.WaitAll that takes an array of threads), or in simple cases you can poll with Thread.Sleep(100).
In Thread.Sleep, 100 is the number of milliseconds, so the thread sleeps for 100 milliseconds between checks.
And in WaitHandle.WaitAll, handle_array is an array of wait handles, one signalled by each thread you want to wait for.
As this question is effectively a duplicate, please see this answer (code copied below; all credit to Reed Copsey).
class Program
{
    static void Main(string[] args)
    {
        int numThreads = 10;
        ManualResetEvent resetEvent = new ManualResetEvent(false);
        int toProcess = numThreads;

        // Start workers.
        for (int i = 0; i < numThreads; i++)
        {
            new Thread(delegate()
            {
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId);

                // If we're the last thread, signal
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            }).Start();
        }

        // Wait for workers.
        resetEvent.WaitOne();
        Console.WriteLine("Finished.");
    }
}
Aside
Also note that your PasswdThread code will tend to produce the same "random" numbers in every thread: each new Random() is seeded from the clock, so instances created in quick succession get the same seed. Declare a single Random instance outside the method (for example as a static field) and share it.
Additionally you never use the int i parameter of that method.
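Here is a small sketch of that fix applied to the PasswdThread from the question (num is the shared field from the original code; a lock is used because Random is not thread-safe):

// One shared generator; guarded because Random is not thread-safe.
private static readonly Random _random = new Random();
private static readonly object _randomLock = new object();

public void PasswdThread(int i)
{
    Thread.Sleep(1000);
    lock (_randomLock)
    {
        if (_random.Next(10) == 5)
        {
            num = _random.Next(1000);
        }
    }
}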
I would use the TPL for this; in my opinion it is the most up-to-date technique for handling this sort of synchronization. Given that the real-life code is probably more complex, I'll rework the example slightly:
public int GetValue()
{
    List<Task<int>> tasks = new List<Task<int>>();
    for (int i = 0; i <= 5; i++)
    {
        tasks.Add(PasswdThread(i));
    }

    Task.WaitAll(tasks.ToArray());

    // You can now query all the tasks:
    foreach (int result in tasks.Select(t => t.Result))
    {
        if (result == 100) // Do something to pick the desired result...
        {
            return result;
        }
    }
    return -1;
}

public Task<int> PasswdThread(int i)
{
    return Task.Factory.StartNew(() =>
    {
        Thread.Sleep(1000);
        Random r = new Random();
        int n = r.Next(10);
        if (n == 5)
        {
            return r.Next(1000);
        }
        return 0;
    });
}
Thread t = null;
List<Thread> lst = new List<Thread>();
for (int i = 0; i <= 5; i++)
{
    int copy = i; // Copy the loop variable so each thread gets its own value.
    t = new Thread(() => PasswdThread(copy));
    lst.Add(t);
    t.Start();
}

// Wait for all the threads, then return the value.
foreach (var item in lst)
{
    while (item.IsAlive)
    {
        Thread.Sleep(5);
    }
}
return num;
This is further to my question here
After doing some reading, I moved away from semaphores to the ThreadPool.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ThreadPoolTest
{
    class Data
    {
        public int Pos { get; set; }
        public int Num { get; set; }
    }

    class Program
    {
        static ManualResetEvent[] resetEvents = new ManualResetEvent[20];

        static void Main(string[] args)
        {
            int s = 0;
            for (int i = 0; i < 100000; i++)
            {
                resetEvents[s] = new ManualResetEvent(false);
                Data d = new Data();
                d.Pos = s;
                d.Num = i;
                ThreadPool.QueueUserWorkItem(new WaitCallback(Process), (object)d);
                if (s >= 19)
                {
                    WaitHandle.WaitAll(resetEvents);
                    Console.WriteLine("Press Enter to Move forward");
                    Console.ReadLine();
                    s = 0;
                }
                else
                {
                    s = s + 1;
                }
            }
        }

        private static void Process(object o)
        {
            Data d = (Data)o;
            Console.WriteLine(d.Num.ToString());
            Thread.Sleep(10000);
            resetEvents[d.Pos].Set();
        }
    }
}
This code works and I am able to process in sets of 20. But I don't like it because of the WaitAll: say I start a batch of 20 and 3 threads take a long time while 17 finish quickly. Those 17 slots then sit idle until the whole batch completes, because of WaitAll.
WaitAny would have been better, but it seems rather messy that I would have to build so many control structures (stacks, lists, queues, etc.) just to use the pool efficiently.
The other thing I don't like is the resetEvents field being global to the class, because the array has to be shared between the Process method and the main loop.
The above code works... but I need your help in improving it.
Again... I am on .NET 2.0 VS 2008. I cannot use .NET 4.0 parallel/async framework.
There are several ways you can do this. Probably the easiest, based on what you've posted above, would be:
const int MaxThreads = 4;
const int ItemsToProcess = 10000;
private Semaphore _sem = new Semaphore(MaxThreads, MaxThreads);

void DoTheWork()
{
    int s = 0;
    for (int i = 0; i < ItemsToProcess; ++i)
    {
        _sem.WaitOne();
        Data d = new Data();
        d.Pos = s;
        d.Num = i;
        ThreadPool.QueueUserWorkItem(Process, d);
        ++s;
        if (s >= 19)
            s = 0;
    }

    // All items have been assigned threads.
    // Now, acquire the semaphore "MaxThreads" times.
    // When counter reaches that number, we know all threads are done.
    int semCount = 0;
    while (semCount < MaxThreads)
    {
        _sem.WaitOne();
        ++semCount;
    }

    // All items are processed
    // Clear the semaphore for next time.
    _sem.Release(semCount);
}

void Process(object o)
{
    // do the processing ...

    // release the semaphore
    _sem.Release();
}
I only used four threads in my example because that's how many cores I have. It makes little sense to be using 20 threads when only four of them can be processing at any one time. But you're free to increase the MaxThreads number if you like.
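If you would rather not hard-code the thread count, one small tweak (a sketch, not part of the original answer) is to size it from Environment.ProcessorCount at runtime; it cannot be a const, so use a static readonly field:

// Size the throttle to the machine instead of hard-coding it.
private static readonly int MaxThreads = Environment.ProcessorCount;
private Semaphore _sem = new Semaphore(MaxThreads, MaxThreads);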
So I'm pretty sure this is all .NET 2.0.
We'll start out defining Action, because I'm so used to using it. If using this solution in 3.5+, remove that definition.
Next, we create a queue of actions based on the input.
After that we define a callback; this callback is the meat of the method.
It first grabs the next item in the queue (using a lock, since the queue isn't thread safe). If it got an item, it executes it and then queues a new work item that is "itself". This is a recursive anonymous method (you don't come across uses of that all that often). It means that when the callback is called for the first time it executes one item, then schedules a task which executes another item, and that item schedules a task that executes another item, and so on. Eventually the queue runs out and no more items are queued.
We also want the method to block until we're all done, so we keep track of how many of these callback chains have finished by incrementing a counter. When that counter reaches maxConcurrentItems we signal the event.
Finally we start N of these callbacks in the thread pool.
public delegate void Action();

public static void Execute(IEnumerable<Action> actions, int maxConcurrentItems)
{
    object key = new object();
    Queue<Action> queue = new Queue<Action>(actions);
    int count = 0;
    AutoResetEvent whenDone = new AutoResetEvent(false);

    WaitCallback callback = null;
    callback = delegate
    {
        Action action = null;
        lock (key)
        {
            if (queue.Count > 0)
                action = queue.Dequeue();
        }

        if (action != null)
        {
            action();
            ThreadPool.QueueUserWorkItem(callback);
        }
        else
        {
            if (Interlocked.Increment(ref count) == maxConcurrentItems)
                whenDone.Set();
        }
    };

    for (int i = 0; i < maxConcurrentItems; i++)
    {
        ThreadPool.QueueUserWorkItem(callback);
    }

    whenDone.WaitOne();
}
Here's another option that doesn't use the thread pool, and just uses a fixed number of threads:
public static void Execute(IEnumerable<Action> actions, int maxConcurrentItems)
{
    Thread[] threads = new Thread[maxConcurrentItems];
    object key = new object();
    Queue<Action> queue = new Queue<Action>(actions);

    for (int i = 0; i < maxConcurrentItems; i++)
    {
        threads[i] = new Thread(new ThreadStart(delegate
        {
            Action action = null;
            do
            {
                lock (key)
                {
                    if (queue.Count > 0)
                        action = queue.Dequeue();
                    else
                        action = null;
                }

                if (action != null)
                {
                    action();
                }
            } while (action != null);
        }));
        threads[i].Start();
    }

    for (int i = 0; i < maxConcurrentItems; i++)
    {
        threads[i].Join();
    }
}
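For what it's worth, calling either Execute overload above might look something like this (the work inside the delegates is just a placeholder):

// Usage sketch: build up a queue of work items and run at most 4 at a time.
List<Action> actions = new List<Action>();
for (int i = 0; i < 100; i++)
{
    int copy = i; // Copy the loop variable for the anonymous delegate.
    actions.Add(delegate
    {
        Console.WriteLine("Processing item {0}", copy); // placeholder work
    });
}
Execute(actions, 4);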
I have several threads consuming tasks from a queue using something similar to the code below. The problem is that there is one type of task which cannot run while any other tasks are being processed.
Here is what I have:
while (true) // Threaded code
{
    while (true)
    {
        lock (locker)
        {
            if (close_thread)
                return;
            task = GetNextTask(); // Get the next task from the queue
        }
        if (task != null)
            break;
        wh.WaitOne(); // Wait until a task is added to the queue
    }
    task.Run();
}
And this is kind of what I need:
while (true)
{
    while (true)
    {
        lock (locker)
        {
            if (close_thread)
                return;
            if (disable_new_tasks)
            {
                task = null;
            }
            else
            {
                task = GetNextTask();
            }
        }
        if (task != null)
            break;
        wh.WaitOne();
    }

    if (!task.IsThreadSafe())
    {
        // I would set this back to false inside task.Run() at
        // the end of the non-thread-safe task
        disable_new_tasks = true;
        Wait_for_all_threads_to_finish_their_current_tasks();
    }
    task.Run();
}
The problem is I don't know how to achieve this without creating a mess.
Try looking into using a ThreadPool and then using the WaitHandle.WaitAll method to determine when all the threads have finished executing.
MSDN
Maybe WaitHandle.WaitAll(autoEvents) is what you want:
class Calculate
{
    double baseNumber, firstTerm, secondTerm, thirdTerm;
    AutoResetEvent[] autoEvents;
    ManualResetEvent manualEvent;

    // Generate random numbers to simulate the actual calculations.
    Random randomGenerator;

    public Calculate()
    {
        autoEvents = new AutoResetEvent[]
        {
            new AutoResetEvent(false),
            new AutoResetEvent(false),
            new AutoResetEvent(false)
        };

        manualEvent = new ManualResetEvent(false);
    }

    void CalculateBase(object stateInfo)
    {
        baseNumber = randomGenerator.NextDouble();

        // Signal that baseNumber is ready.
        manualEvent.Set();
    }

    // The following CalculateX methods all perform the same
    // series of steps as commented in CalculateFirstTerm.
    void CalculateFirstTerm(object stateInfo)
    {
        // Perform a precalculation.
        double preCalc = randomGenerator.NextDouble();

        // Wait for baseNumber to be calculated.
        manualEvent.WaitOne();

        // Calculate the first term from preCalc and baseNumber.
        firstTerm = preCalc * baseNumber * randomGenerator.NextDouble();

        // Signal that the calculation is finished.
        autoEvents[0].Set();
    }

    void CalculateSecondTerm(object stateInfo)
    {
        double preCalc = randomGenerator.NextDouble();
        manualEvent.WaitOne();
        secondTerm = preCalc * baseNumber * randomGenerator.NextDouble();
        autoEvents[1].Set();
    }

    void CalculateThirdTerm(object stateInfo)
    {
        double preCalc = randomGenerator.NextDouble();
        manualEvent.WaitOne();
        thirdTerm = preCalc * baseNumber * randomGenerator.NextDouble();
        autoEvents[2].Set();
    }

    public double Result(int seed)
    {
        randomGenerator = new Random(seed);

        // Simultaneously calculate the terms.
        ThreadPool.QueueUserWorkItem(new WaitCallback(CalculateBase));
        ThreadPool.QueueUserWorkItem(new WaitCallback(CalculateFirstTerm));
        ThreadPool.QueueUserWorkItem(new WaitCallback(CalculateSecondTerm));
        ThreadPool.QueueUserWorkItem(new WaitCallback(CalculateThirdTerm));

        // Wait for all of the terms to be calculated.
        WaitHandle.WaitAll(autoEvents);

        // Reset the wait handle for the next calculation.
        manualEvent.Reset();

        return firstTerm + secondTerm + thirdTerm;
    }
}
You can think of this as similar to a data structure that allows any number of readers, or one writer. That is, any number of threads can read the data structure, but a thread that writes to the data structure needs exclusive access.
In your case, you can have any number of "regular" threads running, or you can have one thread that requires exclusive access.
.NET has the ReaderWriterLock and ReaderWriterLockSlim classes that you could use to implement this kind of sharing. Unfortunately, neither of those classes is available on the xbox.
However, it is possible to implement a reader/writer lock from a combination of Monitor and ManualResetEvent objects. I don't have a C# example (why would I, since I have access to the native objects?), but there's a simple Win32 implementation that shouldn't be terribly difficult to port.
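To give a rough idea of what such a port could look like, here is a minimal C# sketch using only Monitor and a ManualResetEvent. It is a simplified illustration (no fairness, recursion, or upgrade support), not a port of the Win32 implementation mentioned above:

// Minimal reader/writer lock sketch: any number of readers OR a single writer.
public class SimpleReaderWriterLock
{
    private readonly object _sync = new object();
    private readonly ManualResetEvent _noReaders = new ManualResetEvent(true);
    private int _readerCount;
    private bool _writerActive;

    public void EnterRead()
    {
        lock (_sync)
        {
            while (_writerActive)
            {
                Monitor.Wait(_sync); // Wait until the writer leaves.
            }
            _readerCount++;
            _noReaders.Reset(); // At least one reader is inside.
        }
    }

    public void ExitRead()
    {
        lock (_sync)
        {
            _readerCount--;
            if (_readerCount == 0)
            {
                _noReaders.Set(); // Last reader out; a waiting writer may proceed.
            }
        }
    }

    public void EnterWrite()
    {
        lock (_sync)
        {
            while (_writerActive)
            {
                Monitor.Wait(_sync); // Only one writer at a time.
            }
            _writerActive = true; // New readers now block in EnterRead.
        }
        _noReaders.WaitOne(); // Wait for readers already inside to finish.
    }

    public void ExitWrite()
    {
        lock (_sync)
        {
            _writerActive = false;
            Monitor.PulseAll(_sync); // Wake any waiting readers and writers.
        }
    }
}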
You can use something like this (the example below is Java, using an ExecutorService, but the same idea applies):
ExecutorService workers = Executors.newFixedThreadPool(10);
for (int i = 0; i < input.length; i++) {
    Teste task = new Teste(rowArray, max); // your thread class
    workers.execute(task);
}
workers.shutdown(); // ask for shutdown
while (!workers.isTerminated()) { // wait until all tasks finish
    try {
        Thread.sleep(100);
    } catch (InterruptedException exception) {
    }
    System.out.println("waiting for submitted task to finish operation");
}
Hope this helps.