I don't know how to describe this problem precisely. Let's look at my code.
for (int i = 0; i < myMT.Keys[key_indexer].Count; i++)
{
threads.Add(new Thread(
() =>
{
sounds[myMT.Keys[key_indexer][i]].PlayLooping();
}
));
threads[threads.Count - 1].Start();
}
Note: sounds is a list of SoundPlayers
The initialization of threads and myMT:
List<Thread> threads = null;
MusicTransfer myMT=null;
and in the constructor:
threads = new List<Thread>();
myMT = new MusicTransfer(bubblePanel);
The variable Keys in myMT is of type List<List<int>>. It is initialized the same way as myMT and threads. Imagine a matrix: the outer list is a list of rows and the inner one holds the cells of a row.
When I run the program, myMT.Keys[key_indexer].Count is 1. So, normally, the for loop should stop when i reaches 1.
However, it throws an ArgumentOutOfRange exception at the line sounds[myMT.Keys[key_indexer][i]].PlayLooping(). So I used the debugger to check the value of each variable.
Here is what I found:
If I use "step over" to go through it step by step, which means quite a lot of time passes after each new thread starts, the for loop stops when i reaches 1, the way it should.
If I click "continue" after the breakpoint is triggered, the for loop keeps going even after i equals 1.
The breakpoint has to be set at the line threads.Add(new Thread(. If it is set at the line sounds[myMT.Keys[key_indexer][i]].PlayLooping(); instead, the exception is triggered even with "step over".
I guess the problem is thread-related, but I have no idea how to solve it.
Thanks for any help!
There are so many things wrong with your post; however, maybe this will help you out a bit.
Note: make your code readable. Trust me, it does wonders.
// List of threads
var threads = new List<Thread>();
// Let's stop indexing everything and make it easy for ourselves
var someList = myMT.Keys[key_indexer];
for (var i = 0; i < someList.Count; i++)
{
// capture the value for this iteration in a local,
// otherwise there is no guarantee the thread will see
// the right index when it eventually runs
// (thank me later)
var someSound = sounds[someList[i]];
// create a thread and your callback
var thread = new Thread(() => someSound.PlayLooping());
// add thread to the list
threads.Add(thread);
}
// now let's start the threads in a nice orderly fashion
foreach (var thread in threads)
{
thread.Start();
}
Another way to do this is with Tasks:
var tasks = new List<Task>();
var someList = myMT.Keys[key_indexer];
for (var i = 0; i < someList.Count; i++)
{
var someSound = sounds[someList[i]];
var task = new Task(() => someSound.PlayLooping());
tasks.Add(task);
task.Start();
}
Task.WaitAll(tasks.ToArray());
Disclaimer: I take no responsibility for your other logic problems; this was purely for morbid academic purposes.
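For what it's worth, the ArgumentOutOfRange exception you saw is the classic loop-variable capture problem: the lambda captures the variable i itself, not its current value, so by the time the thread actually runs, the loop may already have finished and i equals Count. A minimal sketch with a made-up list (not your types) to show the difference:
// using System; using System.Collections.Generic; using System.Threading;
var numbers = new List<int> { 10, 20, 30 };
for (int i = 0; i < numbers.Count; i++)
{
    // BAD: captures the variable i; when the thread runs, i may already be
    // numbers.Count, so numbers[i] throws ArgumentOutOfRangeException.
    // new Thread(() => Console.WriteLine(numbers[i])).Start();

    // GOOD: copy the value into a local; each closure gets its own copy.
    int copy = i;
    new Thread(() => Console.WriteLine(numbers[copy])).Start();
}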
Related
I have this particular piece of code:
for (int i = 0; i < SingleR_mustBeWorkedUp._number_of_Requestes; i++)
{
Random myRnd = new Random(SingleR_mustBeWorkedUp._num_path);
while (true)
{
int k = myRnd.Next(start, end);
if (CanRequestBePutted(timeLineR, k, SingleR_mustBeWorkedUp._time_service, start + end) == true)
{
SingleR_mustBeWorkedUp.placement[i] = k;
break;
}
}
}
I use an infinite loop here which ends only when CanRequestBePutted returns true. So how can I tell that the app has stopped responding?
One option is to limit how long each loop iteration is allowed to run, but that doesn't seem very good, and I can't predict what will happen in every case.
Any solutions?
If you're concerned that this operation could potentially take long enough for the application's user to notice, you should be running it on a non-UI thread. Then you can be sure that it will not be making your application unresponsive. You should only run it on the UI thread if you're sure it will always complete very quickly. When in doubt, go to a non-UI thread.
Don't try to figure out dynamically whether the operation will take a long time or not. If it taking a while is a possibility, do the work in another thread.
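As a rough sketch (assuming WinForms and .NET 4; DoPlacementWork and statusLabel are hypothetical stand-ins for your loop and your UI), pushing the work off the UI thread can be as simple as:
// sketch only: statusLabel and DoPlacementWork are placeholders, not from the question
private void StartPlacement()
{
    Task.Factory.StartNew(() =>
    {
        DoPlacementWork(); // the long-running placement loop, off the UI thread
    })
    .ContinueWith(t =>
    {
        // back on the UI thread to report the outcome
        statusLabel.Text = t.IsFaulted ? "Placement failed" : "Placement done";
    }, TaskScheduler.FromCurrentSynchronizationContext());
}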
Why not use a task or the thread pool so you're not blocking, and put a timer on it?
The task could look something like this:
//put a class level variable
static object _padlock = new object();
var tasks = new List<Task>();
for (int i = 0; i < SingleR_mustBeWorkedUp._number_of_Requestes; i++)
{
int index = i; // capture the loop variable so the closure uses the right slot
var task = new Task(() =>
{
Random myRnd = new Random(SingleR_mustBeWorkedUp._num_path);
while (true)
{
int k = myRnd.Next(start, end);
if (CanRequestBePutted(timeLineR, k, SingleR_mustBeWorkedUp._time_service, start + end) == true)
{
lock(_padlock)
SingleR_mustBeWorkedUp.placement[index] = k;
break;
}
}
});
task.Start();
tasks.Add(task);
}
Task.WaitAll(tasks.ToArray());
However, I would also try to figure out a way to take out your while(true), which is a bit dangerous. Also, Task requires .NET 4.0 or above, and I'm not sure which framework you're targeting.
If you need something older you can use ThreadPool.
Also, you might want to put locks around shared resources like SingleR_mustBeWorkedUp.placement, or anywhere else that might be changing a shared variable. I put one around SingleR_mustBeWorkedUp.placement as an example.
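For an older framework, a sketch of the same idea with ThreadPool (using a counter plus a ManualResetEvent instead of Task.WaitAll) could look roughly like this:
object padlock = new object();
int remaining = SingleR_mustBeWorkedUp._number_of_Requestes;
using (ManualResetEvent allDone = new ManualResetEvent(false))
{
    for (int i = 0; i < SingleR_mustBeWorkedUp._number_of_Requestes; i++)
    {
        int index = i; // capture the loop variable for the closure
        ThreadPool.QueueUserWorkItem(delegate
        {
            Random myRnd = new Random(SingleR_mustBeWorkedUp._num_path);
            while (true)
            {
                int k = myRnd.Next(start, end);
                if (CanRequestBePutted(timeLineR, k, SingleR_mustBeWorkedUp._time_service, start + end))
                {
                    lock (padlock)
                        SingleR_mustBeWorkedUp.placement[index] = k;
                    break;
                }
            }
            // signal once the last work item finishes
            if (Interlocked.Decrement(ref remaining) == 0)
                allDone.Set();
        });
    }
    allDone.WaitOne();
}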
I have a while loop in which I create and start tasks, as follows:
while (!stopped)
{
List<Task> tasks = new List<Task>();
for (int i = 0; i < 10; i++)
tasks.Add(Task.Factory.StartNew(() => DoSomething(i)));
Task.WaitAll(tasks.ToArray());
}
Would I get better performance if the tasks were created once before the while loop and restarted every time (since the data being passed to the function never changes)?
You can't restart a task:
http://msdn.microsoft.com/en-us/library/dd270682.aspx
By the way
Task.Factory.StartNew(() => {....})
is faster than
Task task = new Task(() => {...});
task.Start();
because there is no locking on the Start method.
In your case, use async I/O to get a performance boost.
There is nothing fundamentally wrong with your code. This is a perfectly acceptable approach. You do not need to worry about the performance or expense of creating tasks, because the TPL was specifically designed to be used exactly like you have done.
However, there is one major problem with your code. You are closing over the loop variable. Remember, closures capture the variable, not the value. The way your code is written, the DoSomething method will not be using the value of i that you think it should. Your code needs to be rewritten like this:
while (!stopped)
{
List<Task> tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
int capture = i;
tasks.Add(Task.Factory.StartNew(() => DoSomething(capture)));
}
Task.WaitAll(tasks.ToArray());
}
As a side note, you could use the Parallel.For method as an alternative. It is definitely a much more compact solution, if nothing else.
while (!stopped)
{
Parallel.For(0, 10, i => DoSomething(i));
}
It seems that if a given thread fails for any reason, this will cause an infinite loop.
This code wasn't written by me, so I can't even edit it, but I think the most obvious problem here is that the counter variable totalActions isn't marked as volatile, and as a result the threads are not seeing the most up-to-date value.
So it looks like, if it never sees the real value of totalActions, it will keep waiting?
Will this cause the thread to run recursively then? While debugging, I notice that the executing thread fails (an exception is thrown), and it just keeps getting called over and over and over...
public void PerformActions(List<Action> actions)
{
object actionLock = new object();
int totalActions = actions.Count;
for(int x = 0; x < accounts.Count; x++)
{
int y = x;
new Thread(delegate()
{
actions[y].Invoke();
if(Interlocked.Decrement(ref totalActions) == 0)
{
lock(actionLock)
{
Monitor.Pulse(actionLock);
}
}
}).Start();
}
lock(actionLock)
{
if(totalActions > 0)
{
Monitor.Wait(actionLock);
}
}
}
Update
Usage is like this, where myService makes HTTP requests to grab JSON responses from an API service.
Execute.InParallel(
new Action[]
{
() => { abc = myService.DoSomething(); },
() => { def = myService.DoSomethingElse(); }
});
The lock will act as a memory barrier, ensuring that your test if(totalActions > 0) reads the current value. I'm not convinced that this code is race-free but the race would at least be very, very unlikely. You'd have a hard time reproducing it.
So the problem is something else not shown here. Can you use the debugger to find out what exactly the threads involved are doing?
You say some threads die due to an unhandled exception. Maybe a thread exiting early causes the count never to be decremented.
Also, if you can't change the code, what is the point of the question? I'm not sure what to suggest to you.
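If you really cannot touch PerformActions, one possible workaround is to wrap each action at the call site so an exception can never escape and skip the Interlocked.Decrement. A sketch using the names from your usage example:
Exception firstError = null;
Execute.InParallel(
    new Action[]
    {
        () => { try { abc = myService.DoSomething(); }
                catch (Exception ex) { Interlocked.CompareExchange(ref firstError, ex, null); } },
        () => { try { def = myService.DoSomethingElse(); }
                catch (Exception ex) { Interlocked.CompareExchange(ref firstError, ex, null); } }
    });
if (firstError != null)
    throw firstError; // or log it; the point is the wait no longer hangs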
The loop is incorrect in that the variable x is captured incorrectly: it will always have the last value of x by the time actions[x].Invoke(); is executed in each thread. So the last delegate in the array will be called multiple times.
The correct way to do it is like this:
for(int x = 0; x < accounts.Count; x++)
{
int y = x; // here the correct value is captured for the delegate
new Thread(delegate()
{
actions[y].Invoke();
...
I have a very simple program counting the characters in a string. An integer threadnum sets the number of threads, and the data is divided into threadnum chunks, one for each thread to process.
Each thread increments the values contained in a shared dictionary, building a character histogram.
private Dictionary<UInt32, int> dict = new Dictionary<UInt32, int>();
In order to wait for all threads to finish and continue with the main process, I invoke Thread.Join
Initially I had a local dictionary for each thread, which got merged afterwards, but a shared dictionary worked fine, without locking.
No references are locked in the method BuildDictionary, though locking the dictionary did not significantly impact thread-execution time.
Each thread is timed, and the resulting dictionary compared.
The dictionary content is the same whether I use a single thread or multiple threads, as it should be.
Each thread takes a fraction of the time, determined by threadnum, to complete, as it should.
Problem:
The total time is roughly a multiple of threadnum; that is to say, the execution time increases with the number of threads. Why?
(Unfortunately I cannot run a C# profiler at the moment. Additionally, I would prefer C# 3 compatible code.)
Others are likely struggling with this as well. Could it be that the VS 2010 Express Edition vshost process stacks threads and schedules them to run sequentially?
Another multithreading performance issue was recently posted here as "Visual Studio C# 2010 Express Debug running Faster than Release":
Code:
public int threadnum = 8;
Thread[] threads = new Thread[threadnum];
Stopwatch stpwtch = new Stopwatch();
stpwtch.Start();
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
threads[threadidx] = new Thread(BuildDictionary);
threads[threadidx].Start(threadidx);
threads[threadidx].Join(); //Blocks the calling thread, till thread completion
}
WriteLine("Total - time: {0} msec", stpwtch.ElapsedMilliseconds);
Can you help please?
Update:
It appears that the strange behavior, an almost linear slowdown with increasing thread count, is an artifact of the numerous hooks of the IDE's debugger.
Running the process outside the development environment, I actually do get a 30% speed increase on a machine with 2 logical/physical cores. During debugging I am already at the high end of CPU utilization, so I suspect it is wise to keep some leeway during development in the form of additional idle cores.
As initially planned, I now let each thread compute on its own local data chunk, which is locked, written back to a shared list, and aggregated after all threads have finished.
Conclusion:
Be heedful of the environment the process is running in.
We can put the dictionary synchronization issues Tony the Lion mentions in his answer aside for the moment, because in your current implementation you are in fact not running anything in parallel!
Let's take a look at what you are currently doing in your loop:
Start a thread.
Wait for the thread to complete.
Start the next thread.
In other words, you should not be calling Join inside the loop.
Instead, you should start all threads as you are doing, but use a signaling construct such as an AutoResetEvent to determine when all threads have completed.
See example program:
class Program
{
static EventWaitHandle _waitHandle = new AutoResetEvent(false);
static void Main(string[] args)
{
int numThreads = 5;
for (int i = 0; i < numThreads; i++)
{
new Thread(DoWork).Start(i);
}
for (int i = 0; i < numThreads; i++)
{
_waitHandle.WaitOne();
}
Console.WriteLine("All threads finished");
}
static void DoWork(object id)
{
Thread.Sleep(1000);
Console.WriteLine(String.Format("Thread {0} completed", (int)id));
_waitHandle.Set();
}
}
Alternatively you could just as well be calling Join in the second loop if you have references to the threads available.
After you have done this you can and should worry about the dictionary synchronization problems.
From MSDN: a Dictionary can support multiple readers concurrently, as long as the collection is not modified.
You say:
but a shared dictionary worked fine, without locking.
Each thread increments the values contained in a shared dictionary
Your program is broken by definition: if you alter the data in the dictionary without proper locking, you will end up with bugs. Nothing more needs to be said.
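If you do want to keep the single shared dictionary, every increment has to go through a lock. A minimal sketch (C# 3 compatible, assuming dict and a lock object are fields) of what each thread would have to do for a character value c:
private readonly object dictLock = new object();
private Dictionary<UInt32, int> dict = new Dictionary<UInt32, int>();

// inside BuildDictionary, for each character value c in this thread's chunk:
lock (dictLock)
{
    int count;
    if (dict.TryGetValue(c, out count))
        dict[c] = count + 1;
    else
        dict[c] = 1;
}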
I wouldn't use a shared static Dictionary; if each thread worked on a local copy, you could amalgamate your results once all threads had signalled completion.
WaitHandle.WaitAll avoids any deadlocking on an AutoResetEvent.
class Program
{
static void Main()
{
char[] text = "Some String".ToCharArray();
int numThreads = 5;
// I leave the implementation of the next line to the OP.
Partition[] partitions = PartitionWork(text, numThreads);
var completions = new EventWaitHandle[numThreads];
var results = new IDictionary<char, int>[numThreads];
for (int i = 0; i < numThreads; i++)
{
results[i] = new Dictionary<char, int>();
completions[i] = new ManualResetEvent(false);
int start = partitions[i].Start;
int end = partitions[i].End;
IDictionary<char, int> partResult = results[i];
EventWaitHandle completion = completions[i];
new Thread(() => BuildDictionary(text, start, end, partResult, completion)).Start();
}
if (WaitHandle.WaitAll(completions, new TimeSpan(366, 0, 0, 0)))
{
Console.WriteLine("All threads finished");
}
else
{
Console.WriteLine("Timed out after a year and a day");
}
// Merge the results
IDictionary<char, int> result = results[0];
for (int i = 1; i < numThreads; i++)
{
foreach(KeyValuePair<char, int> item in results[i])
{
if (result.ContainsKey(item.Key))
{
result[item.Key] += item.Value;
}
else
{
result.Add(item.Key, item.Value);
}
}
}
}
static void BuildDictionary(
char[] text,
int start,
int finish,
IDictionary<char, int> result,
EventWaitHandle completed)
{
for (int i = start; i <= finish; i++)
{
if (result.ContainsKey(text[i]))
{
result[text[i]]++;
}
else
{
result.Add(text[i], 1);
}
}
completed.Set();
}
}
With this implementation, the only variable that is ever shared is the char[] of the text, and that is only ever read.
You do have the burden of merging the dictionaries at the end, but that is a small price to pay for avoiding any concurrency issues. In a later version of the framework I would have used the TPL and ConcurrentDictionary, and possibly Partitioner<TSource>.
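For reference, on .NET 4 and later the same histogram could be sketched in a few lines, letting the TPL partition the input and ConcurrentDictionary handle the synchronization:
// using System.Collections.Concurrent; using System.Threading.Tasks;
char[] text = "Some String".ToCharArray();
var histogram = new ConcurrentDictionary<char, int>();
Parallel.ForEach(text, c => histogram.AddOrUpdate(c, 1, (key, count) => count + 1));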
I totally agree with TonyTheLion and the others, and even once you fix the actual problem of Join'ing in the wrong place, there will still be a problem with the (missing) locks when updating the shared dictionary. I wanted to drop you a quick workaround: just wrap your integer value in an object:
instead of:
Dictionary<uint, int> dict = new Dictionary<uint, int>();
use:
class Entry { public int value; }
Dictionary<uint, Entry> dict = new Dictionary<uint, Entry>();
and now increment Entry.value instead. That way, the Dictionary structure itself never changes, so it is safe to use without locking the dictionary.
Note, however: this only works if each thread is guaranteed to touch only its own Entry. I've just noticed that this is not true here, since you said "histogram of characters". You will have to lock each Entry during the increment, or some increments may be lost. Still, locking at the Entry level will be significantly faster than locking the whole dictionary.
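A sketch of that per-Entry approach (assuming the key space is known up front, so the dictionary can be fully populated before any thread starts): use Interlocked.Increment on the wrapped field instead of a lock.
class Entry { public int value; }

// populate an Entry for every possible key before any thread starts;
// afterwards the dictionary structure itself is only ever read
var dict = new Dictionary<uint, Entry>();
for (uint c = 0; c < 256; c++)   // assumes single-byte character codes
    dict[c] = new Entry();

// inside each worker thread, for a character value c in its chunk:
Interlocked.Increment(ref dict[c].value);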
Roem saw it.
Your main thread should Join the X other Threads after having started all of them.
Otherwise it waits for the 1st thread to finish before starting and waiting on the 2nd one.
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
threads[threadidx] = new Thread(BuildDictionary);
threads[threadidx].Start(threadidx);
}
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
threads[threadidx].Join(); //Blocks the calling thread, till thread completion
}
As Rotem points out, by joining inside the loop you are waiting for each thread to complete before continuing.
The hint for why this happens can be found in the Thread.Join documentation on MSDN:
Blocks the calling thread until a thread terminates
So your loop will not continue until that one thread has completed its work. To start all the threads and then wait for them to complete, join them outside the loop:
public int threadnum = 8;
Thread[] threads = new Thread[threadnum];
Stopwatch stpwtch = new Stopwatch();
stpwtch.Start();
// Start all the threads doing their work
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
threads[threadidx] = new Thread(BuildDictionary);
threads[threadidx].Start(threadidx);
}
// Join to all the threads to wait for them to complete
for (var threadidx = 0; threadidx < threadnum; threadidx++)
{
threads[threadidx].Join();
}
System.Diagnostics.Debug.WriteLine("Total - time: {0} msec", stpwtch.ElapsedMilliseconds);
You will really need to post your BuildDictionary function. It is very likely that the operation will be no faster with multiple threads and the threading overhead will actually increase execution time.
I've searched all morning and I can't seem to find the answer to this question.
I have an array of Threads, each doing work, and then I loop through their ids, joining each one and then starting new threads. What's the best way to detect when a thread has finished, so I can fire off a new thread without waiting for each of the others to finish?
EDIT: added a code snippet, maybe this will help
if (threadCount > maxItems)
{
threadCount = maxItems;
}
threads = new Thread[threadCount];
for (int i = 0; i < threadCount; i++)
{
threads[i] = new Thread(delegate() { this.StartThread(); });
threads[i].Start();
}
while (loopCounter < threadCount)
{
if (loopCounter == (threadCount - 1))
{
loopCounter = 0;
}
if (threads[loopCounter].ThreadState == ThreadState.Stopped)
{
threads[loopCounter] = new Thread(delegate() { this.StartThread(); });
threads[loopCounter].Start();
}
}
Rather than creating a new thread each time, why not just have each thread call a function that returns the next ID (or null if there's no more data to process) when it's finished with the current one? That function will obviously have to be thread-safe, but it should reduce your overhead versus watching for finished threads and starting new ones.
so,
void RunWorkerThreads(int threadCount) {
for (int i = 0; i < threadCount; ++i) {
new Thread(() => {
while(true) {
var nextItem = GetNextItem();
if (nextItem == null) break;
/*do work*/
}
}).Start();
}
}
T GetNextItem() {
lock(_lockObject) {
//return the next item
}
}
I'd probably pull GetNextItem and "do work" out and pass them as parameters to RunWorkerThreads to make it more generic -- so it would be RunWorkerThreads<T>(int count, Func<T> getNextItem, Action<T> workDoer), but that's up to you.
Note that Parallel.ForEach() essentially does this, plus gives you ways of monitoring and aborting and such, so there's probably no need to reinvent the wheel here.
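For example, a rough Parallel.ForEach version of the same idea (workItems and DoWork are placeholders for your data source and per-item body) might be:
Parallel.ForEach(
    workItems,                                                     // any IEnumerable<T> of pending items
    new ParallelOptions { MaxDegreeOfParallelism = threadCount },  // cap the concurrency
    item => DoWork(item));                                         // per-item work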
You can check the thread's ThreadState property and when it's Stopped you can kick off a new thread.
http://msdn.microsoft.com/en-us/library/system.threading.thread.threadstate.aspx
http://msdn.microsoft.com/en-us/library/system.threading.threadstate.aspx
Get each thread, as the last thing it does, to signal that it is done. That way there needs to be no waiting at all.
Even better, move to a higher level of abstraction, e.g. a thread pool, and let someone else worry about such details.
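As a rough sketch of that idea on .NET 4 (workItems and DoWork are again placeholders): a SemaphoreSlim keeps at most threadCount items in flight, and each finished item releases its slot so the next one starts immediately, with no polling of ThreadState.
var gate = new SemaphoreSlim(threadCount);
foreach (var item in workItems)
{
    gate.Wait();                  // blocks only while threadCount items are already running
    var current = item;           // capture for the closure (older compilers reuse the foreach variable)
    Task.Factory.StartNew(() =>
    {
        try { DoWork(current); }
        finally { gate.Release(); }
    });
}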