I'm trying to figure out the output of this code:
Dictionary<int, MyRequest> request = new Dictionary<int, MyRequest>();
for (int i = 0; i < 1000; i++)
{
    request.Add(i, new MyRequest() { Name = i.ToString() });
}
var ids = request.Keys.ToList();
Parallel.For(0, ids.Count, (t) =>
{
    var id = ids[t];
    var b = request[id];
    lock (b)
    {
        if (b.Name == 4.ToString())
        {
            Thread.Sleep(10000);
        }
        Console.WriteLine(b.Name);
    }
});
Console.WriteLine("done");
Console.Read();
output:
789
800
875
.
.
.
4
5
6
7
done
MyRequest is just a dummy class used for demonstration (it is not doing anything but holding values). Is my lock blocking the execution or are the last 4 being put on their own thread?
This is a .NET 4.0 demo.
UPDATE
Ok, I did figure out they were on the same thread, but I would still like to know if the lock does anything to block execution. I can't imagine it does.
If ids does not contain duplicates, that lock won't block anything. But if there are duplicates in ids, then yes, there might be contention at the lock, as different threads fight for access to the same request.
Your lock will only be blocking execution if the ids line up such that you retrieve the same request more than once. Since different names are being printed each time, that shouldn't be a concern.
Parallel.For uses a thread pool to process your loop. As soon as one of its threads is free, it assigns it to the next element. This is non-deterministic, because you don't know how many threads there are in the pool, and you don't control the CPU time given to each thread. This means that some threads may finish sooner or later than you would "naturally" expect.
Your lock isn't doing anything. A lock delimits sections of code that attempt to use the same object. In your case, you never use the same object twice in the loop, so there is nothing to contend on. The fact that the last IDs processed seem consistent is probably purely coincidental.
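To see the lock actually block, you'd have to make two iterations share one instance. A hypothetical sketch (not from the question's code): both threads fetch request[4], so the second one waits at the lock while the first sleeps.
var shared = request[4];
Parallel.For(0, 2, _ =>
{
    lock (shared)            // same object on both threads => real contention
    {
        Thread.Sleep(1000);  // hold the lock long enough to see the serialization
        Console.WriteLine(shared.Name);
    }
});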
I can already see that it's not, from the incorrect increments, but there's just one small piece of the puzzle I can't quite seem to catch.
We have the following code:
internal class StupidObject
{
    public static SemaphoreSlim semaphore = new SemaphoreSlim(0, 100);

    private int counter;

    public bool MethodCall() => counter++ == 0;

    public int GetCounter() => counter;
}
And the following test code to try and see if it's an atomic operation:
var sharedObj = new StupidObject();
var resultTasks = new Task[100];
for (int i = 0; i < 100; i++)
{
    resultTasks[i] = Task.Run(async () =>
    {
        await StupidObject.semaphore.WaitAsync();
        if (sharedObj.MethodCall())
        {
            Console.WriteLine("True");
        }
    });
}
Console.WriteLine("Done");
Console.ReadLine();
StupidObject.semaphore.Release(100);
Console.ReadLine();
Console.WriteLine(sharedObj.GetCounter());
Console.ReadLine();
I expect to see multiple True's written to the console, but I only ever see a single one.
Why is that? By my understanding, a ++ operation reads the value, increments the read value, and then stores that value back in the variable.
Those are 3 operations. If we had a race condition, thread A could do the following:
Read the value as 0.
Increment the read value by 1.
And another thread B could do the same, but beat thread A to the third operation:
Write the read value back to the variable.
When A then finishes writing its own incremented read value, both threads have seen counter++ return 0, so both should print True.
Am I missing something in the design aspect of things, or is my test just not good enough to make this exact situation come to fruition?
Example without the Task Parallel Library (still yields a single True to the console):
var sharedObj = new StupidObject();
var resultTasks = new Thread[10000];
for (int i = 0; i < 10000; i++)
{
    resultTasks[i] = new Thread(() =>
    {
        StupidObject.semaphore.Wait();
        if (sharedObj.MethodCall())
        {
            Console.WriteLine("True");
        }
    });
    resultTasks[i].IsBackground = false;
    resultTasks[i].Start();
}
Console.WriteLine("Done");
Console.ReadLine();
StupidObject.semaphore.Release(10000);
What Liam said about Console.WriteLine is possible, but also there's another thing.
Starting Tasks doesn't equal starting threads, and even starting threads doesn't guarantee that all of them begin immediately. Starting 100 short tasks probably won't even fill .NET's thread pool significantly, because those tasks end quickly and the thread pool's manager probably won't start more than 3-5 threads. That's not the "immediate" and "parallel" behavior you'd like to see when you want 100 parallel increments racing with each other, right? Remember that Tasks are queued first, then assigned to threads.
Note that the StupidObject's counter starts at zero, and that is the ONLY MOMENT EVER that the value is zero. If ANY thread wins the race and successfully writes an update to that integer, all future tasks will get FALSE, because the counter is already 1.
And if there are many tasks on the thread pool's queue, something first has to notice that fact. At program start, the thread pool lacks threads; they are not started by the dozen right away, but on demand. Most probably you fill up the queue with 100 tasks, one thread pool thread is created, it picks the first task and bumps the counter to 1, and only then does the thread pool perhaps start more threads to consume tasks faster.
To get a better picture of what's happening, instead of printing "True", collect the values observed by return counter++: let each task run to completion, store its value in the Task's .Result, wait for all of them to stop, then collect the .Results and write a histogram of those values. Even if you don't see 5 zeros, maybe you will see 3 ones, 7 twos, 2 threes and so on.
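A sketch of that idea (my names, not the original code; it assumes a hypothetical method on StupidObject, public int Observe() => counter++;, and the usual System.Linq/System.Threading.Tasks usings):
var tasks = new Task<int>[100];
for (int i = 0; i < 100; i++)
{
    tasks[i] = Task.Run(async () =>
    {
        await StupidObject.semaphore.WaitAsync();
        return sharedObj.Observe(); // hypothetical: returns the pre-increment value of counter++
    });
}

StupidObject.semaphore.Release(100);
Task.WaitAll(tasks);

// Group identical observed values and count them: the histogram.
foreach (var group in tasks.GroupBy(t => t.Result).OrderBy(g => g.Key))
    Console.WriteLine("value {0} observed {1} time(s)", group.Key, group.Count());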
I tried to create a multithreaded dice-rolling simulation - just for curiosity, the joy of multithreaded programming, and to show others the effect of "random results" (many people can't understand that if you roll a Laplace die six times in a row and you already had 1, 2, 3, 4, 5, the next roll is NOT necessarily a 6). To show them the distribution of n rolls with m dice I created this code.
Well, the result is fine, BUT even though I create a new task for each die, the program runs single-threaded.
Multithreading would be reasonable to simulate "millions" of rerolls with 6 or more dice, as the time to finish grows rapidly.
I read several examples from MSDN that all indicate there should be several tasks running simultaneously.
Can someone please give me a hint why this code does not utilize many threads/cores? (Not even when I try to run it for 400 dice at once and 10 million rerolls.)
At first I initialize the jagged array that stores the results. First dimension: one entry per die; the second dimension will be the distribution of eyes rolled with each die.
Next I create an array of tasks that each return an array of results (the second dimension, as described above).
Each of these arrays has 6 entries that represent each side of a Laplace W6 die. If a roll results in 1 eye, the first entry [0] is increased by +1. So you can visualize how often each value has been rolled.
Then I use a plain for-loop to start all tasks. Nothing there should wait for one task before all are started.
At the end I wait for all to finish and sum up the results. It does not make any difference if I change
Task.WaitAll(tasks); to
Task.WhenAll(tasks);
Again my question: Why doesn't this code utilize more than one core of my CPU? What do I have to change?
Thanks in advance!
Here's the code:
private void buttonStart_Click(object sender, RoutedEventArgs e)
{
    int tries = 1000000;
    int numberofdice = 20;
    int numberofsides = 6; // W6 = 6
    var rnd = new Random();

    int[][] StoreResult = new int[numberofdice][];
    for (int i = 0; i < numberofdice; i++)
    {
        StoreResult[i] = new int[numberofsides];
    }

    Task<int[]>[] tasks = new Task<int[]>[numberofdice];
    for (int ctr = 0; ctr < numberofdice; ctr++)
    {
        tasks[ctr] = Task.Run(() =>
        {
            int newValue = 0;
            int[] StoreTemp = new int[numberofsides]; // Array that represents how often each value appeared
            for (int i = 1; i <= tries; i++) // how often to roll the dice
            {
                newValue = rnd.Next(1, numberofsides + 1); // Roll the dice; UpperLimit for random int EXCLUDED from range
                StoreTemp[newValue - 1] = StoreTemp[newValue - 1] + 1; // increases value corresponding to eyes on dice by +1
            }
            return StoreTemp;
        });
        StoreResult[ctr] = tasks[ctr].Result; // Summing up the individual results for each dice in an array
    }
    Task.WaitAll(tasks);
    // do something to visualize the results - not important for the question
}
The issue here is tasks[ctr].Result. The .Result portion itself waits for the function to complete before storing the resulting int array into StoreResult. Instead, make a new loop after Task.WaitAll to get your results.
You may consider using a Parallel.ForEach loop instead of manually creating separate tasks for this.
As others have indicated, when you try to aggregate the results inside the loop you just end up waiting for each individual task to finish, so this isn't actually multi-threaded.
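A sketch of the restructuring suggested above (move the .Result reads to after Task.WaitAll), keeping the question's variable names; the per-task Random is my addition, for the reasons given in the next answer:
// Start all tasks first; do NOT touch .Result inside this loop,
// because .Result blocks until that particular task has finished.
for (int ctr = 0; ctr < numberofdice; ctr++)
{
    tasks[ctr] = Task.Run(() =>
    {
        // One Random per task; sharing one instance is not thread-safe
        // (see below). The seeding strategy here is just illustrative.
        var localRnd = new Random(Guid.NewGuid().GetHashCode());
        int[] storeTemp = new int[numberofsides];
        for (int i = 0; i < tries; i++)
            storeTemp[localRnd.Next(1, numberofsides + 1) - 1]++;
        return storeTemp;
    });
}

Task.WaitAll(tasks); // let all dice roll in parallel...

for (int ctr = 0; ctr < numberofdice; ctr++)
    StoreResult[ctr] = tasks[ctr].Result; // ...then collect: the tasks are done, so no blocking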
Very important note: The C# random number generator is not thread safe (see also this MSDN blog post for discussion on the topic). Don't share the same instance between multiple threads. From the documentation:
...Random objects are not thread safe. If your app calls Random
methods from multiple threads, you must use a synchronization object
to ensure that only one thread can access the random number generator
at a time. If you don't ensure that the Random object is accessed in a
thread-safe way, calls to methods that return random numbers return 0.
Also, just to be nit-picky, using a Task is not really the same thing as doing multithreading; while you are, in fact, doing multithreading here, it's also possible to do purely asynchronous, non-multithreaded code with async/await. This is used mostly for I/O-bound operations where it's largely pointless to create a separate thread just to wait for a result (but it's desirable to allow the calling thread to do other work while it's waiting for the result).
I don't think you should have to worry about thread safety while assigning to the main array (assuming that each thread is assigning only to a specific index in the array and that no one else is assigning to the same memory location); you only have to worry about locking when multiple threads are accessing/modifying shared mutable state at the same time. If I'm reading this correctly, this is mutable state (but it's not shared mutable state).
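One common way to give each thread its own generator is ThreadLocal<T>; a sketch (fields at class scope; the seeding strategy is just one option):
// A per-thread Random: each thread lazily gets its own instance,
// seeded uniquely so parallel threads don't produce identical sequences.
private static int seed = Environment.TickCount;

private static readonly ThreadLocal<Random> threadRnd =
    new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref seed)));

// Usage inside any task or Parallel.For body:
// int roll = threadRnd.Value.Next(1, numberofsides + 1);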
I have been rewriting some process-intensive looping to use TPL to increase speed. This is the first time I have tried threading, so I want to check whether what I am doing is the correct way to do it.
The results are good - processing the data from 1000 rows in a DataTable has reduced processing time from 34 minutes to 9 minutes when moving from a standard foreach loop to a Parallel.ForEach loop. For this test, I removed non-thread-safe operations, such as writing data to a log file and incrementing a counter.
I still need to write back to a log file and increment a counter, so I tried implementing a lock which encases the streamwriter/increment code block.
FileStream filestream = new FileStream("path_to_file.txt", FileMode.Create);
StreamWriter streamwriter = new StreamWriter(filestream);
streamwriter.AutoFlush = true;

try
{
    object locker = new object();

    // Lets assume we have a DataTable containing 1000 rows of data.
    DataTable datatable_results;

    if (datatable_results.Rows.Count > 0)
    {
        int row_counter = 0;

        Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
        {
            // Process data_row as normal.
            // When ready to write to log, do so.
            lock (locker)
            {
                row_counter++;
                streamwriter.WriteLine("Processing row: {0}", row_counter);
                // Write any data we want to log.
            }
        });
    }
}
catch (Exception e)
{
    // Catch the exception.
}

streamwriter.Close();
The above seems to work as expected, with minimal performance cost (still 9 minutes execution time). Granted, the actions contained in the lock are hardly significant in themselves - I assume that the longer the code inside the lock takes to run, the longer each thread is blocked, and the more it affects total processing time.
My question: is the above an efficient way of doing this or is there a different way of achieving the above that is either faster or safer?
Also, let's say our original DataTable actually contains 30000 rows. Is there anything to be gained by splitting this DataTable into chunks of 1000 rows each and then processing them in the Parallel.ForEach, instead of processing all 30000 rows in one go?
Writing to the file is expensive, and you're holding an exclusive lock while writing to it; that's bad, because it introduces contention.
You could add the messages to a buffer instead, then write them to the file all at once. That should remove the contention and provide a way to scale.
if (datatable_results.Rows.Count > 0)
{
    ConcurrentQueue<string> buffer = new ConcurrentQueue<string>();

    Parallel.ForEach(datatable_results.AsEnumerable(), (data_row, state, index) =>
    {
        // Process data_row as normal.
        // When ready to write to log, do so.
        buffer.Enqueue(string.Format("Processing row: {0}", index));
    });

    streamwriter.AutoFlush = false;
    string line;
    while (buffer.TryDequeue(out line))
    {
        streamwriter.WriteLine(line);
    }
    streamwriter.Flush(); // Flush once when needed
}
Note that you don't need to maintain a loop counter: Parallel.ForEach provides you one. The difference is that it is an index, not a counter. If that changes the expected behavior, you can still add the counter back and use Interlocked.Increment to increment it.
I see that you're using streamwriter.AutoFlush = true; that will hurt performance. You can set it to false and flush once you're done writing all the data.
If possible, wrap the StreamWriter in a using statement, so that you don't even need to flush the stream (you get it for free).
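For instance, the writer from the question could be declared like this (a sketch):
// Disposing the StreamWriter flushes it and closes the underlying
// FileStream, even if an exception is thrown inside the block.
using (var filestream = new FileStream("path_to_file.txt", FileMode.Create))
using (var streamwriter = new StreamWriter(filestream))
{
    // ... run the Parallel.ForEach and drain the buffer here ...
}   // flushed and closed automatically here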
Alternatively, you could look at logging frameworks, which do this job pretty well. Examples: NLog, log4net, etc.
You may try to improve on this if you avoid logging entirely, or log only into a thread-specific log file (not sure if that makes sense for you).
The TPL starts as many threads as you have cores (see: Does Parallel.ForEach limits the number of active threads?).
So what you can do is:
1) Get the number of cores on the target machine
2) Create a list of counters, with as many elements as you have cores
3) Have every core update its own counter
4) Sum them all up after the parallel execution terminates.
So, in practice:
// KEY: THREAD ID, VALUE: THREAD-LOCAL COUNTER
Dictionary<int, int> counters = new Dictionary<int, int>(NUMBER_OF_CORES);
....
Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
{
    // Process data_row as normal.
    // When ready to write to log, do so.
    //lock (locker) // NO NEED FOR A LOCK: EVERY THREAD UPDATES ITS _OWN_ COUNTER
    //{
    //    row_counter++;
    counters[Thread.CurrentThread.ManagedThreadId] += 1;
    // NO WRITING, OR WRITE TO A THREAD-SPECIFIC FILE ONLY
    //    streamwriter.WriteLine("Processing row: {0}", row_counter);
    //}
});
....
// AFTER THE PARALLEL LOOP FINISHES, SUM ALL COUNTERS TO GET THE TOTAL ACROSS ALL THREADS.
The benefit of this is that no locking is involved at all, which will dramatically improve performance. When you use the .NET concurrent collections, they always use some kind of synchronization (locking or interlocked operations) inside.
This is naturally a basic idea and may not work as expected if you copy-paste it. We are talking about multithreading, which is always a hard topic. But hopefully it gives you some ideas to rely on.
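For what it's worth, Parallel.ForEach has a built-in overload for exactly this per-thread-accumulator idea (localInit / body / localFinally), which avoids both the lock and the Dictionary bookkeeping. A sketch against the question's setup:
int totalRows = 0;

Parallel.ForEach(
    datatable_results.AsEnumerable(),
    () => 0,                          // localInit: each worker thread starts its own counter
    (data_row, state, localCount) =>
    {
        // Process data_row as normal.
        return localCount + 1;        // bump this thread's private counter; no lock needed
    },
    localCount => Interlocked.Add(ref totalRows, localCount)); // merge each thread's total once

Console.WriteLine("Processed {0} rows", totalRows);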
First of all, it takes about 2 seconds to process a row in your table and perhaps a few milliseconds to increment the counter and write to the log file. With the actual processing taking 1000x longer than the part you need to serialize, the exact method doesn't matter too much.
Furthermore, the way you have implemented it is perfectly solid. There are ways to optimize it, but none that are worth implementing in your situation.
One useful way to avoid locking on the increment is to use Interlocked.Increment. It is a bit slower than x++ but much faster than lock {x++;}. In your case, though, it doesn't matter.
As for the file output, remember that the output is going to be serialized anyway, so at best you can minimize the amount of time spent in the lock. You can do this by buffering all of your output before entering the lock, then just perform the write operation inside the lock. You probably want to do async writes to avoid unnecessary blocking on I/O.
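Sketched against the question's code (locker and streamwriter as defined there), the combination might look like:
int row_counter = 0;

Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
{
    // Process data_row as normal.
    int myNumber = Interlocked.Increment(ref row_counter); // atomic; no lock for the counter

    // Build the complete log text OUTSIDE the lock...
    string logLine = string.Format("Processing row: {0}", myNumber);

    // ...and hold the lock only for the write itself.
    lock (locker)
    {
        streamwriter.WriteLine(logLine);
    }
});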
You can transfer the parallel code into a new method. For example:
// Class scope
private string GetLogRecord(int rowCounter, DataRow row)
{
    return string.Format("Processing row: {0}", rowCounter); // Write any data we want to log.
}

//....

Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
{
    // Process data_row as normal.
    // When ready to write to log, do so.
    int counter;
    lock (locker)
        counter = ++row_counter; // capture the incremented value while still holding the lock
    var logRecord = GetLogRecord(counter, data_row);
    lock (locker)
        streamwriter.WriteLine(logRecord);
});
This is my code that uses a parallel for. The concept is similar, and perhaps easier for you to implement. FYI, for debugging I keep a regular for loop in the code and conditionally compile the parallel code. Hope this helps. Note that the value of i in this scenario isn't the same as the number of records processed. You could create a counter, use a lock, and add to it. In my other code, where I do have a counter, I didn't use a lock and just allowed the value to be potentially off, to avoid the slower code; I have a status mechanism to indicate the number of records processed, and the slight chance that the count is off is not an issue - at the end of the loop I put out a message saying all the records have been processed.
#if DEBUG
for (int i = 0; i < stend.PBBIBuckets.Count; i++)
{
    //int serverIndex = 0;
#else
ParallelOptions options = new ParallelOptions();
options.MaxDegreeOfParallelism = m_maxThreads;
Parallel.For(0, stend.PBBIBuckets.Count, options, (i) =>
{
#endif
    g1client.Message request;
    DataTable requestTable;

    request = new g1client.Message();
    requestTable = request.GetDataTable();

    requestTable.Columns.AddRange(
        Locations.Columns.Cast<DataColumn>()
                 .Select(x => new DataColumn(x.ColumnName, x.DataType))
                 .ToArray());

    FillPBBIRequestTables(requestTable, request, stend.PBBIBuckets[i], stend.BucketLen[i], stend.Hierarchies);
#if DEBUG
}
#else
});
#endif
I am currently reading this excellent article on threading and read the following text:
Thread.Sleep(0) relinquishes the thread’s current time slice immediately, voluntarily handing over the CPU to other threads.
I wanted to test this and below is my test code:
static string s = "";

static void Main(string[] args)
{
    // Create two threads that append to string s
    Thread threadPoints = new Thread(SetPoints);
    Thread threadNewLines = new Thread(SetNewLines);

    // Start threads
    threadPoints.Start();
    threadNewLines.Start();

    // Wait one second for the threads to manipulate string s
    Thread.Sleep(1000);

    // The threads have an infinite loop so we have to close them forcefully.
    threadPoints.Abort();
    threadNewLines.Abort();

    // Print string s and wait for user input
    Console.WriteLine(s);
    Console.ReadKey();
}
The functions that threadPoints and threadNewLines run:
static void SetPoints()
{
    while (true)
    {
        s += ".";
    }
}

static void SetNewLines()
{
    while (true)
    {
        s += "\n";
        Thread.Sleep(0);
    }
}
If I understand Thread.Sleep(0) correctly, the output should be something like this:
............ |
.............. |
................ | <- End of console
.......... |
............. |
............... |
But I get this as output:
....................|
....................|
.... |
|
|
....................|
....................|
................. |
|
Seeing as the article mentioned in the beginning of the post is highly recommended by many programmers, I can only assume that my understanding of Thread.Sleep(0) is wrong. So if someone could clarify, I'd be much obliged.
What Thread.Sleep(0) does is free the CPU to handle other threads, but that doesn't mean that another thread couldn't be the current one. If you're trying to hand execution to another specific thread, try using some sort of signal instead.
If you have access to a machine (or perhaps a VM) with only a single core/processor, try running your code on that machine. You may be surprised with how the results vary. Just because two threads refer to the same variable "s", does not mean they actually refer to the same value at the same time, due to various levels of caching that can occur on modern multi-core (and even just parallel pipeline) CPUs. If you want to see how the yielding works irrespective of the caching issues, try wrapping each s += expression inside a lock statement.
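A sketch of that suggestion applied to one of the worker methods (sLock is my addition; the "\n" append in SetNewLines would take the same lock):
static readonly object sLock = new object(); // one shared lock object for both threads

static void SetPoints()
{
    while (true)
    {
        lock (sLock)
        {
            s += "."; // the read-modify-write of s now happens atomically
        }
    }
}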
If you expanded the width of the console to be 5 times larger than it currently is, you'd see what you expect: lines not reaching the console width. The problem is that one time slice is actually very long. So, to get the expected effect with a normal console width, you'd have to slow down the Points thread, but without using Sleep. Instead of the while (true) loop, try this:
for (int i = 0; ; i++)
{
    if (i % 10 == 0)
        s += ".";
}
To slow the thread down even more, replace the number 10 with a bigger one.
The next thread the processor handles is effectively random; it could even be the same thread that just called Thread.Sleep(0). To make sure the next thread is a different one, you can call Thread.Yield() and check its return value: true is returned if the OS has another thread that can run, false otherwise.
You should (almost) never abort threads. The best practice is to signal them to die (commit suicide).
This is normally accomplished by setting some boolean variable whose value the threads inspect to decide whether or not to continue executing.
You are appending to a string variable named s; that read-modify-write is not thread-safe, so you will incur race conditions. You can wrap the operations that manipulate it in a lock, or use a built-in type that is thread-safe.
Always pay attention, in the documentation, to know if the types you use are thread-safe.
Because of this, your program is not thread-safe and you can't rely on its output.
If you run the program several times my guess is that you'll get different outputs.
Note: When using a boolean to signal threads to cancel, make sure it is marked volatile; otherwise the JIT might optimize the code and never re-read its changed value.
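A minimal sketch of that signal-to-die pattern applied to the question's code (keepRunning is my addition; a CancellationToken would be the more modern equivalent):
static volatile bool keepRunning = true; // volatile: the JIT must re-read it every iteration

static void SetPoints()
{
    while (keepRunning)
    {
        s += ".";
    }
}

// In Main, instead of calling Abort():
keepRunning = false;   // signal both threads to finish their current iteration
threadPoints.Join();   // then wait for them to exit on their own
threadNewLines.Join();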
I have a static array variable which is being shared by two threads.
I'd like to understand what happens if I assign the array variable to another array object in Thread1 while Thread2 is iterating over the array.
I.e., in Thread 1:
MyStaticClass.MyArray = SomeOtherArray;
While in Thread 2:
for (int i = 0; i < MyStaticClass.MyArray.Length; i++)
{
    // do something with the i'th element
}
Assuming MyStaticClass.MyArray is just a field or simple property.
Nothing good or predictable will happen, that's for sure.
I'd say the most likely outcomes are:
The loop may read one half from the old array and the rest from the new array.
The new array may be shorter than the old one, giving an exception when you access [i].
Or Thread 2 may actually completely ignore the change to the array! And what's worse, this behaviour may differ between Release and Debug builds.
The last situation is due to compiler optimisations and/or the way the memory model works in .NET (and other languages like Java, BTW). There's a whole keyword to get around this issue (volatile): http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/
To prevent this, you want to use a lock around the critical section. Basically, wrapping your iteration in a lock will prevent the other thread from overwriting the array while you are processing it.
The lock keyword ensures that one thread does not enter a critical section of code while another thread is in the critical section. If another thread tries to enter a locked code, it will wait, block, until the object is released.
https://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
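For example (arrayLock is my addition; both threads must use the same lock object for this to work):
static readonly object arrayLock = new object();

// Thread 1: swap the array only while holding the lock.
lock (arrayLock)
{
    MyStaticClass.MyArray = SomeOtherArray;
}

// Thread 2: iterate while holding the same lock, so the reference
// cannot change mid-loop.
lock (arrayLock)
{
    for (int i = 0; i < MyStaticClass.MyArray.Length; i++)
    {
        // do something with the i'th element
    }
}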
The condition in your for loop is evaluated for every iteration. So if another thread changes the reference in MyStaticClass.MyArray, you use this new reference in the next iteration.
For example, this code:
int[] a = {1, 2, 3, 4, 5};
int[] b = {10, 20, 30, 40, 50, 60, 70};

for (int i = 0; i < a.Length; i++)
{
    Console.WriteLine(a[i]);
    a = b;
}
gives this output:
1
20
30
40
50
60
70
To avoid this, you could use foreach:
foreach (int c in a)
{
    Console.WriteLine(c);
    a = b;
}
gives:
1
2
3
4
5
because foreach is translated so that it calls a.GetEnumerator() once and then uses this enumerator (MoveNext() and Current) for all iterations, not accessing a again.
There will be no consequences other than thread 2 at some point starting to work on the new array instead of the old. When exactly this happens is more or less random, and as weston writes in his comment, it may never even happen at all.
If the new array has fewer elements than the old one this can (but doesn't have to) cause a runtime exception.