Thread-safe collection for rotating number reading - c#

I'm wondering if there's an inbuilt collection (or any way to make a custom one) in C# that can be used to emit numbers in a rotating/cyclic fashion (see example below), and is thread-safe (so each thread gets the next number in the collection).
Collection with 5 sequential numbers:
Thread 1 read: return value 1
Thread 2 read: return value 2
Thread 3 read: return value 3
Thread 4 read: return value 4
Thread 5 read: return value 5
Thread 6 read: return value 1
Thread 7 read: return value 2
Thread 8 read: return value 3
and so on.
Basically, the next number emitted by the collection (when read by a thread) should be one after the previous one, and it should restart from the beginning at the end of the number set.

You can create a method that returns an infinite enumerable:
private object _lock = new object();
private int i = 0;
private int max = 6;

public IEnumerable<int> GetNumbers()
{
    while (true)
    {
        int value;
        lock (_lock)
        {
            i++;
            if (i == max)
                i = 1;
            value = i;
        }
        // Yield outside the lock so the monitor is not held while the iterator is suspended.
        yield return value;
    }
}
And get them:
var numbers = GetNumbers().Take(1000).ToArray();
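If you would rather not hold a lock at all, a compare-and-swap loop gives the same rotating behaviour. This is only a sketch (the RoundRobin class name and the fixed int[] constructor are my own, not part of the answer above):

using System.Threading;

public class RoundRobin
{
    private readonly int[] _values;
    private int _index = -1; // the first update moves it to 0

    public RoundRobin(int[] values)
    {
        _values = values;
    }

    public int Next()
    {
        while (true)
        {
            int current = Volatile.Read(ref _index);
            int next = (current + 1) % _values.Length;
            // Only one thread wins the swap; losers retry with the fresh value.
            if (Interlocked.CompareExchange(ref _index, next, current) == current)
                return _values[next];
        }
    }
}

With var rr = new RoundRobin(new[] { 1, 2, 3, 4, 5 }), each call to rr.Next() from any thread should return the next value in the cycle without blocking.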

Related

Why does List<>.Add from multiple threads lead to varying results? [duplicate]

I was wondering whether List<T> is thread-safe; I read that several readers are no problem, but that more than one writer may cause issues. So I wrote the following test to see what actually happens.
[TestClass]
public class ListConcurrency
{
    [TestMethod]
    public void MultipleWritersTest()
    {
        var taskCnt = 10;
        var addCnt = 100;
        var list = new List<object>();
        var tasks = new List<Task>();
        for (int i = 0; i < taskCnt; i++)
        {
            var iq = i;
            tasks.Add(Task.Run(() =>
            {
                Console.WriteLine("STARTING : " + iq);
                for (int j = 0; j < addCnt; j++)
                {
                    try
                    {
                        list.Add(new object());
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine(e);
                    }
                }
                Console.WriteLine("FINISHING: " + iq);
            }));
        }
        Task.WhenAll(tasks).Wait();
        Console.WriteLine("FINISHED: " + list.Count);
    }
}
And here is an example output:
STARTING : 0
FINISHING: 0
STARTING : 1
FINISHING: 1
STARTING : 8
STARTING : 9
FINISHING: 9
FINISHING: 8
STARTING : 2
FINISHING: 2
STARTING : 7
STARTING : 3
FINISHING: 3
FINISHING: 7
STARTING : 4
FINISHING: 4
STARTING : 6
FINISHING: 6
STARTING : 5
FINISHING: 5
FINISHED: 979
I was surprised by two things:
Running the test several times shows that sometimes the resulting list count is not the expected 1000 (=10 x 100), but less.
No exceptions occur during adding.
If both happened (exceptions and a wrong item count) it would make sense... Is this simply the way List<T> demonstrates its non-thread-safety?
EDIT: My opening line was badly phrased. I know that List<T> is not thread-safe (e.g. for iterating), but I wanted to see what happens if it is "abused" in this way. As I wrote in a comment below, the results (that no exceptions will be thrown) may be useful for others when debugging.
If you check the source code of List<T> you will see that internally it operates on an array. The Add method expands the array if necessary and inserts the new item:
// Adds the given object to the end of this list. The size of the list is
// increased by one. If required, the capacity of the list is doubled
// before adding the new element.
//
public void Add(T item)
{
    if (_size == _items.Length) EnsureCapacity(_size + 1);
    _items[_size++] = item;
    _version++;
}
Now imagine you have an array with a size of 10 and 2 threads insert at the same time: both expand the array, both compute the same index from _size, and one thread's item overwrites the other's. That's why you get a list count of 11 instead of 12 and lose one item.
Okay, let's look at Add[1] and consider what happens when multiple threads are accessing it:
public void Add(T item) {
    if (_size == _items.Length) EnsureCapacity(_size + 1);
    _items[_size++] = item;
    _version++;
}
Looks alright. But let's consider what happens for a particularly unlucky thread that has already gone past line 1 of this code before another thread manages to execute line 2, resulting in _size becoming equal to _items.Length. Our unlucky thread is now going to walk off the end of the _items array and throw an exception.
So, despite your "proof" that it won't throw an exception, I found an obvious race that would lead to one after about 2 minutes of inspecting the code.
[1] Code taken from the reference source, which of course means that it may not be exactly the same as the code that is actually running, because the developers are free to change their implementations, only respecting documented guarantees.
List<T> isn't thread-safe and you'll need to work with a different collection. You should try using ConcurrentBag<T> or any of the other collection types specified here in the documentation.
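For reference, a minimal sketch of the two usual fixes for the test above (variable names are illustrative, assuming the same taskCnt/addCnt loop as in the question): either serialize every Add with a lock, or switch to a collection designed for concurrent writers.

using System.Collections.Concurrent;
using System.Collections.Generic;

// Option 1: keep List<T> but take a lock around every Add.
var list = new List<object>();
var gate = new object();
lock (gate)
{
    list.Add(new object());
}

// Option 2: use ConcurrentBag<T> (unordered, but safe for many writers).
var bag = new ConcurrentBag<object>();
bag.Add(new object());

With either option the final count should come out to the expected taskCnt * addCnt.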

LINQ query and sub-query enumeration count in C#?

Suppose I have this query:
int[] Numbers = new int[5] { 5, 2, 3, 4, 5 };
var query = from a in Numbers
            where a == Numbers.Max(n => n) // notice Max: it should also get its value somehow
            select a;
foreach (var element in query)
    Console.WriteLine(element);
How many times is Numbers enumerated when running the foreach?
How can I test it (I mean, write code which tells me the number of iterations)?
It will be iterated 6 times. Once for the Where and once per element for the Max.
The code to demonstrate this:
private static int count = 0;
public static IEnumerable<int> Regurgitate(IEnumerable<int> source)
{
    count++;
    Console.WriteLine("Iterated sequence {0} times", count);
    foreach (int i in source)
        yield return i;
}
int[] Numbers = new int[5] { 5, 2, 3, 4, 5 };
IEnumerable<int> sequence = Regurgitate(Numbers);
var query = from a in sequence
            where a == sequence.Max(n => n)
            select a;
It will print "Iterated sequence 6 times".
We could make a more general purpose wrapper that is more flexible, if you're planning to use this to experiment with other cases:
using System.Collections;
using System.Collections.Generic;

public class EnumerableWrapper<T> : IEnumerable<T>
{
    private IEnumerable<T> source;

    public EnumerableWrapper(IEnumerable<T> source)
    {
        this.source = source;
    }

    public int IterationsStarted { get; private set; }
    public int NumMoveNexts { get; private set; }
    public int IterationsFinished { get; private set; }

    public IEnumerator<T> GetEnumerator()
    {
        IterationsStarted++;
        foreach (T item in source)
        {
            NumMoveNexts++;
            yield return item;
        }
        IterationsFinished++;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public override string ToString()
    {
        return string.Format(
            @"Iterations Started: {0}
Iterations Finished: {1}
Number of move next calls: {2}",
            IterationsStarted, IterationsFinished, NumMoveNexts);
    }
}
This has several advantages over the other function:
It records the number of iterations started, the number of iterations completed, and the total number of times the sequence was advanced.
You can create different instances to wrap different underlying sequences, thus allowing you to inspect multiple sequences per program, instead of just one when using a static variable.
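For example, a quick usage sketch (assuming the EnumerableWrapper<T> class above) that reproduces the original query:

int[] numbers = { 5, 2, 3, 4, 5 };
var wrapped = new EnumerableWrapper<int>(numbers);

var query = from a in wrapped
            where a == wrapped.Max(n => n)
            select a;

foreach (var element in query)
    Console.WriteLine(element);

// Prints the started/finished counts and the total number of MoveNext calls.
Console.WriteLine(wrapped);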
Here is how you can get a quick count of the number of times the collection is enumerated: wrap your collection in a CountedEnum<T> and increment a counter on each yield return, like this --
static int counter = 0;
public static IEnumerable<T> CountedEnum<T>(IEnumerable<T> ee)
{
    foreach (var e in ee)
    {
        counter++;
        yield return e;
    }
}
Then change your array declaration to this,
var Numbers = CountedEnum(new int[5] { 5, 2, 3, 4, 5 });
run your query, and print the counter. For your query, the code prints 30 (link to ideone), meaning that your collection of five items has been enumerated six times.
Here is how you can check the count
void Main()
{
    var Numbers = new int[5] { 5, 2, 3, 4, 5 }.Select(n =>
    {
        Console.Write(n);
        return n;
    });
    var query = from a in Numbers
                where a == Numbers.Max(n => n)
                select a;
    foreach (var element in query)
    {
        var v = element;
    }
}
Here is the output:
5 5 2 3 4 5 2 5 2 3 4 5 3 5 2 3 4 5 4 5 2 3 4 5 5 5 2 3 4 5
The number of iterations has to be equal to query.Count(), i.e. to the count of the elements in the result of the query.
If you're asking about something else, please clarify.
EDIT
After clarification:
If you're asking about the total number of iterations in the code provided, there will be 7 iterations (for this concrete case).
var query = from a in Numbers
            where a == Numbers.Max(n => n) // 5 iterations to find the max among 5 elements
            select a;
and
foreach (var element in query)
    Console.WriteLine(element); // 2 iterations over the resulting collection (in this question)
How many times is Numbers enumerated when running the foreach
Loosely speaking, your code is morally equivalent to:
foreach (int a in Numbers)
{
    // 1. I've gotten rid of the unnecessary identity lambda.
    // 2. Note that Max works by enumerating the entire source.
    var max = Numbers.Max();
    if (a == max)
        Console.WriteLine(a);
}
So we enumerate the following times:
One enumeration of the sequence for the outer loop (1).
One enumeration of the sequence for each of its members (Count).
So in total, we enumerate Count + 1 times.
You could bring this down to 2 by hoisting the Max query outside the loop by introducing a local.
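A sketch of that hoisting, using the same Numbers array as in the question:

var max = Numbers.Max();   // one full enumeration to find the maximum
foreach (int a in Numbers) // second (and last) enumeration
{
    if (a == max)
        Console.WriteLine(a);
}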
How can I test it (I mean, write code which tells me the number of iterations)?
This wouldn't be easy with a raw array. But you could write your own enumerable implementation (that perhaps wrapped an array) and add some instrumentation to the GetEnumerator method. Or if you want to go deeper, go the whole hog and write a custom enumerator with instrumentation on MoveNext and Current as well.
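A sketch of that "whole hog" option, with instrumentation on MoveNext and Current (the CountingEnumerator name and its counters are mine, purely illustrative):

using System.Collections;
using System.Collections.Generic;

public class CountingEnumerator<T> : IEnumerator<T>
{
    private readonly IEnumerator<T> _inner;

    public CountingEnumerator(IEnumerator<T> inner) { _inner = inner; }

    public int MoveNextCalls { get; private set; }
    public int CurrentCalls { get; private set; }

    public bool MoveNext() { MoveNextCalls++; return _inner.MoveNext(); }
    public T Current { get { CurrentCalls++; return _inner.Current; } }
    object IEnumerator.Current { get { return Current; } }
    public void Reset() { _inner.Reset(); }
    public void Dispose() { _inner.Dispose(); }
}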
Count via public property also yields 6.
private static int ncount = 0;
private int[] numbers = new int[5] { 5, 2, 3, 4, 5 };
public int[] Numbers
{
    get
    {
        ncount++;
        Debug.WriteLine("Numbers Get " + ncount.ToString());
        return numbers;
    }
}
This brings the count down to 2.
Makes sense but I would not have thought of it.
int nmax = Numbers.Max(n => n);
var query = from a in Numbers
            where a == nmax // notice Max: it gets its value from the line above
            //where a == Numbers.Max(n => n)
            select a;
Define and initialize a count variable outside the foreach loop and increment it with count++ inside the loop to get the number of iterations.

Change the priority in a custom priority queue

I followed the directions given in this question (the answer by Jason) in order to write my PriorityQueue<T> using a SortedList. I understand that the count field within this class is used to ensure unique priorities and to preserve the enqueue order among the same priority.
However, when count reaches its maximum value and I add 1 to it, it will start again from 0, so the priorities of subsequent items would be higher than the priorities of the previous items. With this approach I would need a way to "securely" reset the counter count... In fact, suppose the queue has the following state (in the format priority | count | item):
0 | 123 | A
0 | 345 | B
1 | 234 | C
2 | 200 | D
Now suppose the counter limit has been reached, so I have to reset it to 0: as a consequence, the next inserted item will have counter 0. For example, if I insert an element with priority equal to 1, it will be wrongly inserted before 1 | 234 | C:
0 | 123 | A
0 | 345 | B
1 | 000 | new element
1 | 234 | C
2 | 200 | D
The priority problem can be solved by implementing a heap: I created a Heap class, then I used Heap<KeyValuePair<TPriority, TElement>> and a custom PriorityComparer in order to sort elements by TPriority.
Given TPriority as an int and TElement as a string, the PriorityComparer is as follows:
public class PriorityComparer : IComparer<KeyValuePair<int, string>>
{
    public int Compare(KeyValuePair<int, string> x, KeyValuePair<int, string> y)
    {
        return x.Key.CompareTo(y.Key);
    }
}
...
int capacity = 10;
Heap<KeyValuePair<int, string>> queue;
queue = new Heap<KeyValuePair<int, string>>(capacity, new PriorityComparer());
...
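One way to keep FIFO order among equal priorities without worrying about counter wrap-around is to break priority ties with a long insertion counter inside the comparer. This is only a sketch (the Entry struct and EntryComparer are my own names; it assumes the heap stores such entries rather than a bare KeyValuePair):

using System.Collections.Generic;

public struct Entry<TElement>
{
    public int Priority;
    public long Sequence;   // monotonically increasing insertion counter
    public TElement Item;
}

public class EntryComparer<TElement> : IComparer<Entry<TElement>>
{
    public int Compare(Entry<TElement> x, Entry<TElement> y)
    {
        int byPriority = x.Priority.CompareTo(y.Priority);
        // Equal priorities fall back to insertion order, preserving FIFO.
        return byPriority != 0 ? byPriority : x.Sequence.CompareTo(y.Sequence);
    }
}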
UPDATE
In this way (using the PriorityComparer), I have succeeded in implementing a priority queue.
Now I'd like to add support for modifying its behavior at runtime, i.e. switching from FIFO to priority sorting and vice versa. Since my priority queue implementation has an IComparer field, I think it is sufficient to add a Comparer property to edit this field, as follows:
public IComparer<KeyValuePair<TPriority, TElement>> Comparer
{
    set
    {
        this._comparer = value;
    }
}
In the meantime I thought I'd take a different approach: instead of using a binary heap to manage priorities, I could wrap different queues (each queue refers to a given priority) as follows.
public class PriorityQueue<T>
{
    private Queue<T> _defaultQueue;
    private bool _priority;
    private SortedList<int, Queue<T>> _priorityQueues;

    public PriorityQueue(int capacity)
    {
        this._defaultQueue = new Queue<T>(capacity);
        this._priority = false;
        this._priorityQueues = new SortedList<int, Queue<T>>(0);
    }

    public void PriorityEnable()
    {
        this._priority = true;
    }

    public void PriorityDisable()
    {
        this._priority = false;
    }

    public void Enqueue(T item)
    {
        if (this._priority)
        {
            // enqueue to one of the queues
            // with associated priority
            // ...
        }
        else this._defaultQueue.Enqueue(item);
    }

    public T Dequeue()
    {
        if (this._priority)
        {
            // dequeue from one of the queues
            // with associated priority and
            // return
            // ...
        }
        return this._defaultQueue.Dequeue();
    }
}
How do I manage the transition from FIFO mode to priority mode when there are still elements in the default queue? I could copy them into the priority queues based on each item's priority... Are there better solutions?
How do I manage the transition from priority mode to FIFO mode? In this case I would have several priority queues which may still contain elements that no longer have to be managed according to priority, and whose original order of arrival is no longer known...
How can I manage the capacity of the different queues?
What about the performances of the above two solutions? Which does use more memory?
You could "cheat" and use BigInteger so you never "run out of numbers". This of course leads to gradual deterioration of performance over time, but probably not significant enough to matter.
Combine that with a heap-based priority queue and you are set!
Don't try to "switch from FIFO to priority sorting and vice-versa" - simply put elements in both data structures appropriate for the task (Queue and priority queue).
Using both Queue and Priority Queue is what I would do.
But if you must...
Instead of one key use 2 keys for an element.
The first key, priority, will be the priority.
The second key, time, will be a counter that acts like a timestamp.
For the regular behavior use the priority key.
When the heap is full, HEAPIFY it by the time key.
Then remove the n needed elements.
Now HEAPIFY it again with the priority key to return to the regular behavior.
EDIT: You have kind of changed what you are asking with your edits. You went from asking one question to doing a new approach and asking a new question. Should probably open a new question for your new approach, as this one is now confusing as to what answer/response is to what question/comment. I believe your original question about sorting equal priorities has been answered.
You could use a long to allow for more values. You will always reach an end eventually, so you would need to use a new pattern for unique values or 'recount' the items when the max is reached (loop through each and reset the unique count value).
Maybe use a GUID for each item instead?
Guid.NewGuid()
EDIT:
To add after your edit: if you want the new 1 to be placed after the existing ones, then in the Compare override return a greater-than result (1) when the values are equal (a small comparer sketch follows the list below). That way the following will happen:
1 > 0, return greater (1), continue
1 > 0, return greater (1), continue
1 == 1, return greater (1), continue
1 < 2, return less than (-1), insert
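A sketch of that Compare override for int priorities (illustrative; note that it deliberately never reports equality, which only makes sense for deciding an insertion point, not for general-purpose sorting):

using System.Collections.Generic;

public class InsertAfterEqualComparer : IComparer<int>
{
    public int Compare(int x, int y)
    {
        int result = x.CompareTo(y);
        // On a tie, report "greater" so the new element lands after existing ones.
        return result == 0 ? 1 : result;
    }
}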
EDIT 2:
If the second parameter is only meant to be a unique value, you could always use a string and set the value as numeric strings instead. That way you will never reach a cap; you would just have to parse the string accordingly. You can use leading alpha values that represent a new set.
I have not tested this code, just an idea as to what you could do.
static string leadingStr = "";
static char currentChar = 'a';
static Int32 currentId = Int32.MinValue;

static string getNextId()
{
    if (currentId >= Int32.MaxValue)
    {
        currentId = Int32.MinValue;
        if (currentChar >= 'z')
        {
            currentChar = 'a';
            leadingStr = leadingStr.Insert(0, "X");
        }
        else
            currentChar++;
    }
    else
        currentId++;
    return String.Format("{0}{1}-{2}", leadingStr, currentChar, currentId);
}
EDIT 3: Reset Values
static Int64 currentValue = Int64.MinValue;

static void AddItem(object item)
{
    if (currentValue == Int64.MaxValue)
        RecountItems();
    item.counter = currentValue++;
    SortedList.Add(item);
}

static void RecountItems()
{
    currentValue = 0;
    foreach (var item in SortedList)
    {
        item.counter = currentValue++;
    }
}
Edit 4: For your second question:
You could use a FIFO stack as you normally would, but also have a priority List that only stores the unique ID of the items. However you would then need to remove the item from the list every time you remove from the FIFO stack.
static Object RemoveNextFIFO()
{
    if (fifoList.Count > 0)
    {
        var removedItem = fifoList[0];
        fifoList.RemoveAt(0);
        RemoveItemFromPriority(removedItem);
        return removedItem;
    }
}

static void RemoveItemFromPriority(Object itemToRemove)
{
    foreach (var counter in priorityQueue)
    {
        if (counter == itemToRemove.counter)
        {
            priorityQueue.Remove(counter);
            break;
        }
    }
}

static Object RemoveFromFIFO(int itemCounter)
{
    foreach (var item in fifoList)
    {
        if (item.counter == itemCounter)
        {
            fifoList.Remove(item);
            return item;
        }
    }
}

static Object RemoveNextPriority()
{
    if (priorityQueue.Count > 0)
    {
        var counter = priorityQueue.Pop();
        return RemoveFromFIFO(counter);
    }
}

Why doesn't C# preserve the context for anonymous delegate calls?

I have the following method:
static Random rr = new Random();

static void DoAction(Action a)
{
    ThreadPool.QueueUserWorkItem(par =>
    {
        Thread.Sleep(rr.Next(200));
        a.Invoke();
    });
}
now I call this in a for loop like this:
for (int i = 0; i < 10; i++)
{
    var x = i;
    DoAction(() =>
    {
        Console.WriteLine(i); // scenario 1
        //Console.WriteLine(x); // scenario 2
    });
}
in scenario 1 the output is: 10 10 10 10 ... 10
in scenario 2 the output is: 2 6 5 8 4 ... 0 (random permutation of 0 to 9)
How do you explain this? Is C# not supposed to preserve variables (here i) for the anonymous delegate call?
The problem here is that there is one i variable and ten instances/copies of x. Each lambda gets a reference to the single variable i and to one of the instances of x. Every x is only written to once, so each lambda sees the single value that was written to the x it references.
The variable i is written to until it reaches 10. None of the lambdas run until the loop completes, so they all see the final value of i, which is 10.
I find this example is a bit clearer if you rewrite it as follows
int i = 0; // Single i for every iteration of the loop
while (i < 10)
{
    int x = i; // New x for every iteration of the loop
    DoAction(() =>
    {
        Console.WriteLine(i);
        Console.WriteLine(x);
    });
    i++;
}
DoAction spawns the thread, and returns right away. By the time the thread awakens from its random sleep, the loop will be finished, and the value of i will have advanced all the way to 10. The value of x, on the other hand, is captured and frozen before the call, so you will get all values from 0 to 9 in a random order, depending on how long each thread gets to sleep based on your random number generator.
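A variation that sidesteps the capture question entirely (not the poster's code, just a sketch) is to pass the loop value to the work item as its state argument, so the lambda never closes over i:

for (int i = 0; i < 10; i++)
{
    // The current value of i is boxed and handed to each work item as its own state.
    ThreadPool.QueueUserWorkItem(state => Console.WriteLine((int)state), i);
}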
I think you'll get the same result with Java or any object-oriented language (not sure, but it seems logical here).
The scope of i is the whole loop, while each iteration gets its own x.
ReSharper helps you to spot this kind of problem.

Can a C# blocking FIFO queue leak messages?

I'm working on an academic open source project and now I need to create a fast blocking FIFO queue in C#. My first implementation simply wrapped a synchronized queue (with dynamic expansion) within a reader's semaphore; then I decided to re-implement it in the following (theoretically faster) way:
public class FastFifoQueue<T> where T : class // reference type needed for Interlocked.Exchange with null
{
    private T[] _array;
    private int _head, _tail, _count;
    private readonly int _capacity;
    private readonly Semaphore _readSema, _writeSema;

    /// <summary>
    /// Initializes FastFifoQueue with the specified capacity
    /// </summary>
    /// <param name="size">Maximum number of elements to store</param>
    public FastFifoQueue(int size)
    {
        //Check if size is power of 2
        //Credit: http://stackoverflow.com/questions/600293/how-to-check-if-a-number-is-a-power-of-2
        if ((size & (size - 1)) != 0)
            throw new ArgumentOutOfRangeException("size", "Size must be a power of 2 for this queue to work");
        _capacity = size;
        _array = new T[size];
        _count = 0;
        _head = int.MinValue; //0 is the same!
        _tail = int.MinValue;
        _readSema = new Semaphore(0, _capacity);
        _writeSema = new Semaphore(_capacity, _capacity);
    }

    public void Enqueue(T item)
    {
        _writeSema.WaitOne();
        int index = Interlocked.Increment(ref _head);
        index %= _capacity;
        if (index < 0) index += _capacity;
        //_array[index] = item;
        Interlocked.Exchange(ref _array[index], item);
        Interlocked.Increment(ref _count);
        _readSema.Release();
    }

    public T Dequeue()
    {
        _readSema.WaitOne();
        int index = Interlocked.Increment(ref _tail);
        index %= _capacity;
        if (index < 0) index += _capacity;
        T ret = Interlocked.Exchange(ref _array[index], null);
        Interlocked.Decrement(ref _count);
        _writeSema.Release();
        return ret;
    }

    public int Count
    {
        get
        {
            return _count;
        }
    }
}
This is the classic FIFO queue implementation with a static array that we find in textbooks. It is designed to atomically increment the pointers, and since I can't make a pointer go back to zero when it reaches (capacity - 1), I compute the modulo separately. In theory, using Interlocked is the same as locking before doing the increment, and since there are semaphores, multiple producers/consumers may enter the queue, but only one at a time is able to modify the queue pointers.
First, because Interlocked.Increment first increments and then returns, I already understand that I am limited to using the post-increment value and must start storing items from position 1 in the array. It's not a problem; I'll go back to 0 when I reach a certain value.
What's the problem with it?
You wouldn't believe it: running under heavy load, sometimes the queue returns a NULL value. I am SURE, repeat, I AM SURE, that no method enqueues null into the queue. This is definitely true because I tried to put a null check in Enqueue to be sure, and no error was thrown. I created a test case for that with Visual Studio (by the way, I use a dual-core CPU like maaaaaaaany people):
private int _errors;

[TestMethod()]
public void ConcurrencyTest()
{
    const int size = 3; //Perform more tests changing it
    _errors = 0;
    IFifoQueue<object> queue = new FastFifoQueue<object>(2048);
    Thread.CurrentThread.Priority = ThreadPriority.AboveNormal;
    Thread[] producers = new Thread[size], consumers = new Thread[size];
    for (int i = 0; i < size; i++)
    {
        producers[i] = new Thread(LoopProducer) { Priority = ThreadPriority.BelowNormal };
        consumers[i] = new Thread(LoopConsumer) { Priority = ThreadPriority.BelowNormal };
        producers[i].Start(queue);
        consumers[i].Start(queue);
    }
    Thread.Sleep(new TimeSpan(0, 0, 1, 0));
    for (int i = 0; i < size; i++)
    {
        producers[i].Abort();
        consumers[i].Abort();
    }
    Assert.AreEqual(0, _errors);
}

private void LoopProducer(object queue)
{
    try
    {
        IFifoQueue<object> q = (IFifoQueue<object>)queue;
        while (true)
        {
            try
            {
                q.Enqueue(new object());
            }
            catch
            { }
        }
    }
    catch (ThreadAbortException)
    { }
}

private void LoopConsumer(object queue)
{
    try
    {
        IFifoQueue<object> q = (IFifoQueue<object>)queue;
        while (true)
        {
            object item = q.Dequeue();
            if (item == null) Interlocked.Increment(ref _errors);
        }
    }
    catch (ThreadAbortException)
    { }
}
Once a null is received by a consumer thread, an error is counted.
When performing the test with 1 producer and 1 consumer, it succeeds. When performing the test with 2 producers and 2 consumers, or more, a disaster happens: even 2000 leaks are detected. I found that the problem may be in the Enqueue method. By design contract, a producer can write only into a cell that is empty (null), but after adding some diagnostics to my code I found that sometimes a producer is trying to write into a non-empty cell, which is already occupied by "good" data.
public void Enqueue(T item)
{
    _writeSema.WaitOne();
    int index = Interlocked.Increment(ref _head);
    index %= _capacity;
    if (index < 0) index += _capacity;
    //_array[index] = item;
    T leak = Interlocked.Exchange(ref _array[index], item);
    //Diagnostic code
    if (leak != null)
    {
        throw new InvalidOperationException("Too bad...");
    }
    Interlocked.Increment(ref _count);
    _readSema.Release();
}
The "too bad" exception happens then often. But it's too strange that a conflict raises from concurrent writes, because increments are atomic and writer's semaphore allows only as many writers as the free array cells.
Can somebody help me with that? I would really appreciate if you share your skills and experience with me.
Thank you.
I must say, this struck me as a very clever idea, and I thought about it for a while before I started to realize where (I think) the bug is here. So, on one hand, kudos on coming up with such a clever design! But, at the same time, shame on you for demonstrating "Kernighan's Law":
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
The issue is basically this: you are assuming that the WaitOne and Release calls effectively serialize all of your Enqueue and Dequeue operations; but that isn't quite what is going on here. Remember that the Semaphore class is used to restrict the number of threads accessing a resource, not to ensure a particular order of events. What happens between each WaitOne and Release is not guaranteed to occur in the same "thread-order" as the WaitOne and Release calls themselves.
This is tricky to explain in words, so let me try to provide a visual illustration.
Let's say your queue has a capacity of 8 and looks like this (let 0 represent null and x represent an object):
[ x x x x x x x x ]
So Enqueue has been called 8 times and the queue is full. Therefore your _writeSema semaphore will block on WaitOne, and your _readSema semaphore will return immediately on WaitOne.
Now let's suppose Dequeue is called more or less concurrently on 3 different threads. Let's call these T1, T2, and T3.
Before proceeding let me apply some labels to your Dequeue implementation, for reference:
public T Dequeue()
{
    _readSema.WaitOne();                                    // A
    int index = Interlocked.Increment(ref _tail);           // B
    index %= _capacity;
    if (index < 0) index += _capacity;
    T ret = Interlocked.Exchange(ref _array[index], null);  // C
    Interlocked.Decrement(ref _count);
    _writeSema.Release();                                    // D
    return ret;
}
OK, so T1, T2, and T3 have all gotten past point A. Then for simplicity let's suppose they each reach line B "in order", so that T1 has an index of 0, T2 has an index of 1, and T3 has an index of 2.
So far so good. But here's the gotcha: there is no guarantee that from here, T1, T2, and T3 are going to get to line D in any specified order. Suppose T3 actually gets ahead of T1 and T2, moving past line C (and thus setting _array[2] to null) and all the way to line D.
After this point, _writeSema will be signaled, meaning you have one slot available in your queue to write to, right? But your queue now looks like this!
[ x x 0 x x x x x ]
So if another thread has come along in the meantime with a call to Enqueue, it will actually get past _writeSema.WaitOne, increment _head, and get an index of 0, even though slot 0 is not empty. The result of this will be that the item in slot 0 could actually be overwritten, before T1 (remember him?) reads it.
To understand where your null values are coming from, you need only to visualize the reverse of the process I just described. That is, suppose your queue looks like this:
[ 0 0 0 0 0 0 0 0 ]
Three threads, T1, T2, and T3, all call Enqueue nearly simultaneously. T3 increments _head last but inserts its item (at _array[2]) and calls _readSema.Release first, resulting in a signaled _readSema but a queue looking like:
[ 0 0 x 0 0 0 0 0 ]
So if another thread has come along in the meantime with a call to Dequeue (before T1 and T2 are finished doing their thing), it will get past _readSema.WaitOne, increment _tail, and get an index of 0, even though slot 0 is empty.
So there's your problem. As for a solution, I don't have any suggestions at the moment. Give me some time to think it over... (I'm posting this answer now because it's fresh in my mind and I feel it might help you.)
(+1 to Dan Tao who I vote has the answer)
The enqueue would be changed to something like this...
while (Interlocked.CompareExchange(ref _array[index], item, null) != null)
    ;
The dequeue would be changed to something like this...
while ((ret = Interlocked.Exchange(ref _array[index], null)) == null)
    ;
This builds upon Dan Tao's excellent analysis. Because the indexes are atomically obtained, then (assuming that no threads die or terminate in the enqueue or dequeue methods) a reader is guaranteed to eventually have his cell filled in, or the writer is guaranteed to eventually have his cell freed (null).
Thank you Dan Tao and Les,
I really appreciate your help a lot. Dan, you opened my mind: it's not important how many producers/consumers are inside the critical section; the important thing is that the locks are released in order. Les, you found the solution to the problem.
Now it's time to finally answer my own question with the final code I made thanks to the help of both of you. Well, it's not much, just a small enhancement of Les's code.
Enqueue:
while (Interlocked.CompareExchange(ref _array[index], item, null) != null)
    Thread.Sleep(0);
Dequeue:
while ((ret = Interlocked.Exchange(ref _array[index], null)) == null)
    Thread.Sleep(0);
Why Thread.Sleep(0)? When we find that an element cannot be retrieved/stored, why check again immediately? I need to force a context switch to allow other threads to read/write. Obviously, the next thread that gets scheduled could be another thread unable to operate, but at least we force the switch. Source: http://progfeatures.blogspot.com/2009/05/how-to-force-thread-to-perform-context.html
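Putting the pieces together, the fixed methods look roughly like this (a consolidated sketch of the accepted fix, assuming the fields of FastFifoQueue above and a reference-type T):

public void Enqueue(T item)
{
    _writeSema.WaitOne();
    int index = Interlocked.Increment(ref _head);
    index %= _capacity;
    if (index < 0) index += _capacity;
    // Spin (yielding the time slice) until the slot is really free, then claim it.
    while (Interlocked.CompareExchange(ref _array[index], item, null) != null)
        Thread.Sleep(0);
    Interlocked.Increment(ref _count);
    _readSema.Release();
}

public T Dequeue()
{
    _readSema.WaitOne();
    int index = Interlocked.Increment(ref _tail);
    index %= _capacity;
    if (index < 0) index += _capacity;
    T ret;
    // Spin (yielding the time slice) until the slot is really filled, then take it.
    while ((ret = Interlocked.Exchange(ref _array[index], null)) == null)
        Thread.Sleep(0);
    Interlocked.Decrement(ref _count);
    _writeSema.Release();
    return ret;
}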
I also tested the code of the previous test case to get proof of my claims:
without sleep(0)
Read 6164150 elements
Wrote 6322541 elements
Read 5885192 elements
Wrote 5785144 elements
Wrote 6439924 elements
Read 6497471 elements
with sleep(0)
Wrote 7135907 elements
Read 6361996 elements
Wrote 6761158 elements
Read 6203202 elements
Wrote 5257581 elements
Read 6587568 elements
I know this is not a "great" discovery and I will win no Turing prize for these numbers. The performance increase is not dramatic, but it is greater than zero. Forcing a context switch allows more read/write operations to be performed (= higher throughput).
To be clear: in my test I merely evaluate the performance of the queue, not simulate a producer/consumer problem, so I don't care if there are still elements in the queue at the end of the test after a minute. But I just demonstrated that my approach works, thanks to you all.
Code available open source as MS-RL: http://logbus-ng.svn.sourceforge.net/viewvc/logbus-ng/trunk/logbus-core/It.Unina.Dis.Logbus/Utils/FastFifoQueue.cs?revision=461&view=markup
