How can I de-reference an object in C#? - c#

I have a buffer that cycles between two indices and I want to write out the object at the current index in a task and allow the rest of the program to continue processing things. I have attempted to simplify the process while maintaining all the pertinent parts.
object[] buffer = new object[2];
int currentIndex = 0;
while (true)
{
    buffer[currentIndex].field1 = newdatahere; // data grabbed by sensor bundle
    buffer[currentIndex].field2 = newdatahere; // data grabbed by camera bundle
    buffer[currentIndex].field3 = newdatahere; // data grabbed from system snapshot
    Task.Factory.StartNew(() =>
    {
        writeOutObject(buffer[currentIndex]);
    });
    buffer[currentIndex] = new object();
    currentIndex = 1 - currentIndex; // cycle between the 0 and 1 indices
}
void writeOutObject(object obj)
{
    // do file IO here
    // write out field1, field2, field3
}
The problem is that by assigning the buffer item to a new object I am killing the writeOutObject method because the obj no longer exists by the time the task runs. I want to be able to keep the old object until it is written out and have the buffer point to a new object.
What I want to do:
object obj1 = new object();
obj1.field1 = data1;
obj1.field2 = data2;
obj1.field3 = data3;
obj2 = obj1;
//de-reference obj1 from the object that it was pointed to and associate it to a new object
// i want this to write out data1,data2,data3 but instead it is
// writing out data4,data5,data6 or some mixture because it has
// been overwritten halfway through the file IO
Task.Factory.StartNew(() => { /* write out obj2 */ });
obj1.field1 = data4;
obj1.field2 = data5;
obj1.field3 = data6;
Maybe something like:
obj1 = new object()
obj2* = &obj1
obj1* = &new object
I need to break the reference of obj1 back to obj2 once it has been assigned. Simply doing this won't work:
obj1 = new object();
obj2 = obj1;
obj1 = null; // or new object()
As requested, "The Real Code"
CancellationTokenSource cts = new CancellationTokenSource();
public void StartMachine()
{
Task.Factory.StartNew(() =>
{
_isFirstData = true;
_expiredFlag = false;
Plc.StartPLC();
Plc.Start();
while (true)
{
if (!_paused && !Plc.IsInputStackEmpty() && !Plc.IsOutputSlideOpen())
{
CameraFront.SnapAquire();
// If this is the first data set the wait handles
if (!_isFirstData)
{
CameraBack.SnapAquire();
}
else
{
_imageBackRecieved.Set();
_databaseInfoRecieved.Set();
//_isFirstCard = false;
}
// Wait for 3 things! Image Front, Image Back, Database
bool gotEvents = WaitHandle.WaitAll(_waitHandles, TIMEOUT);
if (gotEvents)
{
if (!_isFirstData)
{
if (Buffer[1 - NextDataOutIndex].IsDataComplete())
{
if (Buffer[1 - NextDataOutIndex].EvaluateData())
{
OnPassFailNotification();
Plc.Pass();
}
else
{
OnPassFailNotification();
Plc.Fail();
}
}
else
{
OnPassFailNotification();
Plc.Fail();
Common.Logging(/* ... */);
}
}
else
{
_isFirstData = false;
}
}
else
{
Common.Logging("WARNING: Wait handle timed out"
Plc.Fail();
}
Data temp = Buffer[1 - NextDataOutIndex];
Task.Factory.StartNew(() =>
{
Data.WriteData(temp);
});
Buffer[1 - NextDataOutIndex] = new Data();
// Swap card buffers - alternate between 1 and 0
NextDataOutIndex = 1 - NextDataOutIndex;
// Do this
Plc.WheelAdvance();
}
else
{
}
}
}, cts.Token);
}
public static void WriteData(Data data)
{
if(WRITE_BATCH_FILES)
try
{
if (data.ImageFront != null)
{
string filenameforfront = "blahlbah-front.tiff";
HOperatorSet.WriteImage(data.ImageFront, "tiff", 0, filenameforfront);
}
if (data.ImageBack != null)
{
string filenameforback = "blahblah-back.tiff";
HOperatorSet.WriteImage(data.ImageBack, "tiff", 0, filenameforback);
}
}
catch (Exception ex)
{
Common.Logging(/* ... */);
//throw ex;
}
//TODO: Write out data in xml
//TODO: Write out metrics
}

Just before your Task.Factory.StartNew call, do the following:
while(...)
{
... bunch of other code
buildTask(buffer[currentIndex]);
buffer[currentIndex] = new object();
... bunch of other code
}
// Within this method, detachedBuffer keeps pointing to the same instance,
// no matter whether the variable that was passed in is later reassigned.
public void buildTask(object detachedBuffer)
{
    Task.Factory.StartNew(() =>
    {
        writeOutObject(detachedBuffer);
    });
}
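The same thing can be done without the helper method: copy the element into a local variable before starting the task, so the lambda closes over that local rather than over buffer[currentIndex]. A minimal sketch, reusing the placeholder types from the question:

while (true)
{
    // ... fill buffer[currentIndex] ...

    // The closure captures the local 'detached', not the array slot, so
    // replacing buffer[currentIndex] afterwards does not affect the task.
    object detached = buffer[currentIndex];
    Task.Factory.StartNew(() => writeOutObject(detached));

    buffer[currentIndex] = new object();
    currentIndex = 1 - currentIndex;
}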

Sounds like a job for Semaphores!
Semaphores are a form of inter-thread communication that is ideal for this situation, as they allow one thread to acquire the semaphore and another to release it again. In the code sample below, the sem.WaitOne() line waits until sem.Release() has been called. This blocks your main thread just long enough for your task to get hold of the data it needs.
object[] buffer = new object[2];
int currentIndex = 0;
while (true)
{
    buffer[currentIndex].field1 = newdatahere; // data grabbed by sensor bundle
    buffer[currentIndex].field2 = newdatahere; // data grabbed by camera bundle
    buffer[currentIndex].field3 = newdatahere; // data grabbed from system snapshot
    Semaphore sem = new Semaphore(0, 1); // initialise the semaphore so that it starts out checked out
    Task.Factory.StartNew(() =>
    {
        object item = buffer[currentIndex]; // create a local reference to the data item
        sem.Release();                      // check the semaphore back in (lets WaitOne return)
        writeOutObject(item);
    });
    sem.WaitOne(); // block until the task has taken its reference
    buffer[currentIndex] = new object();
    currentIndex = 1 - currentIndex; // cycle between the 0 and 1 indices
}
void writeOutObject(object obj)
{
    // do file IO here
    // write out field1, field2, field3
}

Related

Thread-safe buffer that propagates the latest data

I have a data source which creates (produces) a PointF every 15 to 20 milliseconds.
I need to store (consume) such points every 10 ms. My approach is to use a three-element buffer and read/write pointers to achieve lock-free access:
protected class PosBuffer
{
PointF[] m_Buffer = new PointF[3];
volatile int m_ReadPointer = 0;
volatile int m_WritePointer = 1;
internal PosBuffer()
{
m_Buffer[0] = new PointF(0, 0);
m_Buffer[1] = new PointF(0, 0);
m_Buffer[2] = new PointF(0, 0);
}
internal void Write(PointF point)
{
m_Buffer[m_WritePointer] = point;
m_ReadPointer++;
if (m_ReadPointer == 3) m_ReadPointer = 0;
m_WritePointer++;
if (m_WritePointer == 3) m_WritePointer = 0;
}
internal PointF Read()
{
return m_Buffer[m_ReadPointer];
}
}
My idea is that
as soon as new data arrives it will be stored 'above' the old data. Then the read pointer is set to this position and then the write pointer is incremented.
in case no new data has been produced, the consumer thread reads the old data again.
This construction allows different or inconstant read and write rates.
My questions are:
would this approach work?
Do I need locks/monitors/critical sections...
Would I need to disable optimization?
Are there known better solutions?
Try running this code:
async Task Main()
{
var cts = new CancellationTokenSource();
var ct = cts.Token;
var pb = new PosBuffer();
var tw = Task.Run(() =>
{
while (true)
{
if (ct.IsCancellationRequested)
break;
pb.Write(new PointF());
}
});
var tr = Task.Run(() =>
{
while (true)
{
if (ct.IsCancellationRequested)
break;
pb.Read();
}
});
await Task.Delay(TimeSpan.FromSeconds(5.0));
cts.Cancel();
}
Fairly quickly it throws an IndexOutOfRangeException. You're letting the value of the "pointers" (a bad name, by the way) reach 3 before dropping back to zero, and in the window before the value is set back down the read operation throws.
The problem goes away if you increment like this:
m_ReadPointer = (m_ReadPointer == m_Buffer.Length - 1) ? 0 : m_ReadPointer + 1;
m_WritePointer = (m_WritePointer == m_Buffer.Length - 1) ? 0 : m_WritePointer + 1;
Now, if you have multiple writers then you're definitely going to need locking.
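For example, a minimal sketch of guarding the writer side with a lock, keeping the bounds-safe increments from above (the lock object is an extra field, not part of the original class):

private readonly object m_WriteLock = new object();

internal void Write(PointF point)
{
    lock (m_WriteLock) // serialise concurrent writers
    {
        m_Buffer[m_WritePointer] = point;
        m_ReadPointer = (m_ReadPointer == m_Buffer.Length - 1) ? 0 : m_ReadPointer + 1;
        m_WritePointer = (m_WritePointer == m_Buffer.Length - 1) ? 0 : m_WritePointer + 1;
    }
}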
You could consider using a BroadcastBlock<T> from the TPL Dataflow library:
Provides a buffer for storing at most one element at time, overwriting each message with the next as it arrives.
using System.Threading.Tasks.Dataflow;
// Initialize
BroadcastBlock<PointF> block = new(x => x);
// Write the latest value
block.Post(new PointF(0, 0));
// Fetch the latest value
PointF point = await block.ReceiveAsync();
Another idea is to use a BehaviorSubject<T> from the Rx library.
Represents a value that changes over time.
using System.Reactive.Subjects;
// Initialize
BehaviorSubject<PointF> subject = new(new PointF(0, 0));
// Write the latest value
subject.OnNext(new PointF(0, 0));
// Get the latest value
PointF point = subject.Value;
Both classes (BroadcastBlock<T> and BehaviorSubject<T>) are thread-safe.
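A usage sketch for the question's rates, with one task posting to the BroadcastBlock roughly every 15 ms and another sampling the latest value every 10 ms (GetNextPoint() stands in for the real data source):

BroadcastBlock<PointF> latest = new(x => x);

var producer = Task.Run(async () =>
{
    while (true)
    {
        latest.Post(GetNextPoint()); // placeholder for the actual measurement
        await Task.Delay(15);
    }
});

var consumer = Task.Run(async () =>
{
    while (true)
    {
        PointF point = await latest.ReceiveAsync(); // always the most recently posted value
        // ... use point ...
        await Task.Delay(10);
    }
});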
For your case of a single value, I would suggest ReaderWriterLockSlim, assuming you need thread-safe reads and writes with multiple threads.
protected class PosBuffer
{
private PointF m_Buffer;
private ReaderWriterLockSlim m_Lock = new();
internal void Write(PointF point)
{
m_Lock.EnterWriteLock();
try
{
m_Buffer = point;
}
finally
{
m_Lock.ExitWriteLock();
}
}
internal PointF Read()
{
m_Lock.EnterReadLock();
try
{
return m_Buffer;
}
finally
{
m_Lock.ExitReadLock();
}
}
}
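A usage sketch with one writer and several readers; the advantage of ReaderWriterLockSlim over a plain lock here is that concurrent readers do not block each other, only the writer does:

var buffer = new PosBuffer();

var writer = Task.Run(() =>
{
    for (int i = 0; i < 100000; i++)
        buffer.Write(new PointF(i, i));
});

var readers = Enumerable.Range(0, 4).Select(_ => Task.Run(() =>
{
    for (int i = 0; i < 100000; i++)
    {
        PointF latest = buffer.Read(); // readers run in parallel under the read lock
    }
})).ToArray();

Task.WaitAll(readers.Append(writer).ToArray());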

Fill List<object> in a new thread and get it back to the UI Thread

I've got the following function generating a List of LinesVisual3D objects. I need to do this in a new task, because it's very time-consuming.
public async Task<List<LinesVisual3D>> Create2dGcodeLayerModelListAsync(IProgress<int> prog = null)
{
var list = new List<LinesVisual3D>();
try
{
await Task.Factory.StartNew(delegate ()
{
var temp = new List<LinesVisual3D>();
int i = 0;
foreach (List<GCodeCommand> commands in Model.Commands)
{
var line = drawLayer(i, 0, Model.Commands[i].Count, false);
/*
if (DispatcherObject.Thread != Thread.CurrentThread)
DispatcherObject.BeginInvoke(new Action(() => list.Add(drawLayer(i, 0, Model.Commands[i].Count, false))));
*/
temp.Add(drawLayer(i, 0, Model.Commands[i].Count, false));
if (prog != null)
{
float test = (((float)i / Model.Commands.Count) * 100f);
if (i < Model.Commands.Count - 1)
prog.Report(Convert.ToInt32(test));
else
prog.Report(100);
}
i++;
}
list = new List<LinesVisual3D>(temp);
});
LayerModelGenerated = true;
return list;
}
catch (Exception exc)
{
return list;
}
}
It actually works fine; however, I do not get the list out of the background thread. When I access the list in the UI (a different thread), I'll get this result:
I know that the problem is that the list was filled/generated in a different thread (in this case, ThreadId = 3), while the UI is running in ThreadId = 1.
I already tried to invoke directly after the loop has finished.
//DispatcherObject = Dispatcher.CurrentDispatcher;
if (DispatcherObject.Thread != Thread.CurrentThread)
DispatcherObject.BeginInvoke(new Action(() => list = new List<LinesVisual3D>(temp)));
else
list = new List<LinesVisual3D>(temp);
I also tried to invoke while adding to the list.
if (DispatcherObject.Thread != Thread.CurrentThread)
DispatcherObject.BeginInvoke(new Action(() => list.Add(drawLayer(i, 0, Model.Commands[i].Count, false))));
The result always was the same.
EDIT1:
Tried with single instance instead of list.
public async Task<LinesVisual3D> Create2dGcodeLayerAsync(IProgress<int> prog = null)
{
var temp = new LinesVisual3D();
try
{
await Task.Factory.StartNew(delegate ()
{
if (DispatcherObject.Thread != Thread.CurrentThread)
DispatcherObject.BeginInvoke(new Action(() => temp = drawLayer(3, 0, Model.Commands[3].Count, false)));
//temp = drawLayer(3, 0, Model.Commands[3].Count, false);
});
LayerModelGenerated = true;
return temp;
}
catch (Exception exc)
{
return temp;
}
}
This seems to work. I guess it's because the temp object is created outside the new task, even though it is assigned inside the task?
EDIT2:
This function works, however it freezes the UI...
public async Task<List<LinesVisual3D>> Create2dGcodeLayerModelListAsync(IProgress<int> prog = null)
{
var list = new List<LinesVisual3D>();
var line = new LinesVisual3D();
try
{
await Task.Factory.StartNew(delegate ()
{
var temp = new List<LinesVisual3D>();
int i = 0;
foreach (List<GCodeCommand> commands in Model.Commands)
{
// Freezes the UI...
if (DispatcherObject.Thread != Thread.CurrentThread)
{
DispatcherObject.Invoke(new Action(() =>
{
line = drawLayer(i, 0, Model.Commands[i].Count, false);
list.Add(line);
}));
}
if (prog != null)
{
float test = (((float)i / Model.Commands.Count) * 100f);
if (i < Model.Commands.Count - 1)
prog.Report(Convert.ToInt32(test));
else
prog.Report(100);
}
i++;
}
});
LayerModelGenerated = true;
return list;
}
catch (Exception exc)
{
return list;
}
}
Either it works and freezes the UI, or it leaves the list in the old thread and doesn't freeze the UI :(
Solution:
Instead of creating the LinesVisual3D object in the loop, I just create a List of Point3D and create a new LinesVisual3D at the Invoke.
public async Task<List<LinesVisual3D>> Create2dGcodeLayerModelListAsync(IProgress<int> prog = null)
{
var list = new List<LinesVisual3D>();
var line = new LinesVisual3D();
try
{
await Task.Factory.StartNew(delegate ()
{
var temp = new List<LinesVisual3D>();
int i = 0;
foreach (List<GCodeCommand> commands in Model.Commands)
{
var pointsPerLayer = getLayerPointsCollection(i, 0, Model.Commands[i].Count, false);
if (DispatcherObject.Thread != Thread.CurrentThread)
{
DispatcherObject.Invoke(new Action(() =>
{
line = new LinesVisual3D() { Points = new Point3DCollection(pointsPerLayer)};
list.Add(line);
}));
}
if (prog != null)
{
float test = (((float)i / Model.Commands.Count) * 100f);
if (i < Model.Commands.Count - 1)
prog.Report(Convert.ToInt32(test));
else
prog.Report(100);
}
i++;
}
});
LayerModelGenerated = true;
return list;
}
catch (Exception exc)
{
return list;
}
}
Points:
private List<Point3D> getLayerPointsCollection(int layerNumber, int fromProgress, int toProgress, bool isNextLayer)
{
...
}
I do not get the list out of the background thread. When I access the list in the UI (different thread), I'll get this result:
You do get the list out of the background thread; what causes problems is accessing the properties of the individual list items.
Most likely those items are UI objects themselves and have to be created on the UI thread. So you have to dispatch each construction back to the UI thread, which essentially leaves adding the items to the list as the only job for the background task; that will hardly be CPU-bound, so just drop the background thread.
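A sketch of that advice applied to the question's method (assuming drawLayer has to run on the UI thread anyway; the short await between layers just lets the dispatcher process pending input and rendering):

public async Task<List<LinesVisual3D>> Create2dGcodeLayerModelListAsync(IProgress<int> prog = null)
{
    var list = new List<LinesVisual3D>();
    for (int i = 0; i < Model.Commands.Count; i++)
    {
        // Created directly on the UI thread, so no dispatching is needed.
        list.Add(drawLayer(i, 0, Model.Commands[i].Count, false));
        prog?.Report((int)((float)(i + 1) / Model.Commands.Count * 100f));

        // Give the dispatcher a chance to breathe between layers.
        await Task.Delay(1);
    }
    LayerModelGenerated = true;
    return list;
}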
At a guess I'd say the initial call to Create2dGcodeLayerModelListAsync returns the first, empty instance of list. When the inner thread starts it also creates a new instance of list (in fact two new instances, as the last line initialises yet another instance from temp), and the populated instances are never returned from Create2dGcodeLayerModelListAsync.
Try replacing
temp.Add(drawLayer(i, 0, Model.Commands[i].Count, false));
with
list.Add(drawLayer(i, 0, Model.Commands[i].Count, false));
and delete
list = new List<LinesVisual3D>(temp);
As the call to Create2dGcodeLayerModelListAsync is async (await ...), it will spin up a worker thread anyway; the inner thread is redundant and wastes time spinning up yet another thread. But start by getting rid of the extra new instances first.

How do I fetch get request results one by one in Unity3D?

I'm using UnityWebRequest to fetch a JSON array of search results from an API. Instead of yielding until the whole array is returned, can I make it so that my code deals with the results one by one?
Any directions are appreciated.
Here's some code:
public IEnumerator GetMovies(string q)
{
string uri = "http://www.omdbapi.com/?apikey=__&s=" + q;
using (UnityWebRequest r = UnityWebRequest.Get(uri))
{
yield return r.SendWebRequest();
if(r.isHttpError || r.isNetworkError)
{
Debug.Log(r.error);
}
else
{
SearchInfo info = JsonUtility.FromJson<SearchInfo>(r.downloadHandler.text);
if(info != null)
{
gameObject.GetComponent<EventManager>().onSearchInfoGet(info);
}
}
}
}
Put it on a thread
JsonUtility.FromJson
The versions of this method that take strings can be called from background threads.
So you could try and simply do something like
// A thread-safe queue (requires using System.Collections.Concurrent;)
ConcurrentQueue<SearchInfo> callbacks = new ConcurrentQueue<SearchInfo>();
public IEnumerator GetMovies(string q)
{
var uri = "http://www.omdbapi.com/?apikey=__&s=" + q;
using (var r = UnityWebRequest.Get(uri))
{
yield return r.SendWebRequest();
if (r.isHttpError || r.isNetworkError)
{
Debug.Log(r.error);
}
else
{
// start the deserialization task in a background thread
// and pass in the returned string
Thread thread = new Thread(new ParameterizedThreadStart(DeserializeAsync));
thread.Start(r.downloadHandler.text);
// wait until the thread writes a result to the queue
yield return new WaitUntil(()=> !callbacks.IsEmpty);
// read the first entry in the queue and remove it at the same time
if (callbacks.TryDequeue(out var result))
{
GetComponent<EventManager>().onSearchInfoGet(result);
}
}
}
}
// This happens in a background thread!
private void DeserializeAsync(object json)
{
// now it shouldn't matter how long this takes
var info = JsonUtility.FromJson<SearchInfo>((string)json);
// dispatch the result back to the main thread
callbacks.Enqueue(info);
}
Maybe there are more efficient ways of dispatching a single data event back to the main thread than a queue ... but at least I can say the ConcurrentQueue is thread-safe ;)
Alternative (Maybe)
Instead of using JsonUtility you could use e.g. SimpleJSON; you only need to create a C# script with the content of SimpleJSON.cs somewhere in your Assets.
Assuming a JSON like
{
"Search" : [
{
"Title":"AB",
"Year":"1999",
"imdbID":"abcdefg",
"Type":"AB"
},
{
"Title":"IJ",
"Year":"2000",
"imdbID":"abcdefg",
"Type":"IJ"
},
{
"Title":"XY",
"Year":"2001",
"imdbID":"abcdefg",
"Type":"XY"
}
]
}
and your SearchInfo like
// This you might not need anymore
[System.Serializable]
public class SearchInfo
{
public MovieSearchInfo[] Search;
}
[System.Serializable]
public class MovieSearchInfo
{
public string Title;
public string Year;
public string imdbID;
public string Type;
}
Then you could use it in order to parse the classes "manually", e.g.
// Give your IEnumerator a callback parameter
public IEnumerator GetMovies(string q, Action<JSONArray> OnSuccess = null)
{
var uri = "http://www.omdbapi.com/?apikey=__&s=" + q;
using (var r = UnityWebRequest.Get(uri))
{
yield return r.SendWebRequest();
if (r.isHttpError || r.isNetworkError)
{
Debug.Log(r.error);
}
else
{
// Not sure though if this call is faster than
// simply using JsonUtility ...
var N = JSON.Parse(r.downloadHandler.text);
var theJsonArray = N["Search"].Values;
// Now depending on what you actually mean by one-by-one
// you could e.g. handle only one MovieSearchInfo per frame like
foreach (var item in theJsonArray)
{
var movieInfo = new MovieSearchInfo();
movieInfo.Title = item["Title"];
movieInfo.Year = item["Year"];
movieInfo.imdbID = item["imdbID"];
movieInfo.Type = item["Type"];
// NOW DO SOMETHING WITH IT
// wait for the next frame to continue
yield return null;
}
}
}
}
Otherwise you could also check out other JSON libraries.

c# - Waiting for 1 of 2 threads to be finished

I have a place in my code where I need to wait for either a finger to be identified on a sensor, or for the user to press a key to abort the action and return to the main menu.
I tried using something like condition variables with Monitor and lock concepts, but when I try to alert the main thread, nothing happens.
CODE:
private static object _syncFinger = new object(); // used for syncing
private static bool AttemptIdentify()
{
// waiting for either the user cancels or a finger is inserted
lock (_syncFinger)
{
Thread tEscape = new Thread(new ThreadStart(HandleIdentifyEscape));
Thread tIdentify = new Thread(new ThreadStart(HandleIdentify));
tEscape.IsBackground = false;
tIdentify.IsBackground = false;
tEscape.Start();
tIdentify.Start();
Monitor.Wait(_syncFinger); // -> Wait part
}
// Checking the change in the locked object
if (_syncFinger is FingerData) // checking for identity found
{
Console.WriteLine("Identity: {0}", ((FingerData)_syncFinger).Guid.ToString());
}
else if(!(_syncFinger is Char)) // char - pressed a key to return
{
return false; // returns with no error
}
return true;
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
_syncFinger = new Char();
LockNotify((object)_syncFinger);
}
private static void HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
_syncFinger = temp;
LockNotify(_syncFinger);
}
private static void LockNotify(object syncObject)
{
lock(syncObject)
{
Monitor.Pulse(syncObject);
}
}
when i try to alert the main thread, nothing happens.
That's because the main thread is waiting on the monitor for the object created here:
private static object _syncFinger = new object(); // used for syncing
But each of your threads replaces that object value, and then signals the monitor for the new object. The main thread has no knowledge of the new object, and so of course signaling the monitor for that new object will have no effect on the main thread.
First, any time you create an object for the purpose of using with lock, make it readonly:
private static readonly object _syncFinger = new object(); // used for syncing
It's always the right thing to do, and it will prevent you from ever making the mistake of changing the monitored object while a thread is waiting on it.
Next, create a separate field to hold the WinBioIdentity value, e.g.:
private static WinBioIdentity _syncIdentity;
And use that to relay the result back to the main thread:
private static bool AttemptIdentify()
{
// waiting for either the user cancels or a finger is inserted
lock (_syncFinger)
{
_syncIdentity = null;
Thread tEscape = new Thread(new ThreadStart(HandleIdentifyEscape));
Thread tIdentify = new Thread(new ThreadStart(HandleIdentify));
tEscape.IsBackground = false;
tIdentify.IsBackground = false;
tEscape.Start();
tIdentify.Start();
Monitor.Wait(_syncFinger); // -> Wait part
}
// Checking the change in the locked object
if (_syncIdentity != null) // checking for identity found
{
Console.WriteLine("Identity: {0}", ((FingerData)_syncIdentity).Guid.ToString());
return true;
}
return false; // returns with no error
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
LockNotify((object)_syncFinger);
}
private static void HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
_syncIdentity = temp;
LockNotify(_syncFinger);
}
All that said, you should prefer to use the modern async/await idiom for this:
private static bool AttemptIdentify()
{
Task<WinBioIdentity> fingerTask = Task.Run(HandleIdentify);
Task cancelTask = Task.Run(HandleIdentifyEscape);
if (Task.WaitAny(fingerTask, cancelTask) == 0)
{
Console.WriteLine("Identity: {0}", fingerTask.Result.Guid);
return true;
}
return false;
}
private static void HandleIdentifyEscape()
{
do
{
Console.Write("Enter 'c' to cancel: ");
} while (Console.ReadKey().Key != ConsoleKey.C);
}
private static WinBioIdentity HandleIdentify()
{
WinBioIdentity temp = null;
do
{
Console.WriteLine("Enter your finger.");
try // trying to indentify
{
temp = Fingerprint.Identify(); // returns FingerData type
}
catch (Exception ex)
{
Console.WriteLine("ERROR: " + ex.Message);
}
// if couldn't identify, temp would stay null
if(temp == null)
{
Console.Write("Invalid, ");
}
} while (temp == null);
return temp;
}
The above is a bare-minimum example. It would be better to make the AttemptIdentify() method async itself, and then use await Task.WhenAny() instead of Task.WaitAny(). It would also be better to include some mechanism to interrupt the tasks, i.e. once one has completed, you would want to interrupt the other so it's not lying around continuing to attempt its work.
But those kinds of issues are not unique to the async/await version, and don't need to be solved to improve on the code you have now.
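For reference, a sketch of the fully async version described above, with a CancellationToken used to tell the losing task to stop; it assumes HandleIdentify and HandleIdentifyEscape are changed to accept the token and check it in their loops (whether Fingerprint.Identify() itself can be interrupted depends on that API):

private static async Task<bool> AttemptIdentifyAsync()
{
    using (var cts = new CancellationTokenSource())
    {
        // Handlers are assumed to take a CancellationToken and poll it between attempts.
        Task<WinBioIdentity> fingerTask = Task.Run(() => HandleIdentify(cts.Token));
        Task cancelTask = Task.Run(() => HandleIdentifyEscape(cts.Token));

        Task first = await Task.WhenAny(fingerTask, cancelTask);
        cts.Cancel(); // ask the other task to stop looping

        if (first == fingerTask)
        {
            Console.WriteLine("Identity: {0}", fingerTask.Result.Guid);
            return true;
        }
        return false;
    }
}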

What is the best scenario for one fast producer multiple slow consumers?

I'm looking for the best scenario to implement a one-producer, multiple-consumer multithreaded application.
Currently I'm using one queue as a shared buffer, but it's much slower than the one-producer, one-consumer case.
I'm planning to do it like this:
Queue<item>[] buffs = new Queue<item>[N];
object[] _locks = new object[N];
static void Produce()
{
int curIndex = 0;
while(true)
{
// Produce item;
lock(_locks[curIndex])
{
buffs[curIndex].Enqueue(curItem);
Monitor.Pulse(_locks[curIndex]);
}
curIndex = (curIndex+1)%N;
}
}
static void Consume(object state) // ParameterizedThreadStart: the index arrives as an object
{
int myIndex = (int)state;
item curItem;
while(true)
{
lock(_locks[myIndex])
{
while(buffs[myIndex].Count == 0)
Monitor.Wait(_locks[myIndex]);
curItem = buffs[myIndex].Dequeue();
}
// Consume item;
}
}
static void main()
{
int N = 100;
Thread[] consumers = new Thread[N];
for(int i = 0; i < N; i++)
{
consumers[i] = new Thread(Consume);
consumers[i].Start(i);
}
Thread producer = new Thread(Produce);
producer.Start();
}
Use a BlockingCollection
BlockingCollection<item> _buffer = new BlockingCollection<item>();
static void Produce()
{
while(true)
{
// Produce item;
_buffer.Add(curItem);
}
// eventually stop producing
_buffer.CompleteAdding();
}
static void Consume(object state) // the index is not needed; all consumers share one collection
{
foreach (var curItem in _buffer.GetConsumingEnumerable())
{
// Consume item;
}
}
static void main()
{
int N = 100;
Thread[] consumers = new Thread[N];
for(int i = 0; i < N; i++)
{
consumers[i] = new Thread(Consume);
consumers[i].Start(i);
}
Thread producer = new Thread(Produce);
producer.Start();
}
If you don't want to specify the number of threads from the start, you can use Parallel.ForEach instead.
static void Consume(item curItem)
{
// consume item
}
void Main()
{
Thread producer = new Thread(Produce);
producer.Start();
Parallel.ForEach(_buffer.GetConsumingPartitioner(), Consume); // GetConsumingPartitioner is not in the BCL; it comes from the ParallelExtensionsExtras samples
}
Using more threads won't help. It may even reduce performance. I suggest you try the ThreadPool instead, where every work item is one item created by the producer. However, that doesn't guarantee that the items are consumed in the order they were produced.
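A minimal sketch of that idea (ProduceItem and ProcessItem are placeholders for the real production and consumption steps):

static void Produce()
{
    while (true)
    {
        item curItem = ProduceItem(); // placeholder: create the next item
        // Queue each item as its own work item; order of consumption is not guaranteed.
        ThreadPool.QueueUserWorkItem(state => ProcessItem((item)state), curItem);
    }
}

static void ProcessItem(item workItem)
{
    // consume item
}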
Another way could be to reduce the number of consumers to 4, for example, and modify the way they work as follows:
The producer adds the new work to the queue (there's only one global queue for all worker threads) and then sets a flag to indicate there is new work, like this:
ManualResetEvent workPresent = new ManualResetEvent(false);
Queue<item> workQueue = new Queue<item>();
static void Produce()
{
while(true)
{
// Produce item;
lock(workQueue)
{
workQueue.Enqueue(newItem);
workPresent.Set();
}
}
}
The consumers wait for work to be added to the queue. Only one consumer will get to do its job. It then takes all the work from the queue and resets the flag. The producer will not be able to add new work until that is done.
static void Consume()
{
while(true)
{
if (workPresent.WaitOne())
{
workPresent.Reset();
Queue<item> localWorkQueue = new Queue<item>();
lock(workQueue)
{
while (workQueue.Count > 0)
localWorkQueue.Enqueue(workQueue.Dequeue());
}
// Handle items in local work queue
...
}
}
}
The outcome of this, however, is a bit unpredictable. It could be that one thread ends up doing all the work while the others do nothing.
I don't see why you have to use multiple queues. Just reduce the amount of locking. Here is a sample where you can have a large number of consumers that all wait for new work.
public class MyWorkGenerator
{
ConcurrentQueue<object> _queuedItems = new ConcurrentQueue<object>();
private object _lock = new object();
public void Produce()
{
while (true)
{
_queuedItems.Enqueue(new object());
lock (_lock) // Monitor.Pulse must be called while holding the lock
{
Monitor.Pulse(_lock);
}
}
}
public object Consume(TimeSpan maxWaitTime)
{
lock (_lock) // Monitor.Wait must be called while holding the lock
{
if (!Monitor.Wait(_lock, maxWaitTime))
return null;
}
object workItem;
if (_queuedItems.TryDequeue(out workItem))
{
return workItem;
}
return null;
}
}
Do note that Pulse() will only trigger one consumer at a time.
Example usage:
static void main()
{
var generator = new MyWorkGenerator();
var consumers = new Thread[20];
for (int i = 0; i < consumers.Length; i++)
{
consumers[i] = new Thread(DoWork);
consumers[i].Start(generator);
}
generator.Produce();
}
public static void DoWork(object state)
{
var generator = (MyWorkGenerator) state;
var workItem = generator.Consume(TimeSpan.FromHours(1));
while (workItem != null)
{
// do work
workItem = generator.Consume(TimeSpan.FromHours(1));
}
}
Note that the actual queue is hidden in the producer, as it's imho an implementation detail. The consumers don't really have to know how the work items are generated.
