Keep the UI thread responsive? - c#

My application uses a WPF control to display many videos from network cameras.
The summarized code is below:
public void DisPlayVideoThreadProc()
{
    while (isDisplayThreadRunning)
    {
        Global.mainWindow.Dispatcher.Invoke(new Action(delegate()
        {
            for (int i = 0; i < numOfCamera; i++)
            {
                BitmapSource img = bitmapQueue[i].Serve(); // Pop the frame from the queue
                ControlDisplay[i].DrawImage(img);          // Draw this frame on ControlDisplay[i]
            }
        }));
    }
}
I encountered a problem when the number of cameras is large (> 15 cameras): the UI thread becomes very slow to respond to user interaction.
I know the UI thread works hard when displaying many camera videos, but I don't know how to improve it. Can someone tell me how to fix this issue?
Many thanks!

Don't draw all cameras in one Invoke; that blocks the GUI thread for too long. You'd be better off calling Invoke per camera draw, or at least in batches of at most 4.
You might also move the Serve() call out of the Invoke, store the result in a dictionary, and update the display with a DispatcherTimer.
PSEUDO:
// Holds the camera images.
public class CameraImage
{
    public bool Updated { get; set; }
    public BitmapSource Image { get; set; }
}

// Cache
private Dictionary<int, CameraImage> _cameraCache = new Dictionary<int, CameraImage>();

// Thread method to get the images.
while (isDisplayThreadRunning)
{
    for (int i = 0; i < numOfCamera; i++)
    {
        BitmapSource img = bitmapQueue[i].Serve(); // Pop the frame from the queue
        lock (_cameraCache)
        {
            CameraImage currentCameraImage;
            if (!_cameraCache.TryGetValue(i, out currentCameraImage))
            {
                _cameraCache.Add(i, currentCameraImage = new CameraImage());
            }
            currentCameraImage.Image = img;
            currentCameraImage.Updated = true;
        }
    }
}
// Index cycler
private int _index;

// Display timer tick handler.
public void DispatcherTimeMethod()
{
    lock (_cameraCache)
    {
        CameraImage currentCameraImage;
        if (_cameraCache.TryGetValue(_index, out currentCameraImage))
        {
            if (currentCameraImage.Updated)
            {
                ControlDisplay[_index].DrawImage(currentCameraImage.Image);
                currentCameraImage.Updated = false;
            }
        }
    }

    _index++;
    if (_index >= MAXCAMERAS)
        _index = 0;
}
If the cameras (all together) generate too many images, this approach will automatically skip frames.
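The pseudo code stops short of wiring up the timer itself. Here is a minimal sketch of that part, assuming the _cameraCache and DispatcherTimeMethod above live in your window class; the field name and the 10 ms interval are placeholders to tune, not part of the original answer:
using System;
using System.Windows.Threading; // DispatcherTimer

// Hypothetical wiring for the DispatcherTimer described above.
private DispatcherTimer _displayTimer;

private void StartDisplayTimer()
{
    _displayTimer = new DispatcherTimer
    {
        // Tick fast enough that every camera index gets serviced;
        // e.g. 20 cameras at 10 ms per tick is roughly 5 updates/second per camera.
        Interval = TimeSpan.FromMilliseconds(10)
    };
    // Ticks run on the UI thread, so DispatcherTimeMethod can touch
    // ControlDisplay[_index] directly without another Invoke.
    _displayTimer.Tick += (s, e) => DispatcherTimeMethod();
    _displayTimer.Start();
}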

Currently you are updating all your cameras on a single thread, the UI thread. This makes the UI thread freeze, even if you don't always notice it.
What I'd recommend is using Parallel.For to update the camera feeds on (multiple) separate threads, then using the UI dispatcher to update the image on the UI.
Something like this:
while (isDisplayThreadRunning)
{
    // Start a new parallel for loop.
    Parallel.For(0, numOfCamera, i =>
    {
        BitmapSource img = bitmapQueue[i].Serve(); // Pop the frame from the queue

        // Draw the new image on the UI thread.
        Global.mainWindow.Dispatcher.Invoke(
            new Action(delegate
            {
                ControlDisplay[i].DrawImage(img); // Draw this frame on ControlDisplay[i]
            }));
    });

    Thread.Sleep(50); // Sleep if desired; lowers CPU usage by limiting the max framerate.
}

Related

Game not printing or updating when AI is thinking

I have a game I am developing in Unity where the AI does large calculations on its turn. It searches the position to depth 1, then 2, then 3, etc. Between each depth I want to instantiate a GameObject with info about the depth in the UI. The problem is that nothing happens until the AI is completely finished; then all items are added at once. Here is some code to explain better:
private void AIMakeMove()
{
    for (int currentDepth = 1; currentDepth < maxDepth + 1; currentDepth++)
    {
        SearchPosition(currentDepth);
    }
}

private void SearchPosition(int _currentDepth)
{
    // Search the position to the given depth
    score = Negamax(_currentDepth);

    // Print things PROBLEM HERE
    GameObject printItem = Instantiate(printItemPrefab, printItemParent.transform);
    Debug.Log(_currentDepth);
}
I also tried just a simple Debug.Log instead of Instantiate, but the same thing happens: all prints to the console happen after the AI is done with its thinking process.
Why is my UI not updating with the information? I tell it to create these things after it runs the first iteration, but it skips this step and goes on to depth 2 instead. Can someone please let me know how to get information out between each depth?
The problem is that nothing happens until the AI is completely finished
Well, the UI is only updated when the Unity main thread is allowed to finish a frame.
You, however, block the main thread until all iterations are finished.
If it is okay for you to still block between each instantiation, then you could simply use a Coroutine and do something like
private void AIMakeMove()
{
    StartCoroutine(AIMakeMoveRoutine());
}

private IEnumerator AIMakeMoveRoutine()
{
    for (int currentDepth = 1; currentDepth < maxDepth + 1; currentDepth++)
    {
        SearchPosition(currentDepth);

        // This tells Unity to "interrupt" this routine here,
        // render the current frame, and continue from here in the next frame.
        yield return null;
    }
}

private void SearchPosition(int _currentDepth)
{
    score = Negamax(_currentDepth);
    GameObject printItem = Instantiate(printItemPrefab, printItemParent.transform);
    Debug.Log(_currentDepth);
}
This will finish a frame and start a new one (thus refresh the UI) after each finished iteration.
However, if this still blocks the rest of your application too much, you should additionally run the calculation asynchronously, e.g. using a Task, like this:
private void AIMakeMove()
{
    StartCoroutine(AIMakeMoveRoutine());
}

private IEnumerator AIMakeMoveRoutine()
{
    for (int currentDepth = 1; currentDepth < maxDepth + 1; currentDepth++)
    {
        // You can yield another IEnumerator -> executes it and waits for it to finish.
        yield return SearchPosition(currentDepth);

        // This tells Unity to "interrupt" this routine here,
        // render the current frame, and continue from here in the next frame.
        yield return null;
    }
}

private IEnumerator SearchPosition(int _currentDepth)
{
    // Run the Negamax calculation asynchronously in the background.
    var task = Task.Run(() => Negamax(_currentDepth));

    // Wait for the task to finish.
    while (!task.IsCompleted)
    {
        // Do nothing but skip frames, to allow the rest of the application to run smoothly.
        yield return null;
    }

    // If you do nothing else inside the loop, this could also be written as
    //yield return new WaitWhile(() => !task.IsCompleted);
    // or
    //yield return new WaitUntil(() => task.IsCompleted);

    // Since the task is already finished, it is safe / non-blocking to access the result now.
    score = task.Result;

    var printItem = Instantiate(printItemPrefab, printItemParent.transform);
    Debug.Log(_currentDepth);
}
Now this allows your application to continue with a normal frame-rate while in the background you do the heavy calculations and once in a while get a result back when an iteration is finished.
Try using a thread:
private void AIMakeMove()
{
    new System.Threading.Thread(() =>
    {
        for (int currentDepth = 1; currentDepth < maxDepth + 1; currentDepth++)
        {
            SearchPosition(currentDepth);
        }
    }).Start();
}

private void SearchPosition(int _currentDepth)
{
    // Search the position to the given depth
    score = Negamax(_currentDepth);

    // Print things PROBLEM HERE
    GameObject printItem = Instantiate(printItemPrefab, printItemParent.transform);
    Debug.Log(_currentDepth);
}

Is it a good idea to create a new thread every 10 frames in Unity?

private void RunEveryTenFrames(Color32[] pixels, int width, int height)
{
    var thread = new Thread(() =>
    {
        Perform super = new HeavyOperation();
        if (super != null)
        {
            Debug.Log("Result: " + super);
            ResultHandler.handle(super);
        }
    });
    thread.Start();
}
I'm running this function every 10 frames in Unity. Is this a bad idea? Also, when I try to add thread.Abort() inside the thread, it says thread is not defined and I get a "can't use a local variable before it's declared" error.
Is it a good idea to create a new thread every 10 frames in Unity?
No. 10 frames is too short an interval to be repeatedly creating a new Thread.
Creating a new Thread causes overhead each time. That's not bad when done once in a while; it is when done every 10 frames. Remember this is not every 10 seconds, it is every 10 frames.
Use the ThreadPool. By using ThreadPool.QueueUserWorkItem, you are re-using threads that already exist in the system instead of creating new ones each time.
Your new RunEveryTenFrames function with ThreadPool should look something like this:
private void RunEveryTenFrames(Color32[] pixels, int width, int height)
{
    // Prepare the parameters to send to the ThreadPool.
    Data data = new Data();
    data.pixels = pixels;
    data.width = width;
    data.height = height;

    ThreadPool.QueueUserWorkItem(new WaitCallback(ExtractFile), data);
}

private void ExtractFile(object a)
{
    // Retrieve the parameters.
    Data data = (Data)a;

    Perform super = new HeavyOperation();
    if (super != null)
    {
        Debug.Log("Result: " + super);
        ResultHandler.handle(super);
    }
}

public struct Data
{
    public Color32[] pixels;
    public int width;
    public int height;
}
If you ever need to call into Unity's API from this thread, see my other post on how to do that.
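The linked post isn't reproduced here, but the usual pattern is a small dispatcher that queues work for Unity's main thread. A minimal sketch of that idea, with all names hypothetical and not part of the original answer:
using System;
using System.Collections.Concurrent;
using UnityEngine;

// Hypothetical helper: worker threads enqueue actions, Update() drains them
// on Unity's main thread, where it is safe to touch the Unity API.
public class MainThreadDispatcher : MonoBehaviour
{
    private static readonly ConcurrentQueue<Action> _actions = new ConcurrentQueue<Action>();

    public static void Enqueue(Action action)
    {
        _actions.Enqueue(action);
    }

    private void Update()
    {
        while (_actions.TryDequeue(out Action action))
        {
            action();
        }
    }
}

// Usage from the ThreadPool callback above (sketch):
// MainThreadDispatcher.Enqueue(() => ResultHandler.handle(super));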

How to extract timestamps from each frame obtained by USB camera?

This scenario is common in real-time video processing, and I need timestamps to synchronize with other devices.
I have tried cv::VideoCapture, but it cannot extract the timestamps from the video stream.
So I have two questions here:
Does the video stream provided by a USB camera actually contain timestamp information?
If it does, what should I do to extract it? A C# solution is best, but C++ is OK.
Addition:
Using these two properties doesn't work:
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
It always gives the following result:
VIDEOIO ERROR: V4L2: getting property #1 is not supported
msecCounter = 0
frameNumber = -1
OpenCV's VideoCapture class is a very high level interface for retrieving frames from a camera, so it "hides" a lot of the details that are necessary to connect to the camera, retrieve frames from it, and decode those frames into a useful color space like BGR. This is nice because you don't have to worry about the details of grabbing frames, but the downside is that you don't have direct access to other data you might want, like the frame number or frame timestamp. That doesn't mean it's impossible to get the data you want, though!
Here's a sample frame grabbing loop that will get you what you want, loosely based on the example code from here. This is in C++.
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
// TODO: change the width, height, and capture FPS to your desired
// settings.
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(CAP_PROP_FPS, 30);
Mat frame;
long msecCounter = 0;
long frameNumber = 0;
for(;;)
{
// Instead of cap >> frame; we'll do something different.
//
// VideoCapture::grab() tells OpenCV to grab a frame from
// the camera, but to not worry about all the color conversion
// and processing to convert that frame into BGR.
//
// This means there's less processing overhead, so the time
// stamp will be more accurate because we are fetching it
// immediately after.
//
// grab() should also wait for the next frame to be available
// based on the capture FPS that is set, so it's okay to loop
// continuously over it.
if(cap.grab())
{
msecCounter = (long) cap.get(CAP_PROP_POS_MSEC);
frameNumber = (long) cap.get(CAP_PROP_POS_FRAMES);
// VideoCapture::retrieve color converts the image and places
// it in the Mat that you provide.
if(cap.retrieve(&frame))
{
// Pass the frame and parameters to your processing
// method.
ProcessFrame(&frame, msecCounter, frameNumber);
}
}
// TODO: Handle your loop termination condition here
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
void ProcessFrame(Mat& frame, long msecCounter, long frameNumber)
{
    // TODO: Make a copy of frame if you are going to process it
    // asynchronously or put it in a buffer or queue and then return
    // control from this function. This is because the reference Mat
    // being passed in is "owned" by the processing loop, and on each
    // iteration it will be destructed, so any references to it will be
    // invalid. Hence, if you do any work async, you need to copy frame.
    //
    // If all your processing happens synchronously in this function,
    // you don't need to make a copy first because the loop is waiting
    // for this function to return.

    // TODO: Your processing logic goes here.
}
If you're using C# and Emgu CV it will look a bit different. I haven't tested this code, but it should work or be very close to the solution.
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class Program
{
    [STAThread]
    static void Main()
    {
        VideoCapture cap = new VideoCapture(0);
        if(!cap.IsOpened)
        {
            return;
        }

        cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
        cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
        cap.SetCaptureProperty(CapProp.Fps, 30);

        Mat frame = new Mat();
        long msecCounter = 0;
        long frameNumber = 0;

        for(;;)
        {
            if(cap.Grab())
            {
                msecCounter = (long) cap.GetCaptureProperty(CapProp.PosMsec);
                frameNumber = (long) cap.GetCaptureProperty(CapProp.PosFrames);

                if(cap.Retrieve(frame))
                {
                    ProcessFrame(frame, msecCounter, frameNumber);
                }
            }

            // TODO: Determine when to quit the processing loop
        }
    }

    private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
    {
        // Again, copy frame here if you're going to queue the frame or
        // do any async processing on it.

        // TODO: Your processing code goes here.
    }
}
Emgu's VideoCapture implementation also allows for asynchronous Grab operations to be done for you, and notifications when a grabbed frame is ready to be used with Retrieve. That looks like this:
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class Program
{
    private static Mat s_frame;
    private static VideoCapture s_cap;
    private static object s_retrieveLock = new object();

    [STAThread]
    static void Main()
    {
        s_cap = new VideoCapture(0);
        if(!s_cap.IsOpened)
        {
            return;
        }

        s_frame = new Mat();

        s_cap.SetCaptureProperty(CapProp.FrameWidth, 1920);
        s_cap.SetCaptureProperty(CapProp.FrameHeight, 1080);
        s_cap.SetCaptureProperty(CapProp.Fps, 30);

        s_cap.ImageGrabbed += FrameIsReady;
        s_cap.Start();

        // TODO: Wait here until you're done with the capture process,
        // the same way you'd determine when to exit the for loop in the
        // above example.

        s_cap.Stop();
        s_cap.ImageGrabbed -= FrameIsReady;
    }

    private static void FrameIsReady(object sender, EventArgs e)
    {
        // This function is being called from VideoCapture's thread,
        // so if you rework this code to run with a UI, be very careful
        // about updating Controls here because that needs to be Invoke'd
        // back to the UI thread.

        // I used a lock here to be extra careful and protect against
        // re-entrancy, but this may not be necessary if Emgu's
        // VideoCapture thread blocks for completion of this event
        // handler.
        lock(s_retrieveLock)
        {
            long msecCounter = (long) s_cap.GetCaptureProperty(CapProp.PosMsec);
            long frameNumber = (long) s_cap.GetCaptureProperty(CapProp.PosFrames);

            if(s_cap.Retrieve(s_frame))
            {
                ProcessFrame(s_frame, msecCounter, frameNumber);
            }
        }
    }

    private static void ProcessFrame(Mat frame, long msecCounter, long frameNumber)
    {
        // Again, copy frame here if you're going to queue the frame or
        // do any async processing on it.

        // TODO: Your processing code goes here.
    }
}

C# Events do not trigger while GUI Thread is busy (drawing). How to solve?

Short intro to my program (Solved)
My program takes an image and reveals it tile after tile (a guessing game).
I decided to fade in each tile manually.
Since I need the user to be able to interact with the GUI, I do the calculations and thread-blocking things like Thread.Sleep on another thread. I also added an OnClick event to a PictureBox (which overlays the image to reveal), so that if someone has guessed the image, the user can click on the PictureBox to fully reveal it. (For fading, I clip a region of the PictureBox and then clear it with a color. The color's alpha value decreases step by step until it is fully transparent; then I go to the next region.)
Incoming problem
After each iteration I need to refresh the PictureBox so that it displays the new "situation". Therefore I have to invoke the action on the GUI thread.
Now if the time between refreshes becomes too short, say 10 ms, the GUI seems to be so busy refreshing/drawing the image that it doesn't fire my OnClick event anymore.
The reveal function
public void Reveal(int step, int intervalFading, int intervalNextTile)
{
    StopThread = false; // Changed by the OnClick event
    Graphics grx = Graphics.FromImage(Overlay.Image);
    step = 255 / step;

    foreach (RectangleF R in AreasShuffled)
    {
        grx.Clip = new Region(R);

        for (int i = 255; i >= 0; i -= step) // Fading-out loop
        {
            Thread.Sleep(intervalFading); // If intervalFading < 15 the GUI is too busy
            if (StopThread) // Condition if someone guessed correctly
            {
                grx.ResetClip();
                grx.Clear(Color.FromArgb(0, 0, 0, 0)); // Revealing the image
                ParentControl.BeginInvoke((Action)(() => Overlay.Refresh()));
                grx.Dispose();
                return;
            }
            grx.Clear(Color.FromArgb(i, 0, 0, 0)); // Clearing the region
            ParentControl.BeginInvoke((Action)(() => Overlay.Refresh())); // Redrawing the image
        }

        grx.Clear(Color.FromArgb(0, 0, 0, 0));
        ParentControl.BeginInvoke((Action)(() => Overlay.Refresh()));
        Thread.Sleep(intervalNextTile);
    }

    grx.ResetClip();
    grx.Clear(Color.FromArgb(0, 0, 0, 0));
    ParentControl.BeginInvoke((Action)(() => Overlay.Refresh()));
    grx.Dispose();
}
Solution
As advised, I used async tasks. Here is the updated function. (Yeah, I didn't update the grx.Dispose() ^^)
public async Task Reveal(int step, int intervalFading, int intervalNextTile)
{
    taskIsRunning = true;
    stopTask = false; // Changed by the OnClick event
    Graphics grx = Graphics.FromImage(Overlay.Image);
    step = 255 / step;

    foreach (RectangleF R in AreasShuffled)
    {
        grx.Clip = new Region(R);

        for (int i = 255; i >= 0; i -= step)
        {
            await Task.Delay(intervalFading);
            if (stopTask)
            {
                grx.ResetClip();
                grx.Clear(Color.FromArgb(0, 0, 0, 0));
                Overlay.Refresh();
                grx.Dispose();
                taskIsRunning = false;
                return;
            }
            grx.Clear(Color.FromArgb(i, 0, 0, 0));
            Overlay.Refresh();
        }

        grx.Clear(Color.FromArgb(0, 0, 0, 0));
        Overlay.Refresh();
        await Task.Delay(intervalNextTile);
    }

    grx.ResetClip();
    grx.Clear(Color.FromArgb(0, 0, 0, 0));
    Overlay.Refresh();
    grx.Dispose();
    taskIsRunning = false;
}
and the calling function that checks whether the task is running or not
private void pictureBoxOverlay_Click(object sender, EventArgs e)
{
    if (UCM != null && UCM.taskIsRunning) // If it is running, the function is notified
    {                                     // and reveals the image.
        UCM.stopTask = true;
    }
    else // Makes sure that the user has to click again to start the next image
    {
        if (index < Images.Count - 1)
        {
            PrepareNextImage();
            UCM.Reveal(Properties.Settings.Default.steps,
                       Properties.Settings.Default.fadeInterval,
                       Properties.Settings.Default.nextTileInterval);
        }
        else
        {
            MessageBox.Show("End of presentation.");
        }
    }
}
Thanks for your help ;)
There is a short answer here: don't call Thread.Sleep on the UI thread if you expect your UI to be responsive.
Update
It appears that you are not running your animation code on the UI thread. Good stuff! So what could be the problem? I suspect the 4 calls to BeginInvoke many times per second are flooding the application message pump with invoke events and delaying the GUI updates while it services them.
Fix this by reducing the number of invokes: do all your updates in a single invoke per interval.
This short example invokes back to the calling context only once per interval. You should call it from the UI thread.
async Task Animate(Control control, int interval)
{
    while (true)
    {
        // This line causes the method to pause by queueing
        // everything after the await, similarly to BeginInvoke.
        await Task.Delay(interval);

        // All of this still happens on the UI thread:
        // increment control properties here.

        // Check to see if the animation should end.
        if (endStateIsMet) // placeholder for your end condition
        {
            return;
        }
    }
}
As a side note - you are calling grx.Dispose a few times. It may be better to wrap the whole code block in using(grx){ }. This still works with async! How? Darkest magicks.
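For illustration, here is roughly what that side note might look like applied to the updated Reveal above. This is only a sketch reusing the names from the question; the point is that the using block disposes grx on every exit path, including the early return.
public async Task Reveal(int step, int intervalFading, int intervalNextTile)
{
    taskIsRunning = true;
    stopTask = false;

    // using guarantees grx.Dispose() on every exit path, even an early return.
    using (Graphics grx = Graphics.FromImage(Overlay.Image))
    {
        step = 255 / step;
        foreach (RectangleF R in AreasShuffled)
        {
            grx.Clip = new Region(R);
            for (int i = 255; i >= 0; i -= step)
            {
                await Task.Delay(intervalFading);
                if (stopTask)
                {
                    grx.ResetClip();
                    grx.Clear(Color.FromArgb(0, 0, 0, 0));
                    Overlay.Refresh();
                    taskIsRunning = false;
                    return; // grx is still disposed here
                }
                grx.Clear(Color.FromArgb(i, 0, 0, 0));
                Overlay.Refresh();
            }
            grx.Clear(Color.FromArgb(0, 0, 0, 0));
            Overlay.Refresh();
            await Task.Delay(intervalNextTile);
        }
        grx.ResetClip();
        grx.Clear(Color.FromArgb(0, 0, 0, 0));
        Overlay.Refresh();
    }
    taskIsRunning = false;
}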

parallel image processing artifacts

I capture images from a webcam, do some heavy processing on them, and then show the result. To keep the framerate high, I want the processing of different frames to run in parallel.
So I have a 'Producer', which captures the images and adds them to the 'inQueue'; it also takes an image from the 'outQueue' and displays it:
public class Producer
{
    Capture capture;
    Queue<Image<Bgr, Byte>> inQueue;
    Queue<Image<Bgr, Byte>> outQueue;
    Object lockObject;
    Emgu.CV.UI.ImageBox screen;
    public int frameCounter = 0;

    public Producer(Emgu.CV.UI.ImageBox screen, Capture capture,
                    Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue,
                    Object lockObject)
    {
        this.screen = screen;
        this.capture = capture;
        this.inQueue = inQueue;
        this.outQueue = outQueue;
        this.lockObject = lockObject;
    }

    public void produce()
    {
        while (true)
        {
            lock (lockObject)
            {
                inQueue.Enqueue(capture.QueryFrame());
                if (inQueue.Count == 1)
                {
                    Monitor.PulseAll(lockObject);
                }
                if (outQueue.Count > 0)
                {
                    screen.Image = outQueue.Dequeue();
                }
            }
            frameCounter++;
        }
    }
}
There are different 'Consumers' which take an image from the inQueue, do some processing, and add it to the outQueue:
public class Consumer
{
    Queue<Image<Bgr, Byte>> inQueue;
    Queue<Image<Bgr, Byte>> outQueue;
    Object lockObject;
    string name;
    Image<Bgr, Byte> image;

    public Consumer(Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue,
                    Object lockObject, string name)
    {
        this.inQueue = inQueue;
        this.outQueue = outQueue;
        this.lockObject = lockObject;
        this.name = name;
    }

    public void consume()
    {
        while (true)
        {
            lock (lockObject)
            {
                if (inQueue.Count == 0)
                {
                    Monitor.Wait(lockObject);
                    continue;
                }
                image = inQueue.Dequeue();
            }

            // Do some heavy processing with the image

            lock (lockObject)
            {
                outQueue.Enqueue(image);
            }
        }
    }
}
The rest of the important code is this section:
private void Form1_Load(object sender, EventArgs e)
{
    Consumer[] c = new Consumer[consumerCount];
    Thread[] t = new Thread[consumerCount];

    Object lockObj = new object();

    Queue<Image<Bgr, Byte>> inQueue = new Queue<Image<Bgr, Byte>>();
    Queue<Image<Bgr, Byte>> outQueue = new Queue<Image<Bgr, Byte>>();

    p = new Producer(screen1, capture, inQueue, outQueue, lockObj);

    for (int i = 0; i < consumerCount; i++)
    {
        c[i] = new Consumer(inQueue, outQueue, lockObj, "c_" + Convert.ToString(i));
    }

    for (int i = 0; i < consumerCount; i++)
    {
        t[i] = new Thread(c[i].consume);
        t[i].Start();
    }

    Thread pt = new Thread(p.produce);
    pt.Start();
}
The parallelisation actually works fine; I get a linear speed increase with each added thread (up to a certain point, of course). The problem is that I get artifacts in the output, even when running only one thread. The artifacts look like part of the picture is not in the right place.
Example of the artifact (this is without any processing to keep it clear, but the effect is the same)
Any ideas what causes this?
Thanks
Disclaimer: This post isn't supposed to fully describe an answer, but instead give some hints on why the artifact is being shown.
A quick analysis shows that the artifact is, in fact, a partial, vertically mirrored snippet of a frame. I copied it, mirrored it, placed it back over the image, and added an awful marker to show its placement:
Two things immediately come to attention:
The artifact is roughly positioned on the 'correct' place it would be, only that the position is also vertically mirrored;
The image is slightly different, indicating that it may belong to a different frame.
It's been a while since I played around with raw capture and ran into a similar issue, but I remember that depending on how the driver is implemented (or set up - this particular issue happened when setting a specific imaging device for interlaced capture) it may fill its framebuffer alternating between 'top-down' and 'bottom-up' scans - as soon as the frame is full, the 'cursor' reverts direction.
It seems to me that you're running into a race condition/buffer underrun situation, where the transfer from the framebuffer to your application is happening before the full frame is transferred by the device.
In that case, you'd receive a partial image, and the area still not refreshed would show a bit of the previously transferred frame.
If I'd have to bet, I'd say that the artifact may appear on sequential order, not on the same position but 'fluctuating' on a specific direction (up or down), but always as a mirrored bit.
Well, I think the problem is here. This section of code does not guarantee exclusive access by a single thread across the two queues: an image dequeued from inQueue is not necessarily enqueued into outQueue in the same order.
while (true)
{
    lock (lockObject)
    {
        if (inQueue.Count == 0)
        {
            Monitor.Wait(lockObject);
            continue;
        }
        image = inQueue.Dequeue();
    }

    // Do some heavy processing with the image

    lock (lockObject)
    {
        outQueue.Enqueue(image);
    }
}
Similar to @OnoSendai, I'm not trying to solve the exact problem as stated; I would have to write an app and I just don't have the time. But the two things I would change right away are to use the ConcurrentQueue class so that you have thread safety, and to use the Task library to create parallel tasks on different processor cores. These are found in the System.Collections.Concurrent and System.Threading.Tasks namespaces.
Also, vertically flipping a chunk like that looks like more than an artifact to me. If it also happens when executing in a single thread, as you mentioned, then I would definitely re-focus on the "heavy processing" part of the equation.
Good luck! Take care.
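As a rough sketch of the ConcurrentQueue suggestion (assuming the same Image<Bgr, Byte> element type and that a short sleep is acceptable when the input queue is empty), the consumer loop could drop the explicit locking entirely:
using System.Collections.Concurrent;
using System.Threading;
using Emgu.CV;
using Emgu.CV.Structure;

// Sketch only: the plain Queue + lock + Monitor pattern replaced by
// thread-safe ConcurrentQueue instances. No explicit locking is needed.
public class LockFreeConsumer
{
    private readonly ConcurrentQueue<Image<Bgr, byte>> inQueue;
    private readonly ConcurrentQueue<Image<Bgr, byte>> outQueue;

    public LockFreeConsumer(ConcurrentQueue<Image<Bgr, byte>> inQueue,
                            ConcurrentQueue<Image<Bgr, byte>> outQueue)
    {
        this.inQueue = inQueue;
        this.outQueue = outQueue;
    }

    public void Consume()
    {
        while (true)
        {
            if (inQueue.TryDequeue(out Image<Bgr, byte> image))
            {
                // Do some heavy processing with the image here.
                outQueue.Enqueue(image);
            }
            else
            {
                // Nothing queued yet; yield briefly instead of busy-spinning.
                Thread.Sleep(1);
            }
        }
    }
}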
You may have two problems:
1) Parallelism doesn't ensure that images are added to the out queue in the right order. I imagine that displaying image 8 before images 6 and 7 could produce some artifacts. In the consumer thread, you have to wait until the previous consumer has posted its image to the out queue before posting the next one. Tasks can help greatly with that because of their inherent synchronisation mechanism; see the sketch after this list.
2) You may also have problems in the rendering code.
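To illustrate point 1), one way to restore display order without making consumers wait on each other is to tag each frame with a sequence number when it is captured, and only display frames once the next expected number is ready. A rough sketch with hypothetical names, not taken from the question's code:
using System.Collections.Concurrent;
using Emgu.CV;
using Emgu.CV.Structure;

// Frames are tagged with a sequence number at capture time, processed in
// parallel, and parked in a dictionary until the next expected number
// arrives, so display order matches capture order.
public class OrderedDisplay
{
    private readonly ConcurrentDictionary<long, Image<Bgr, byte>> ready =
        new ConcurrentDictionary<long, Image<Bgr, byte>>();
    private long nextToShow = 0;

    // Consumers call this when they finish processing frame number 'seq'.
    public void Completed(long seq, Image<Bgr, byte> frame)
    {
        ready[seq] = frame;
    }

    // The producer/UI loop calls this each iteration and displays whatever
    // contiguous frames have become available, in order.
    public void DisplayPending(Emgu.CV.UI.ImageBox screen)
    {
        while (ready.TryRemove(nextToShow, out Image<Bgr, byte> frame))
        {
            screen.Image = frame;
            nextToShow++;
        }
    }
}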
