Can I speed up callbacks from C++ Dll to C#? - c#

We've implemented our main code functionality in a C++ Dll then written a C# UI on top. This works fine most of the time, but in areas where we want the UI to report the progress of a function in C++ we're having some performance problems.
We pass a pointer to a C# function down to the Dll to use for progress updates. When it's called we use InvokeRequired and Invoke() to make sure the callback is thread-safe. Measured from the C++ side, this can take anything between 16ms and 180ms. Is there any way of reducing the time this takes?
One of these callbacks passes the location of a line to be drawn on the screen. This drawing is currently very slow - I assume the invoked functions are queuing up and taking time to be drawn in C#. I can see two ways of dealing with this:
1. Change the callback to send a list of lines and allow the lines to queue up in C++ whilst waiting for the previous call to C# to complete.
2. Add the lines to a queue in C# (preferably without calling Invoke) then draw all lines that are available at once.
Any suggestions on how to do this, whether both options are required, or alternative methods?
Thanks for any and all help.

You can use BeginInvoke instead of Invoke. Invoke blocks until the delegate has finished running on the UI thread, while BeginInvoke just queues it and returns immediately, so the calling thread can continue in parallel.
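For illustration, a minimal sketch of that change in a WinForms callback (the control and handler names here are hypothetical):

// Progress callback handed to the native side; marshalled to the UI thread
// without blocking the caller.
private void OnNativeProgress(int percent)
{
    if (InvokeRequired)
    {
        // BeginInvoke queues the call and returns immediately, so the
        // C++ side is not held up waiting for the UI thread.
        BeginInvoke(new Action<int>(OnNativeProgress), percent);
        return;
    }
    progressBar.Value = percent; // assumes a ProgressBar named progressBar
}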

Your problem is not the callback but likely the Invoke. Invoke is a potentially expensive operation: if there is a thread switch involved, it WILL take time. A lot. You are also putting the drawing operations on the same thread, so the callback will block until those are finished.
What you can do:
BeginInvoke - basically the callback returns early, and the drawing is done later on the UI thread.
Reduce the number of invokes by batching your requests. This can be done OUTSIDE the C++ DLL - have the callback put the coordinates into a queue, then have a separate thread drain the queue and call BeginInvoke with a whole batch (a sketch follows at the end of this answer).
Basically, your single-threaded UI and the required thread switch via the dispatcher are what kill you, and that happens entirely on the C# side.
In the end, the question is somewhat misdirected: C++ vs. C# has nothing to do with the problem; you would have the same issue in C# only. You have an API making callbacks into the UI that needs to switch threads (Invoke) and do UI operations, and this leads to the callback taking more time than you want. Replace C++ with Smalltalk, Assembler, C#, or Visual Basic and the problem stays 100% the same.
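As a rough sketch of the batching idea (all names here are hypothetical; uses System.Collections.Concurrent and System.Drawing): the native callback only enqueues, and a System.Windows.Forms.Timer on the UI thread draws whatever has accumulated in one pass.

private readonly ConcurrentQueue<Point> pendingLines = new ConcurrentQueue<Point>();

// Called from the C++ side on a worker thread: cheap, no Invoke at all.
private void OnLineFromNative(int x, int y)
{
    pendingLines.Enqueue(new Point(x, y));
}

// WinForms Timer tick, already on the UI thread (e.g. every 30 ms).
private void drawTimer_Tick(object sender, EventArgs e)
{
    var batch = new List<Point>();
    while (pendingLines.TryDequeue(out Point p))
        batch.Add(p);
    if (batch.Count > 0)
        DrawLines(batch); // your existing drawing code, applied to the whole batch
}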

As far as I understand your question, there is only one line that should be drawn, but its coordinates are frequently generated in the unmanaged part and sent to C# to update the graphical representation? Consider enabling the drawing method to cancel itself, so that when - while drawing a line - new coordinates arrive, it discards the current line and just draws the new coordinates. That way you may get rid of the need to implement any queueing at all.
Edit: I lean toward the suggestion that C# should instead query all necessary data from the C++ part. It sounds like you are calling back into .NET very often in order to drive some visual output, but that pacing should be controlled from .NET (which knows much better how often an update is possible at all). So having C# pull all the data from C++ in a user-defined structure may help.
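A minimal sketch of that "keep only the latest" idea, assuming a hypothetical immutable LineCoords class holding the two endpoints and a panel named drawPanel:

private volatile LineCoords latest;   // overwritten by the native callback

// Called from the unmanaged side on a worker thread.
private void OnLineFromNative(LineCoords coords)
{
    latest = coords;                                          // replace, don't queue
    drawPanel.BeginInvoke(new Action(drawPanel.Invalidate));  // request a repaint, don't wait
}

private void drawPanel_Paint(object sender, PaintEventArgs e)
{
    var line = latest;               // whatever is newest when the paint actually runs
    if (line != null)
        e.Graphics.DrawLine(Pens.Black, line.Start, line.End);
}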

Related

Is Application.DoEvents() a form of Multitasking?

I am pretty sure Application.DoEvents() in Windows Forms is a very early, very primitive, Windows Forms-only form of multitasking. It has all the telltale signs and mechanics:
Pausing execution of the calling Event.
Making the rest of said Event a continuation to be run later.
Allowing other events/processes to run - just with some extra issues, because the multitasking is implemented via the event queue, possibly even as a recursive call into the queue.
But I just ran into a person who insists it has "nothing to do with Multitasking", which I cannot reconcile with my understanding of the function or of multitasking.
Note: I explicitly consider multithreading only one implementation of multitasking. It is clear that DoEvents() is not a form of multithreading, as we all know how poorly that one works in GUI environments.
I am pretty sure it is a very early, very primitive, Windows Forms only form of Multitasking
You are pretty close to correct on all counts except for your conjecture that it is for WinForms only. "DoEvents" precedes WinForms; it was present in Visual Basic long before WinForms was invented, and "pump the message queue" obviously precedes VB also. And it was a bad idea and easily abused then too.
Making the rest of said Event a continuation to be run later.
DoEvents doesn't really make anything into a continuation the way that say, await does. Whatever event is currently "in flight" when DoEvents is called has its state on the stack, and the stack is the implementation of continuation in this case. This is another point against DoEvents -- unlike await, it eats stack, and therefore can contribute to an overflow.
I just ran into a poster that insists it has "nothing to do with Multitasking".
You should ask the author for clarification then, since that certainly sounds wrong.
Well, it's a form of cooperative multitasking, meaning you interrupt your own task. You do multiple tasks, but never two at the same time. It's not about using multiple cores of the CPU, but a way to control multiple activities inside your program.
A common example is giving the program a chance to handle the user's mouse movement while doing a lengthy operation, i.e. running something that can be considered a "batch" job.
Normally you don't have to care about DoEvents, but if you know you have a procedure running for more than a second, you should call it manually. You thereby pass control to another method: you stop your own code, let other code run, and then you continue with your own code.
So it's never asynchronous, but still some kind of "multitasking".
It's more of a control structure; the important thing is that you do not know what's going on inside. You call it "just in case" - somebody else might need the CPU for a millisecond.
There is no external task scheduler interrupting your code and doing a context switch; you have to "behave" by interrupting your code yourself if you do something lengthy. It is a convention that you do only "small" things in event handlers and return control to Windows as soon as possible, either by finishing the method or by calling DoEvents.
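Purely as an illustration of that convention (not a recommendation over proper threading), a lengthy UI-thread loop that yields every so often might look like this; the types and method names are placeholders:

private void ProcessAllRecords(IList<Record> records)   // Record is a placeholder type
{
    for (int i = 0; i < records.Count; i++)
    {
        ProcessOne(records[i]);          // the actual lengthy work

        if (i % 100 == 0)
            Application.DoEvents();      // yield so paint/mouse messages get handled
    }
}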

Avoiding main thread lockup when delegating to GUI

I have a RichTextBox that the Console is redirected to. The Console Redirector delegates the AppendText() call each time the console is written to. However, the GUI locks up while the text is being appended, and since the log is written to in periods of rapid succession, the main thread/GUI locks up until the text is no longer being appended. Is there a way to allow control of the form while the log is being appended from another thread?
No, you cannot safely update the UI from a non-UI thread.
If you have other UI work that you want done you'll need to have your console redirect function simply spend less time updating the UI. Don't have it update the UI with everything all the time. Have it buffer the data and write to the UI less frequently, or throttle the console input if there is simply too much data to display everything (while also doing other necessary work).
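A minimal sketch of that buffering approach, assuming a hypothetical redirector hook, a System.Windows.Forms.Timer, and a RichTextBox named richTextBox1:

private readonly StringBuilder consoleBuffer = new StringBuilder();
private readonly object bufferLock = new object();

// Called by the console redirector, possibly on a worker thread.
public void OnConsoleText(string text)
{
    lock (bufferLock) { consoleBuffer.Append(text); }
}

// Timer tick (e.g. every 200 ms), runs on the UI thread.
private void flushTimer_Tick(object sender, EventArgs e)
{
    string chunk;
    lock (bufferLock)
    {
        if (consoleBuffer.Length == 0) return;
        chunk = consoleBuffer.ToString();
        consoleBuffer.Clear();
    }
    richTextBox1.AppendText(chunk);   // one append per tick instead of one per write
}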
All in all, no, you cannot get away from the "lockedness". The UI thread will be busy while it is working, and when you dispatch to it you are in essence saying: I want to run this piece of code on the UI thread.
To alleviate some of the "lockedness" you need to try to be "smart" about it.
Update as rarely as possible. To manage this, use some sort of "buffer" for the updates: possibly make a "fake" UI class (i.e. a model/DTO), fill it with data from your thread, and flush it out to the UI when needed/on demand/on completion.
In the delegates, do as little as possible. DO NOT perform any form of logic there, as that is work that will lock the UI for longer than needed.
I see that you are using WinForms; if this is a project where you are in control, then go for WPF.

How to see how much processing time a C# Windows Forms application needs?

I have a C# Windows Forms application which does some camera control and computer vision. For all the parts that take longer to calculate I used separate threads. But there are still some parts which are in the callback functions of the GUI. As I understand it, all these callback functions are executed on the same thread. Is there a way to see how much of the time this thread is working or idle? What percentage of idle time is needed so that the GUI is still responsive?
It's recommended that you shouldn't block the UI thread for more than 50ms, otherwise it will affect the UI responsiveness. I.e., two UI callbacks queued with Form.BeginInvoke, each taking ~50ms to complete, may introduce some unpleasant UI experience to the user.
It doesn't make sense to update the UI more often than the user can react to it (i.e., ~24 frames per second). So, you should throttle the UI thread callbacks and give user input events priority.
I recently posted an example of how it can possibly be done:
https://stackoverflow.com/a/21654436/1768303
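A rough sketch of such throttling from a worker thread (the progressBar name is hypothetical): drop updates that arrive less than ~40 ms after the previous one, which caps the UI at roughly 25 updates per second.

private int lastUpdateTick;

private void ReportProgressThrottled(int percent)
{
    int now = Environment.TickCount;
    if (now - lastUpdateTick < 40)
        return;                          // too soon since the last update, drop it
    lastUpdateTick = now;
    BeginInvoke(new Action(() => progressBar.Value = percent));
}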
For simple tasks you could use a stopwatch and measure the time manually. However, I think you'll want to look into using a performance profiler.
Also - there are few situations in which your GUI needs that much heavy processing. In most cases the problem comes from putting too many calculations in event handlers instead of implementing them somewhere else and then updating the form when finished. It's less a single/multi-threading problem and more one of using the available events properly.
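For the quick manual measurement mentioned above, a sketch using System.Diagnostics.Stopwatch inside a (hypothetical) event handler:

private void captureButton_Click(object sender, EventArgs e)
{
    var sw = Stopwatch.StartNew();

    ProcessCameraFrame();   // placeholder for the work currently done in the handler

    sw.Stop();
    Debug.WriteLine("captureButton_Click took " + sw.ElapsedMilliseconds + " ms");
    // Anything consistently above ~50 ms is a candidate for a worker thread.
}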

Use of Application.DoEvents()

Can Application.DoEvents() be used in C#?
Is this function a way to allow the GUI to catch up with the rest of the app, in much the same way that VB6's DoEvents does?
Hmya, the enduring mystique of DoEvents(). There's been an enormous amount of backlash against it, but nobody ever really explains why it is "bad". The same kind of wisdom as "don't mutate a struct". Erm, why do the runtime and the language support mutating a struct if that's so bad? Same reason: you shoot yourself in the foot if you don't do it right. Easily. And doing it right requires knowing exactly what it does, which in the case of DoEvents() is definitely not easy to grok.
Right off the bat: almost any Windows Forms program actually contains a call to DoEvents(). It is cleverly disguised, however, with a different name: ShowDialog(). It is DoEvents() that allows a dialog to be modal without it freezing the rest of the windows in the application.
Most programmers want to use DoEvents to stop their user interface from freezing when they write their own modal loop. It certainly does that; it dispatches Windows messages and gets any paint requests delivered. The problem however is that it isn't selective. It not only dispatches paint messages, it delivers everything else as well.
And there's a set of notifications that cause trouble. They come from about 3 feet in front of the monitor. The user could for example close the main window while the loop that calls DoEvents() is running. That works, user interface is gone. But your code didn't stop, it is still executing the loop. That's bad. Very, very bad.
There's more: The user could click the same menu item or button that causes the same loop to get started. Now you have two nested loops executing DoEvents(), the previous loop is suspended and the new loop is starting from scratch. That could work, but boy the odds are slim. Especially when the nested loop ends and the suspended one resumes, trying to finish a job that was already completed. If that doesn't bomb with an exception then surely the data is scrambled all to hell.
Back to ShowDialog(). It executes DoEvents(), but do note that it does something else. It disables all the windows in the application, other than the dialog. Now that 3-feet problem is solved, the user cannot do anything to mess up the logic. Both the close-the-window and start-the-job-again failure modes are solved. Or to put it another way, there is no way for the user to make your program run code in a different order. It will execute predictably, just like it did when you tested your code. It makes dialogs extremely annoying; who doesn't hate having a dialog active and not being able to copy and paste something from another window? But that's the price.
Which is what it takes to use DoEvents safely in your code. Setting the Enabled property of all your forms to false is a quick and efficient way to avoid problems. Of course, no programmer ever actually likes doing this. And doesn't. Which is why you shouldn't use DoEvents(). You should use threads. Even though they hand you a complete arsenal of ways to shoot your foot in colorful and inscrutable ways. But with the advantage that you only shoot your own foot; it won't (typically) let the user shoot hers.
The next versions of C# and VB.NET will provide a different gun with the new await and async keywords. Inspired in small part by the trouble caused by DoEvents and threads but in large part by WinRT's API design that requires you to keep your UI updated while an asynchronous operation is taking place. Like reading from a file.
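If you do go down the DoEvents road anyway, a bare-bones sketch of the precaution described above - disable the form so the user cannot re-enter the loop or close the window while messages are being pumped ('job' here is a placeholder for whatever long-running work you are waiting on):

this.Enabled = false;                // the ShowDialog-style precaution
try
{
    while (!job.IsCompleted)         // 'job' is a placeholder for the pending work
    {
        Application.DoEvents();      // keep paint and other messages flowing
        Thread.Sleep(10);
    }
}
finally
{
    this.Enabled = true;             // re-enable even if the work throws
}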
It can be, but it's a hack.
See Is DoEvents Evil?.
Direct from the MSDN page that thedev referenced:
Calling this method causes the current thread to be suspended while all waiting window messages are processed. If a message causes an event to be triggered, then other areas of your application code may execute. This can cause your application to exhibit unexpected behaviors that are difficult to debug. If you perform operations or computations that take a long time, it is often preferable to perform those operations on a new thread. For more information about asynchronous programming, see Asynchronous Programming Overview.
So Microsoft cautions against its use.
Also, I consider it a hack because its behavior is unpredictable and side effect prone (this comes from experience trying to use DoEvents instead of spinning up a new thread or using background worker).
There is no machismo here - if it worked as a robust solution I would be all over it. However, trying to use DoEvents in .NET has caused me nothing but pain.
Yes, there is a static DoEvents method in the Application class in the System.Windows.Forms namespace. System.Windows.Forms.Application.DoEvents() can be used to process the messages waiting in the queue on the UI thread when performing a long-running task in the UI thread. This has the benefit of making the UI seem more responsive and not "locked up" while a long task is running. However, this is almost always NOT the best way to do things.
According to Microsoft calling DoEvents "...causes the current thread to be suspended while all waiting window messages are processed." If an event is triggered there is a potential for unexpected and intermittent bugs that are difficult to track down. If you have an extensive task it is far better to do it in a separate thread. Running long tasks in a separate thread allows them to be processed without interfering with the UI continuing to run smoothly. Look here for more details.
Here is an example of how to use DoEvents; note that Microsoft also provides a caution against using it.
From my experience I would advise great caution with using DoEvents in .NET. I experienced some very strange results when using DoEvents in a TabControl containing DataGridViews. On the other hand, if all you're dealing with is a small form with a progress bar then it might be OK.
The bottom line is: if you are going to use DoEvents, then you need to test it thoroughly before deploying your application.
Yes.
However, if you need to use Application.DoEvents, this is mostly an indication of a bad application design. Perhaps you'd like to do some work in a separate thread instead?
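On current C# (the answers above predate async/await), one common alternative to pumping messages yourself is to push the work onto a worker thread and marshal progress back with IProgress<T>; a rough sketch, with placeholder names:

private async void startButton_Click(object sender, EventArgs e)
{
    // Progress<T> captures the UI SynchronizationContext, so the lambda
    // below runs on the UI thread.
    var progress = new Progress<int>(p => progressBar.Value = p);

    await Task.Run(() =>
    {
        for (int i = 0; i <= 100; i++)
        {
            DoChunkOfWork(i);                      // placeholder for the real work
            ((IProgress<int>)progress).Report(i);  // safe to call from the worker
        }
    });

    MessageBox.Show("Done");   // back on the UI thread after the await
}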
I saw jheriko's comment above and initially agreed that I couldn't find a way to avoid using DoEvents if you end up spinning your main UI thread waiting for a long-running asynchronous piece of code on another thread to complete. But from Matthias's answer, a simple Refresh of a small panel on my UI can replace the DoEvents (and avoid a nasty side effect).
More detail on my case ...
I was doing the following (as suggested here) to ensure that a progress bar type splash screen (How to display a "loading" overlay...) updated during a long running SQL command:
IAsyncResult asyncResult = sqlCmd.BeginExecuteNonQuery();
while (!asyncResult.IsCompleted) // UI thread needs to wait for the async SQL command to return
{
    System.Threading.Thread.Sleep(10);
    Application.DoEvents(); // to make the UI responsive
}
The bad: For me calling DoEvents meant that mouse clicks were sometimes firing on forms behind my splash screen, even if I made it TopMost.
The good/answer: Replace the DoEvents line with a simple Refresh call to a small panel in the centre of my splash screen, FormSplash.Panel1.Refresh(). The UI updates nicely and the DoEvents weirdness others have warned of was gone.
I've seen many commercial applications, using the "DoEvents-Hack". Especially when rendering comes into play, I often see this:
while (running)
{
    Render();
    Application.DoEvents();
}
They all know about the evil of that method. However, they use the hack, because they don't know any other solution. Here are some approaches taken from a blog post by Tom Miller:
Set your form to have all drawing occur in WmPaint, and do your rendering there. Before the end of the OnPaint method, make sure you call this.Invalidate(); this will cause the OnPaint method to be fired again immediately (a sketch follows after this list).
P/Invoke into the Win32 API and call PeekMessage/TranslateMessage/DispatchMessage. (DoEvents actually does something similar, but you can do it without the extra allocations.)
Write your own forms class that is a small wrapper around CreateWindowEx, and give yourself complete control over the message loop.
Decide that the DoEvents method works fine for you and stick with it.
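As a sketch of the first approach in that list (render in OnPaint, then invalidate), assuming a Form subclass with a hypothetical Render method:

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);

    Render(e.Graphics);   // your rendering code (placeholder)

    this.Invalidate();    // request the next frame; the new WM_PAINT arrives
                          // through the normal message loop, no DoEvents needed
}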
Check out the MSDN Documentation for the Application.DoEvents method.
DoEvents does allow the user to click around or type and trigger other events, and background threads are a better approach.
However, there are still cases where you may run into issues that require flushing event messages. I ran into a problem where the RichTextBox control was ignoring the ScrollToCaret() method when the control had messages in queue to process.
The following code blocks all user input while executing DoEvents:
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace Integrative.Desktop.Common
{
    static class NativeMethods
    {
        #region Block input

        [DllImport("user32.dll", EntryPoint = "BlockInput")]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool BlockInput([MarshalAs(UnmanagedType.Bool)] bool fBlockIt);

        public static void HoldUser()
        {
            BlockInput(true);
        }

        public static void ReleaseUser()
        {
            BlockInput(false);
        }

        public static void DoEventsBlockingInput()
        {
            HoldUser();
            Application.DoEvents();
            ReleaseUser();
        }

        #endregion
    }
}
Application.DoEvents can create problems if something other than graphics processing is put in the message queue.
It can be useful for updating progress bars and notifying the user of progress in something like MainForm construction and loading, if that takes a while.
In a recent application I made, I used DoEvents to update some labels on a loading screen every time a block of code was executed in the constructor of my MainForm. The UI thread was, in this case, occupied with sending an email via an SMTP server that didn't support SendAsync() calls. I could probably have created a different thread with Begin() and End() methods and called Send() from there, but that approach is error-prone and I would prefer the main form of my application not to throw exceptions during construction.

DoEvents In a DLL

Do you have any ideas on how to call DoEvents from a C# DLL?
Don't. It's sketchy enough when you are the app controlling the event loop; pumping messages from a DLL just adds to the risk that you'll end up hitting code a naive programmer didn't make safe for re-entrancy.
Do you mean System.Windows.Forms.Application.DoEvents()?
I'd echo the "don't" (either from a DLL or a UI project). There are several things you can do to make library code play nicely with a UI, including the same threading tricks (with events and/or callbacks) that you might use from the UI. The simplest approach is for the library code to simply execute "as is", and if the UI happens to spawn it on a worker thread, then it is the UI's job to handle any events and marshal them (Control.Invoke/BeginInvoke) to the UI thread for UI updates.
For more complex scenarios, you can use the sync-context (SynchronizationContext.Current) to post messages to the UI thread (this is actually how Control.Invoke etc work, but it is implementation independent - so a WPF sync-context can go through the WPF dispatcher).
Can you add more context on what the scenario is? There are many things that can be done...
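A minimal sketch of that SynchronizationContext approach, with hypothetical names: the library captures the context of the thread that created it (the UI thread) and posts progress back without referencing WinForms or WPF directly.

public class Worker
{
    private readonly SynchronizationContext uiContext;

    public Worker()
    {
        // On a UI thread this captures the WinForms/WPF context; the default
        // fallback simply posts callbacks to the thread pool.
        uiContext = SynchronizationContext.Current ?? new SynchronizationContext();
    }

    public event Action<int> ProgressChanged;

    public void Run()   // typically called on a worker thread
    {
        for (int i = 0; i <= 100; i++)
        {
            // ... do one slice of the actual work here ...
            int p = i;
            uiContext.Post(_ => ProgressChanged?.Invoke(p), null);  // marshalled to the UI
        }
    }
}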
Write an interface for the EXE and have your main form or main class implement it.
Then register the object implementing the interface with the DLL.
Assign it to a variable of that interface type.
Make a subroutine that is visible throughout the DLL.
In that subroutine, check whether the variable is nothing; if it isn't, call the interface method you created to fire DoEvents.
Any time you need a DoEvents, call the subroutine.
If you are using a three-tier organization for your application, put the subroutine or variable on the object representing your entire application, and have the form register itself with the application object.
If you have re-entrancy problems, you can add a status variable and other helper functions to safely check what the application is doing. Note that it is far, far easier to implement this if you are using some type of multi-tier design.
To others looking at this: re-entrancy is a problem, but so is a non-responsive UI. In a complex application there are circumstances where you have to let the event loop fire.
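A rough sketch of that pattern (all names are hypothetical): the EXE registers an implementation with the DLL, and the DLL only ever talks to the interface, so it never references Application.DoEvents itself.

// In the DLL:
public interface IUiPump
{
    void PumpEvents();
}

public static class UiCallback
{
    private static IUiPump pump;

    public static void Register(IUiPump implementation) => pump = implementation;

    // Library code calls this "just in case" during lengthy work.
    public static void PumpIfRegistered() => pump?.PumpEvents();
}

// In the EXE, the main form implements the interface and registers itself:
//   public partial class MainForm : Form, IUiPump
//   {
//       public MainForm() { InitializeComponent(); UiCallback.Register(this); }
//       public void PumpEvents() => Application.DoEvents();
//   }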
