Like in this example:
someImage.Source = newSource;
someImage.Refresh();
A few days ago, in this post, I responded with Refresh() and I got feedback that it's a hack/abuse. I don't understand why.
The MSDN has the answer.
Control.Refresh:
Forces the control to invalidate its client area and immediately redraw itself and any child controls.
Control.Invalidate:
Invalidates the entire surface of the control and causes the control to be redrawn. […] Calling the Invalidate method does not force a synchronous paint
[Emphasis mine]
The point is that Refresh, unlike Invalidate, forces a synchronous call, which effectively interrupts the default event flow in forms and cuts the line in the message queue. This may cause other window messages (events from the operating system) to be delayed.
The Refresh method call is not needed at all if you have a responsive user interface. Setting the Source property creates a message that invalidates the display of the control, so it will be refreshed automatically when that message is handled.
It's only if your code contains a long running loop, so that it doesn't handle messages at all for a long period, that you need to use the Refresh method. Such a long running loop should be avoided, as it causes the user interface to be unresponsive.
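For example, a minimal sketch of keeping the UI responsive instead (assuming a WinForms form with a PictureBox named pictureBox1 and a hypothetical LoadBitmap helper; needs async/await support):
private async void loadButton_Click(object sender, EventArgs e)
{
    Bitmap bitmap = await Task.Run(() => LoadBitmap());   // the long-running work happens off the UI thread
    pictureBox1.Image = bitmap;                           // setting the property invalidates the control; it repaints on its own
}
Because the message loop never stops running, the normal invalidation is enough and no Refresh() call is needed.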
In simple words: Refresh() reloads the UI when something changes.
Because most gui frameworks handle refreshes/updates automatically if you use them correctly.
With refresh you work around the symptom (something is not automatically updated) instead of solving the root cause.
The problem is that Refresh usually starts to spread like a virus. You insert it in one place, and suddenly you need it in a second place, and a third, and so on.
Related
I need to report several things to my GUI while another thread is running in the background, such as:
Progress Value
Elapsed Time
Number of Results Found in real-time
Number of Errors Occurred during the process
and so on
I can use this piece of code when I need to invoke the UI and change something:
private void DoInvoke(Action action)
{
    try
    {
        if (InvokeRequired)
            BeginInvoke(action);   // marshal the call onto the UI thread
        else
            action();              // already on the UI thread, run it directly
    }
    catch { }                      // swallow errors (e.g. if the handle is already gone)
}
It works well; the GUI and the background thread work very well, and the info is reported and shown in the UI.
But there is a problem: because of so many context switches between the background thread and the UI, the CPU usage gets very high! I need to update the UI values without this context switching and without the high CPU usage.
So I decided to make a class of the needed values and pass it to the background thread, so it is a reference that both the UI and the background thread can access.
I also put an event inside the class, so whenever a value changes the event is raised. In the UI I attached a handler to this event, so every time a value changes in this class the UI should update that value. But again I run into the cross-thread error. How do I handle such a thing? I don't want the high CPU usage, and I also need real-time UI updates.
There are various ways to approach this.
The first thing is to define "realtime". If your data changes every millisecond, even if you were able to update the UI that fast, no one would be able to see it. As a guideline, we can only detect changes at around 60 Hz (that's why videogames target that framerate). In practice, you probably want the UI to update in the 10-50 Hz range.
The timer solution
A simple solution, which may or may not be appropriate, would be to setup a timer on the UI thread that fires at the appropriate rate and update your controls in the timer event handler.
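A minimal sketch of the timer idea, assuming the background thread just writes the latest values into fields and the form has a couple of labels (all names here are illustrative):
private volatile int _progress;     // written by the background thread
private volatile int _errorCount;   // written by the background thread
private readonly System.Windows.Forms.Timer _uiTimer = new System.Windows.Forms.Timer();

private void Form1_Load(object sender, EventArgs e)
{
    _uiTimer.Interval = 50;   // roughly 20 updates per second
    _uiTimer.Tick += (s, args) =>
    {
        progressLabel.Text = _progress.ToString();   // runs on the UI thread, so no cross-thread issue
        errorsLabel.Text = _errorCount.ToString();
    };
    _uiTimer.Start();
}
The background thread never touches the controls, so there is no invoking at all and no flood of queued UI work.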
The Invoke() / BeginInvoke() solution
Another option is to still use the BeginInvoke() approach, but (see the sketch after this list):
Implement the logic to update all controls in a single function and only BeginInvoke() that one function, so you only queue a single work item on the UI thread. If you were to do a BeginInvoke() for each control, you'd cause a context switch for each control.
Skip the BeginInvoke() if a minimum time has not elapsed since the last update. For instance, if the data has changed after 3 milliseconds, you could skip all updates until one happens after 50 milliseconds (that would give a maximum update rate of 20 Hz).
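A rough sketch of both points, assuming the background thread calls a single ReportProgress method on the form and the labels are placeholders:
private DateTime _lastUiUpdate = DateTime.MinValue;   // only touched by the background thread

// Called from the background thread whenever the data changes.
private void ReportProgress(int progress, int errors, TimeSpan elapsed)
{
    if ((DateTime.UtcNow - _lastUiUpdate).TotalMilliseconds < 50)
        return;                                        // rate-limit to ~20 Hz
    _lastUiUpdate = DateTime.UtcNow;

    BeginInvoke((Action)(() =>
    {
        // One queued work item updates every control.
        progressLabel.Text = progress.ToString();
        errorsLabel.Text = errors.ToString();
        elapsedLabel.Text = elapsed.ToString();
    }));
}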
The complications
This will work fine if you have simple controls; however, you could run into issues if you have complex ones, like graphs, or many, many controls to update. In that case it may take a long time to redraw them, so you may not be able to update the UI at the desired rate. If you BeginInvoke() too often and the UI thread can't keep up, the app will essentially freeze because it doesn't have time to handle user input.
There could be other conditions that leave the main thread busier than usual (resizing the window, or other processing that takes at most a couple of seconds and that you didn't bother to run in a separate thread).
So, in my programs, I usually set a flag immediately before I call BeginInvoke(), and I clear it in the invoked function. The next time I have to call BeginInvoke(), I first check the flag. If it's still set, it means the UI thread was busy and still hasn't managed to update the UI. In that case, I skip the BeginInvoke().
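A sketch of that flag, using an int so it can be tested and set atomically (Interlocked lives in System.Threading; the control name is illustrative):
private int _updatePending;   // 0 = nothing queued, 1 = an update is already waiting on the UI thread

// Called from the background thread.
private void QueueUiUpdate(int progress)
{
    // If the previous update hasn't run yet, skip this one.
    if (Interlocked.CompareExchange(ref _updatePending, 1, 0) == 1)
        return;

    BeginInvoke((Action)(() =>
    {
        progressLabel.Text = progress.ToString();
        Interlocked.Exchange(ref _updatePending, 0);   // cleared inside the invoked function
    }));
}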
Finally, if you have a lot of stuff going on (I had to update many graphs and views) you may also need to have your logic guarantee a minimum time from when the update code in the UI thread ends executing and when you queue a new update from your background thread. This guarantees there's some time left in the UI thread to process user input, while the thread is very busy updating the UI in the rest of the time.
Final notes
If a value has not changed, you want to avoid redrawing the corresponding control, because it's pointless. I expect most WinForms controls, like a label, to already skip the redraw if you set their Text to the same value they already have, but if you have custom controls, third-party controls, or do things like clear a ListView and repopulate it, you want to make sure the code isn't causing a redraw when it's not needed.
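For example, something as simple as guarding the assignment avoids the pointless work (names are illustrative):
string newText = resultCount.ToString();
if (resultsLabel.Text != newText)
    resultsLabel.Text = newText;   // only invalidates the label when the value actually changed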
I inherited a WinForms app. It uses a third-party closed-source control that renders documents and photos... It has only synchronous methods for opening a document. The problem is that my clients are dealing with really big documents (in the area of 2 GB!) and opening these docs really blocks the UI thread... which is bad...
Common sense would make you think "Just off-load it to a background thread", but the question is "HOW"! See, to alter the control (because calling "Open" causes it to be altered) I need to Invoke it, and that causes the code to run on the UI thread again... locking it up...
So I turned the table upside down. What if, instead of creating the control on the main thread and passing it to a background thread for processing, I created the control on the background thread, loaded it up there (avoiding the cross-thread exception that way) and, when done, fed it to the main thread?!
Right now what I need to know is how to definitively hand a control over to another thread, and not only temporarily...
I'm not sure if this is possible but you could try to:
create a new form on a secondary thread (this form will host your fancy control)
load the document from this secondary UI. It will be blocked, but you can hide it and only display a loading message on the main UI.
when the job is finished transfer the 'work' to main UI and main thread.
It's just an idea; a rough sketch of it follows.
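Something along these lines, where ViewerForm, OpenDocument and pathToBigFile are all stand-ins for your own form hosting the third-party control:
// Requires System.Threading and System.Windows.Forms.
var viewerThread = new Thread(() =>
{
    var form = new ViewerForm();                               // hosts the heavy control
    form.Load += (s, e) => form.OpenDocument(pathToBigFile);   // blocks only this secondary UI thread
    Application.Run(form);                                     // message loop for the secondary thread
});
viewerThread.SetApartmentState(ApartmentState.STA);   // WinForms controls need an STA thread
viewerThread.IsBackground = true;
viewerThread.Start();
The main form stays free to show a lightweight loading message while this thread does the work.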
What you are asking to do is impossible. A Winforms control's thread affinity is determined when that control is created, and it cannot be changed.
The best solution is to not use that control. I doubt there's anything it does that cannot be implemented correctly and competently by someone else.
If you are okay running a completely different window in a second STA thread, then that would be the next best thing. That particular window will still be frozen while the document loads, but at least your main UI would still be okay. Note that you should not try to mix and match controls from different threads in the same window; that will lead to all kinds of headaches.
Finally, as a complete hack, you might consider going ahead and calling this Open() method in a background thread in spite of the control being owned by the main UI thread. On the admittedly shaky assumption that the only time that control will actually attempt to access the UI component itself would be at the very end of the Open() method operation, you can go ahead and catch the InvalidOperationException that is thrown, and use that as your signal that the document loading has completed. Then just invalidate the control in the main UI thread.
I'd give the odds of this last suggestion working no better than 50/50. It will depend on what the control actually does with the loaded data, and if it's some kind of composite control where it's relying on actually taking the result of its loading and copying that to a control as part of the Open() method, that part might fail and the control would not wind up properly initialized.
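A very rough sketch of that last hack, where documentViewer, its Open() method and pathToBigFile stand in for the real third-party control, and the whole premise rests on the shaky assumption described above:
ThreadPool.QueueUserWorkItem(_ =>
{
    try
    {
        documentViewer.Open(pathToBigFile);   // synchronous load, deliberately run off the UI thread
    }
    catch (InvalidOperationException)
    {
        // Assumed to be the cross-thread access at the very end of Open(),
        // i.e. the signal that the document has finished loading.
    }
    documentViewer.BeginInvoke((Action)(() => documentViewer.Invalidate()));   // repaint on the UI thread
});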
I am facing a problem that Application.DoEvents() can solve. The problem is that the WebBrowser is supposed to navigate to a URL asynchronously but it doesn't, and when I use Application.DoEvents() it solves that. I think this happens because the application handles some other events and doesn't deliver the navigation events properly.
I read a little about this method and I understand that it causes the application to handle all the current events. Now I am a bit concerned that I used a cannon to kill an ant. Can someone tell me if what I did is worthwhile?
Yes, Application.DoEvents() solves this problem. The core issue is that WebBrowser is a heavily threaded component at its core. You can call its Navigate() method and it goes off doing its stuff without blocking your code, the method returns almost immediately.
The problem however is that at some point it has to run your DocumentCompleted event. Which is guaranteed to run on the thread on which you created the browser object. That's hard to do, your thread may well be busy doing something else. Like sitting in a loop, testing the ReadyState property. There is no mechanism to interrupt this loop and run the event handler.
So what you see is that the ReadyState property never changes and the DocumentCompleted event never fires. This is called deadlock, a very common curse of threaded code. Using DoEvents is the back-door, that "pumps the message loop". It allows the browser to break into your thread and fire the event. Which in turn updates the ReadyState property and lets you break out of the loop.
There's a big problem with DoEvents, however: it isn't selective. It doesn't just limit itself to handling the message that allows the event to fire; it also dispatches other notifications, the kind that will crash your program. Like your user getting impatient with the slow web site and closing your form. That destroys the browser object but does not stop your loop. You are now testing the ReadyState property of a disposed browser. Kaboom!
You'll need to do this differently. It is never legal to block or hang up the UI thread in a loop, it is very prone to create deadlock. It is in fact forbidden by Microsoft guidelines for an STA thread. The workaround is simple, move whatever code you now have after the wait loop to the DocumentCompleted event handler. You might need to add some state variables to your class so that you know that the event signals completion of a particular web page or that the user is no longer interested in the result.
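In code, the restructuring looks roughly like this (OnPageReady stands in for whatever used to sit after the wait loop):
private void StartNavigation(Uri url)
{
    webBrowser1.DocumentCompleted += WebBrowser1_DocumentCompleted;
    webBrowser1.Navigate(url);   // returns immediately; no ReadyState loop afterwards
}

private void WebBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // Note: this also fires for frames; add state checks if you only care about the top-level document.
    OnPageReady(e.Url);          // the code that used to follow the wait loop goes here
}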
The Application.DoEvents() method makes all pending messages get processed. That can cause:
Entering a code block twice before the current one finishes. (Let's assume that you navigate your browser with a button click. The user clicks the button and, while your code is waiting for the browser to complete, the user clicks again. In that case Application.DoEvents() will cause that method to be processed before stepping to the next line.)
Interrupting critical code. (Let's assume that you have a time-consuming method and the user clicks the close button. Your form will disappear but your code will continue to run. A real problem.)
Many more unexpected results.
However, I feel that sometimes using this method is necessary as an easy solution, for example with the WebBrowser, which is difficult to use across multiple threads (especially when it's visible). If you have to use this method you should make sure that the user and other things (timers, buttons, events, etc.) don't interrupt anything.
For a detailed discussion: Use of Application.DoEvents()
Can Application.DoEvents() be used in C#?
Is this function a way to allow the GUI to catch up with the rest of the app, in much the same way that VB6's DoEvents does?
Hmya, the enduring mystique of DoEvents(). There's been an enormous amount of backlash against it, but nobody ever really explains why it is "bad". The same kind of wisdom as "don't mutate a struct". Erm, why do the runtime and the language support mutating a struct if that's so bad? Same reason: you shoot yourself in the foot if you don't do it right. Easily. And doing it right requires knowing exactly what it does, which in the case of DoEvents() is definitely not easy to grok.
Right off the bat: almost any Windows Forms program actually contains a call to DoEvents(). It is cleverly disguised, however with a different name: ShowDialog(). It is DoEvents() that allows a dialog to be modal without it freezing the rest of the windows in the application.
Most programmers want to use DoEvents to stop their user interface from freezing when they write their own modal loop. It certainly does that; it dispatches Windows messages and gets any paint requests delivered. The problem however is that it isn't selective. It not only dispatches paint messages, it delivers everything else as well.
And there's a set of notifications that cause trouble. They come from about 3 feet in front of the monitor. The user could for example close the main window while the loop that calls DoEvents() is running. That works, user interface is gone. But your code didn't stop, it is still executing the loop. That's bad. Very, very bad.
There's more: The user could click the same menu item or button that causes the same loop to get started. Now you have two nested loops executing DoEvents(), the previous loop is suspended and the new loop is starting from scratch. That could work, but boy the odds are slim. Especially when the nested loop ends and the suspended one resumes, trying to finish a job that was already completed. If that doesn't bomb with an exception then surely the data is scrambled all to hell.
Back to ShowDialog(). It executes DoEvents(), but do note that it does something else. It disables all the windows in the application, other than the dialog. Now that 3-feet problem is solved, the user cannot do anything to mess up the logic. Both the close-the-window and start-the-job-again failure modes are solved. Or to put it another way, there is no way for the user to make your program run code in a different order. It will execute predictably, just like it did when you tested your code. It makes dialogs extremely annoying; who doesn't hate having a dialog active and not being able to copy and paste something from another window? But that's the price.
Which is what it takes to use DoEvents safely in your code. Setting the Enabled property of all your forms to false is a quick and efficient way to avoid problems. Of course, no programmer ever actually likes doing this. And doesn't. Which is why you shouldn't use DoEvents(). You should use threads. Even though they hand you a complete arsenal of ways to shoot your foot in colorful and inscrutable ways. But with the advantage that you only shoot your own foot; it won't (typically) let the user shoot hers.
The next versions of C# and VB.NET will provide a different gun with the new await and async keywords. Inspired in small part by the trouble caused by DoEvents and threads but in large part by WinRT's API design that requires you to keep your UI updated while an asynchronous operation is taking place. Like reading from a file.
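With those keywords the typical DoEvents wait loop disappears entirely; a minimal sketch, where DoLongRunningWork is a stand-in for your own blocking work:
private async void runButton_Click(object sender, EventArgs e)
{
    runButton.Enabled = false;                    // crude equivalent of ShowDialog() disabling the UI
    await Task.Run(() => DoLongRunningWork());    // the work happens off the UI thread
    runButton.Enabled = true;                     // the UI stayed responsive the whole time, no DoEvents
}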
It can be, but it's a hack.
See Is DoEvents Evil?.
Direct from the MSDN page that thedev referenced:
Calling this method causes the current thread to be suspended while all waiting window messages are processed. If a message causes an event to be triggered, then other areas of your application code may execute. This can cause your application to exhibit unexpected behaviors that are difficult to debug. If you perform operations or computations that take a long time, it is often preferable to perform those operations on a new thread. For more information about asynchronous programming, see Asynchronous Programming Overview.
So Microsoft cautions against its use.
Also, I consider it a hack because its behavior is unpredictable and side effect prone (this comes from experience trying to use DoEvents instead of spinning up a new thread or using background worker).
There is no machismo here - if it worked as a robust solution I would be all over it. However, trying to use DoEvents in .NET has caused me nothing but pain.
Yes, there is a static DoEvents method in the Application class in the System.Windows.Forms namespace. System.Windows.Forms.Application.DoEvents() can be used to process the messages waiting in the queue on the UI thread when performing a long-running task in the UI thread. This has the benefit of making the UI seem more responsive and not "locked up" while a long task is running. However, this is almost always NOT the best way to do things.
According to Microsoft calling DoEvents "...causes the current thread to be suspended while all waiting window messages are processed." If an event is triggered there is a potential for unexpected and intermittent bugs that are difficult to track down. If you have an extensive task it is far better to do it in a separate thread. Running long tasks in a separate thread allows them to be processed without interfering with the UI continuing to run smoothly. Look here for more details.
Here is an example of how to use DoEvents; note that Microsoft also provides a caution against using it.
From my experience I would advise great caution with using DoEvents in .NET. I experienced some very strange results when using DoEvents in a TabControl containing DataGridViews. On the other hand, if all you're dealing with is a small form with a progress bar then it might be OK.
The bottom line is: if you are going to use DoEvents, then you need to test it thoroughly before deploying your application.
Yes.
However, if you need to use Application.DoEvents, this is mostly an indication of a bad application design. Perhaps you'd like to do some work in a separate thread instead?
I saw jheriko's comment above and was initially agreeing that I couldn't find a way to avoid using DoEvents if you end up spinning your main UI thread waiting for a long running asynchronous piece of code on another thread to complete. But from Matthias's answer a simple Refresh of a small panel on my UI can replace the DoEvents (and avoid a nasty side effect).
More detail on my case ...
I was doing the following (as suggested here) to ensure that a progress bar type splash screen (How to display a "loading" overlay...) updated during a long running SQL command:
IAsyncResult asyncResult = sqlCmd.BeginExecuteNonQuery();
while (!asyncResult.IsCompleted) // UI thread needs to wait for the async SQL command to return
{
    System.Threading.Thread.Sleep(10);
    Application.DoEvents(); // to keep the UI responsive
}
The bad: For me calling DoEvents meant that mouse clicks were sometimes firing on forms behind my splash screen, even if I made it TopMost.
The good/answer: Replace the DoEvents line with a simple Refresh call to a small panel in the centre of my splash screen, FormSplash.Panel1.Refresh(). The UI updates nicely and the DoEvents weirdness others have warned of was gone.
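So the loop above stays the same, with only the DoEvents line swapped out:
IAsyncResult asyncResult = sqlCmd.BeginExecuteNonQuery();
while (!asyncResult.IsCompleted)
{
    System.Threading.Thread.Sleep(10);
    FormSplash.Panel1.Refresh();   // repaints the splash panel without pumping the whole message queue
}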
I've seen many commercial applications, using the "DoEvents-Hack". Especially when rendering comes into play, I often see this:
while (running)
{
    Render();
    Application.DoEvents();
}
They all know about the evil of that method. However, they use the hack, because they don't know any other solution. Here are some approaches taken from a blog post by Tom Miller:
Set your form to have all drawing occur in WmPaint, and do your rendering there. Before the end of the OnPaint method, make sure you do a this.Invalidate(); this will cause the OnPaint method to be fired again immediately (a sketch of this follows the list).
P/Invoke into the Win32 API and call PeekMessage/TranslateMessage/DispatchMessage. (Doevents actually does something similar, but you can do this without the extra allocations).
Write your own forms class that is a small wrapper around CreateWindowEx, and give yourself complete control over the message loop.
Decide that the DoEvents method works fine for you and stick with it.
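A minimal sketch of that first approach, assuming a Render(Graphics) routine of your own inside a hypothetical GameForm:
public GameForm()
{
    // All drawing happens in WM_PAINT; no background erase, less flicker.
    SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.Opaque, true);
}

protected override void OnPaint(PaintEventArgs e)
{
    Render(e.Graphics);   // your rendering code
    Invalidate();         // immediately requests the next WM_PAINT, keeping the message loop alive
}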
Check out the MSDN Documentation for the Application.DoEvents method.
DoEvents does allow the user to click around or type and trigger other events, and background threads are a better approach.
However, there are still cases where you may run into issues that require flushing event messages. I ran into a problem where the RichTextBox control was ignoring the ScrollToCaret() method when the control had messages in queue to process.
The following code blocks all user input while executing DoEvents:
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace Integrative.Desktop.Common
{
    static class NativeMethods
    {
        #region Block input

        [DllImport("user32.dll", EntryPoint = "BlockInput")]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool BlockInput([MarshalAs(UnmanagedType.Bool)] bool fBlockIt);

        public static void HoldUser()
        {
            BlockInput(true);
        }

        public static void ReleaseUser()
        {
            BlockInput(false);
        }

        public static void DoEventsBlockingInput()
        {
            HoldUser();
            Application.DoEvents();
            ReleaseUser();
        }

        #endregion
    }
}
Application.DoEvents can create problems, if something other than graphics processing is put in the message queue.
It can be useful for updating progress bars and notifying the user of progress in something like MainForm construction and loading, if that takes a while.
In a recent application I made, I used DoEvents to update some labels on a loading screen every time a block of code was executed in the constructor of my MainForm. The UI thread was, in this case, occupied with sending an email through an SMTP server that didn't support SendAsync() calls. I could probably have created a different thread with Begin() and End() methods and called Send() from there, but that method is error-prone and I would prefer the main form of my application not to throw exceptions during construction.
I want to parallelize a 3D voxel editor built on top of Windows Forms. It uses a raycaster to render, so dividing the screen and getting each thread in a pool to render a part of it should be trivial.
The problem arises in that Windows Forms' thread must run as STA - I can get other threads to start and do the work but blocking the main thread while waiting for them to finish causes strange random deadlocks as expected.
Keeping the main thread unblocked would also be a problem: if, for example, the user uses a flood-fill tool, the input would be processed during the rendering process, which would cause "in-between" images (an object partially colored, for example). Copying the entire image before every frame isn't doable either, because the volumes are big enough to offset any performance gain if they have to be copied every frame.
I want to know if there is any workaround to make the main thread appear blocked to the user in a way that it is not actually blocked but delays the processing of input until the next frame.
If it isn't possible, is there a better design for dealing with this?
EDIT: Reading the answers I think I wasn't clear that the raycaster runs in real time, so showing progress dialogs won't work at all. Unfortunately the FPS is low enough (5-40 depending on various factors) for the input between frames to produce unwanted results.
I have already tried implementing it by blocking the UI thread and using some threads from a ThreadPool to do the processing, and it works fine except for this problem with STA.
This is a common problem. With windows forms you can have only one UI thread. Don't run your algorithm on the UI thread because then the UI will appear frozen.
I recommend running your algorithm and waiting for it to finish before updating the UI. A class called BackgroundWorker comes pre-built to do just this very thing.
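A minimal BackgroundWorker sketch, where RenderFrame and DisplayFrame are illustrative placeholders for your algorithm and UI update:
// Requires System.ComponentModel.
var worker = new BackgroundWorker();
worker.DoWork += (s, e) => e.Result = RenderFrame();             // runs on a thread-pool thread
worker.RunWorkerCompleted += (s, e) => DisplayFrame(e.Result);   // raised back on the UI thread
worker.RunWorkerAsync();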
Edit:
Another fact about the UI thread is that it handles all of the mouse and keyboard events, along with system messages that are sent to the window. (Winforms is really just Win32 surrounded by a nice API.) You cannot have a stable application if the UI thread is saturated.
On the other hand, if you start several other threads and try to draw directly on the screen with them, you may have two problems:
You're not supposed to draw on the UI with any thread but the UI thread. Windows controls are not thread safe.
If you have a lot of threads, context switching between them may kill your performance.
Note that you (and I) shouldn't claim a performance problem until it has been measured. You could try drawing a frame in memory and swapping it in at an appropriate time. It's called double-buffering and is very common in Win32 drawing code to avoid screen flicker.
I honestly don't know if this is feasible with your target frame rate, or if you should consider a more graphics-centered library like OpenGL.
Am I missing something or can you just set your render control (and any other controls that generate input events) to disabled while you're rendering a frame? That will prevent unwanted inputs.
If you still want to accept events while you're rendering but don't want to apply them until the next frame, you should leave your controls enabled and post the detail of the event to an input queue. That queue should then be processed at the start of every frame.
This has the effect that the user can still click buttons and interact with the UI (the GUI thread does not block) and those events are not visible to the renderer until the start of the next frame. At 5 FPS, the user should see their events processed within 400 ms worst case (2 frames), which isn't quite fast enough, but better than threading deadlocks.
Perhaps something like this:
// Queue of input events shared between the UI thread and the render thread.
private readonly Queue<InputEvent> InputQueue = new Queue<InputEvent>();

// An input event handler.
private void btnDoSomething_Click(object sender, EventArgs e)
{
    lock (InputQueue)
    {
        InputQueue.Enqueue(new DoSomethingInputEvent());
    }
}

// Your render method (executing in a background thread).
private void RenderNextFrame()
{
    Queue<InputEvent> inputEvents = new Queue<InputEvent>();
    lock (InputQueue)
    {
        // Drain everything queued since the last frame into a local queue.
        while (InputQueue.Count > 0)
            inputEvents.Enqueue(InputQueue.Dequeue());
    }

    // Process your input events from the local inputEvents queue.
    // ...

    // Now do your render based on those events.
    // ...
}
Oh, and do your rendering on a background thread. Your UI thread is precious, it should only do the most trivial work. Matt Brundell's suggestion of BackgroundWorker has lots of merit. If it doesn't do what you want, the ThreadPool is also useful (and simpler). More powerful (and complex) alternatives are the CCR or the Task Parallel Library.
Show a modal "Please Wait" dialog using ShowDialog, then close it once your rendering is finished.
This will prevent the user from interacting with the form while still allowing you to Invoke to the UI thread (which is presumably your problem).
If you don't want all the features offered by the BackgroundWorker you can simply use ThreadPool.QueueUserWorkItem to add something to the thread pool and use a background thread. It would be easy to show some kind of progress while the background thread performs its operations, as you can provide a delegate callback to notify you whenever a particular background thread is done. Take a look at ThreadPool.QueueUserWorkItem Method (WaitCallback, Object) to see what I'm referring to. If you need something more complex you could always use the APM async pattern to perform your operations as well.
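Roughly like this, where WorkItem, DoHeavyWork and OnWorkItemDone are illustrative names:
var item = new WorkItem();                       // carries whatever state the operation needs
ThreadPool.QueueUserWorkItem(state =>
{
    var workItem = (WorkItem)state;
    DoHeavyWork(workItem);                                               // runs on a pool thread
    someControl.BeginInvoke((Action)(() => OnWorkItemDone(workItem)));   // notify the UI thread when done
}, item);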
Either way I hope this helps.
EDIT:
Notify user somehow that changes are being made to the UI.
On one (or many) background threads using the ThreadPool, perform the operations you need to apply to the UI.
For each operation keep a reference to its state so that you know in the WaitCallback when it has completed. Maybe put them in some type of hash/collection to keep a reference to them.
Whenever an operation completes, remove it from the collection that holds references to the operations being performed.
Once all operations have completed (the hash/collection has no more references in it), render the UI with the changes applied, or possibly update the UI incrementally.
I'm thinking that if you are making so many updates to the UI while you are performing your operations, that is what is causing your problems. That's also why I recommended the use of SuspendLayout and PerformLayout, as you may have been performing so many updates to the UI that the main thread was getting overwhelmed.
I am no expert on threading though, just trying to think it through myself. Hope this helps.
Copying the entire image before every frame isn't doable either because the volumes are big enough to offset any performance gain if it has to be copied every frame.
Then don't copy the off-screen buffer on every frame.