Does rendering performance degrade significantly when a WPF application's XAML contains many nested Grid, StackPanel, DockPanel and other containers?
Really, the answer is simply "yes". More of anything will use more processor time. SURPRISE!
In the case of WPF, elements are arranged into a hierarchical scene graph. Adding levels of depth to this graph will slow your application more than adding siblings to existing elements. You should always strive to keep the depth of the graph low. Consider using a Grid instead of nesting StackPanels (see the sketch after the list below).
So why is depth more important than raw element count? Well, depth generally implies:
layout dependency - if a parent is resized, a child is likely to be re-rendered.
occlusion - if two elements overlap, invalidating one will often invalidate the other.
recursion - most graph operations are CPU bound; they depend entirely on CPU speed and have no dedicated hardware support (the renderer uses your graphics chip where possible). Cycling through levels of the graph for resources and layout updates is expensive.
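To make that concrete, here is a minimal C# sketch (field names are invented) of the Grid suggestion above: one Grid with row/column definitions replaces a vertical StackPanel holding a horizontal StackPanel per row, so the tree stays one panel deep.

```csharp
// Same layout as nested StackPanels, but with a single panel in the tree.
var grid = new Grid();
grid.ColumnDefinitions.Add(new ColumnDefinition { Width = GridLength.Auto });
grid.ColumnDefinitions.Add(new ColumnDefinition()); // star-sized by default

for (int row = 0; row < 3; row++)
{
    grid.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto });

    var label = new TextBlock { Text = $"Field {row}:" };
    Grid.SetRow(label, row); // column 0 is the default
    grid.Children.Add(label);

    var input = new TextBox();
    Grid.SetRow(input, row);
    Grid.SetColumn(input, 1);
    grid.Children.Add(input);
}
```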
Concerning occlusion, the BitmapCache class can help greatly!
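For example (a sketch; `complexElement` stands for any frequently overlapped subtree), caching its rendered output means an overlapping invalidation no longer forces a full re-render:

```csharp
// using System.Windows.Media;
// Cache the subtree's rendered output as a bitmap; overlapped regions are
// recomposed from the cache instead of re-rendering the whole subtree.
complexElement.CacheMode = new BitmapCache
{
    RenderAtScale = 1.0 // raise above 1.0 if the element will be zoomed in
};
```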
When you create a very complex UI, with lots of nested objects and DataTemplates with many elements, you can seriously hurt the performance of the app: the bigger the UI tree, the longer it takes to render, and if the framework cannot render at 30 FPS you will start to see performance drops. You should use the most lightweight panels that meet your needs, to avoid extra layout logic you won't use. Here are some performance tips to make your app faster:
http://msdn.microsoft.com/en-us/library/bb613542(v=vs.110).aspx
WPF uses the MeasureOverride and ArrangeOverride methods to lay out UIElements. MeasureOverride measures each UIElement's desired size based on the size available from its parent control, and ArrangeOverride then arranges the UIElements at runtime based on those measurements. These methods are optimized for speed and should not normally cause any rendering performance issue.
But there is a limit to how many UIElements these methods can handle in a reasonable time; once that limit is exceeded, you will see performance issues. E.g.: suppose a bike can carry two people; what will happen if five people pile on? :)
JetBrains dotTrace is a profiling tool that helps you analyze performance issues and see how much time is spent in these two methods.
We have a product (program) written in C# + WPF + XAML in our company. It is a fairly important program for us and has been installed by many of our customers. But after switching to .NET Framework 4.7.2, we noticed a severe performance degradation. We then moved to version 4.8, but the program still works quite slowly, especially the visual part.
In our program, we display data from a very large number of sensors (motion sensors, temperature, amount of light, etc.). We are constantly receiving new data that we process, save to the SQL Server 2014/2017 database, and then the client programs visualize this data.
The server part and the communication, although complex, work well even on a not very powerful computer. But we have a very big problem with showing the data on customer monitors.
The program is structured as follows: the client draws where he wants to see this data. For example, he has a black background and draws his factory on it with lines. There are sensors in different places in this factory, and he draws labels in those places. Then he starts the scan and sees the resulting data where he drew each label. When there is little data this is not very noticeable, but when there is a lot of data, moving the mouse between the elements becomes sluggish: the client sees that the program is constantly slowing down, has to wait a few seconds, makes some movement with the mouse, and then waits again for the program to respond. If you do several things at once, the program seems to freeze. It has not actually frozen, but it feels as though the program is about to stop working.
I tried debugging and measuring CPU and RAM usage, but that hardly helped me at all. Data is downloaded from the server to the client program via web services normally and takes up as much memory as it needs; it is hard to optimize that part further. But when we start showing this data to the user, everything starts to work very badly. How can we optimize the data visualization so that the user can keep working calmly with the program? Will be glad of any advice.
What helped a little was enabling virtualization in the DataGrid tables. That is slightly better for users, but it is not enough; we need something else, especially for the part that draws and shows labels with data in different places.
In my experience WPF is not suited to visualising large amounts of data. It's fine for creating a fancy UI, but as the number of objects increases, the performance drops dramatically. I tried everything from caching to freezing objects, and I concluded that I had simply chosen the wrong technology. It doesn't utilise your GPU properly.
You can try converting to UWP; it might help.
Having that said, here are some tips you can also try:
Simplify your Visual Tree
A common source of performance issues is a deep and complex layout. Keep your XAML markup as simple and shallow as possible. When UI elements are drawn onscreen, a “layout pass” is called twice for each element (a measure pass and an arrange pass). The layout pass is a mathematically intensive process: the larger the number of children in the element, the greater the number of calculations required.
Virtualize your ItemsControls
As mentioned earlier, a complex and deep visual tree results in a larger memory footprint and slower performance. ItemsControls usually aggravate performance problems with deep visual trees because they are not virtualized, meaning the item containers are constantly created and destroyed for each item in the control. Instead, use a VirtualizingStackPanel as the items host, enable VirtualizingStackPanel.IsVirtualizing, and set the VirtualizationMode to Recycling so that item containers are reused instead of created anew each time.
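In code-behind, those two switches look like this (a sketch; `sensorList` is a hypothetical ListBox, and the same values are usually set as attached properties in XAML):

```csharp
// using System.Windows.Controls;
// Turn on UI virtualization and container recycling for an items control.
VirtualizingStackPanel.SetIsVirtualizing(sensorList, true);
VirtualizingStackPanel.SetVirtualizationMode(sensorList, VirtualizationMode.Recycling);
```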
Favor StaticResources Over DynamicResources
StaticResources provide values for any XAML property attribute by looking up a reference to an already defined resource. Lookup behavior for that resource is the same as a compile-time lookup. DynamicResources create a temporary expression and defer lookup for resources until the requested resource value is required. Lookup behavior for that resource is the same as a run-time lookup, which imposes a performance penalty. Always use a StaticResource whenever possible.
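From code-behind, the two lookup styles correspond roughly to the following (a sketch; "AccentBrush" is an invented resource key):

```csharp
// using System.Windows.Controls; using System.Windows.Media;
// One-time lookup, resolved immediately (static-style):
myBorder.Background = (Brush)myBorder.FindResource("AccentBrush");

// Deferred expression that re-resolves if the resource changes (dynamic-style):
myBorder.SetResourceReference(Border.BackgroundProperty, "AccentBrush");
```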
Opacity on Brushes Instead of Elements
If you use a Brush to set the Fill or Stroke of an element, it is better to set the Opacity on the Brush rather than setting the element's Opacity property. When you modify an element's Opacity property, it can cause WPF to create temporary surfaces, which results in a performance hit.
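A one-line illustration (element name invented):

```csharp
// using System.Windows.Media;
// Transparency on the Brush itself; setting myRectangle.Opacity instead can
// force WPF to composite the element onto a temporary surface.
myRectangle.Fill = new SolidColorBrush(Colors.Red) { Opacity = 0.5 };
```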
Avoid Using Run to Set Text Properties
Avoid using Runs within a TextBlock, as this is a much more performance-intensive operation. If you are using a Run to set text properties, set those directly on the TextBlock instead.
Favor StreamGeometries over PathGeometries
The StreamGeometry object is a very lightweight alternative to a PathGeometry. StreamGeometry is optimized for handling many figures: it consumes less memory and performs much better than using many PathGeometry objects.
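A short sketch of building a polyline this way:

```csharp
// using System.Windows; using System.Windows.Media;
// The figure is written through a context rather than stored as editable
// segment objects, which is what keeps StreamGeometry so lightweight.
var geometry = new StreamGeometry();
using (StreamGeometryContext ctx = geometry.Open())
{
    ctx.BeginFigure(new Point(0, 0), isFilled: false, isClosed: false);
    ctx.LineTo(new Point(50, 80), isStroked: true, isSmoothJoin: false);
    ctx.LineTo(new Point(100, 20), isStroked: true, isSmoothJoin: false);
}
geometry.Freeze(); // read-only from here on; see the Freezables tip below
```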
Use Reduced Image Sizes
If your app requires the display of smaller thumbnails, consider creating reduced-sized versions of your images. By default, WPF will load and decode your image to its full size. This can be the source of many performance problems if you are loading full images and scaling them down to thumbnail sizes in controls such as an ItemsControl. If possible, combine all images into a single image, such as a film strip composed of multiple images.
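For example, decoding at thumbnail size (the pack URI is illustrative):

```csharp
// using System; using System.Windows.Media.Imaging;
var bmp = new BitmapImage();
bmp.BeginInit();
bmp.UriSource = new Uri("pack://application:,,,/Images/photo.jpg");
bmp.DecodePixelWidth = 120;                 // decode 120px wide; height keeps aspect
bmp.CacheOption = BitmapCacheOption.OnLoad; // decode immediately, release the stream
bmp.EndInit();
bmp.Freeze();
```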
Lower the BitmapScalingMode
By default, WPF uses a high-quality image re-sampling algorithm that can sometimes consume enough system resources to degrade the frame rate and cause animations to stutter. Instead, set the BitmapScalingMode to LowQuality to switch from a “quality-optimized” algorithm to a “speed-optimized” algorithm.
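For a single Image element, that is one attached-property call (element name invented):

```csharp
// using System.Windows.Media;
RenderOptions.SetBitmapScalingMode(thumbnailImage, BitmapScalingMode.LowQuality);
```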
Use and Freeze Freezables
A Freezable is a special type of object that has two states: unfrozen and frozen. When you freeze an object such as a Brush or Geometry, it can no longer be modified. Freezing objects whenever possible improves the performance of your application and reduces its memory consumption.
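For instance:

```csharp
// using System.Windows.Media;
// Frozen Freezables skip change notification and can even be shared across threads.
var brush = new SolidColorBrush(Colors.SteelBlue);
if (brush.CanFreeze)
    brush.Freeze(); // any attempt to modify the brush after this will throw
```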
Fix your Binding Errors
Binding errors are the most common type of performance problem in WPF apps. Every time a binding error occurs, your app takes a perf hit as it tries to resolve the binding and writes the error out to the trace log. As you can imagine, the more binding errors you have, the bigger the performance hit your app will take. Take the time to find and fix all your binding errors. Using a RelativeSource binding in DataTemplates is a major culprit, as the binding is usually not resolved properly until the DataTemplate has completed its initialization. Avoid using RelativeSource.FindAncestor at all costs. Instead, define an attached property and use property inheritance to push values down the visual tree instead of looking up the visual tree.
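A minimal sketch of that attached-property pattern (all names invented): register the property with the Inherits flag, set it once near the root, and let elements inside DataTemplates read the inherited value instead of using FindAncestor.

```csharp
// using System.Windows;
public static class SharedContext
{
    // A value set near the root flows down to every descendant automatically.
    public static readonly DependencyProperty OwnerProperty =
        DependencyProperty.RegisterAttached(
            "Owner", typeof(object), typeof(SharedContext),
            new FrameworkPropertyMetadata(null,
                FrameworkPropertyMetadataOptions.Inherits));

    public static void SetOwner(DependencyObject element, object value)
        => element.SetValue(OwnerProperty, value);

    public static object GetOwner(DependencyObject element)
        => element.GetValue(OwnerProperty);
}
```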
Avoid Databinding to the Label.Content Property
If you are using a Label to data bind to a String property, this will result in poor performance. This is because each time the String source is updated, the old string object is discarded and a new String is created. If the Content of the Label is simple text, replace it with a TextBlock and bind to the Text property instead.
Bind ItemsControls to IList instead of IEnumerable
When data binding an ItemsControl to an IEnumerable, WPF will create a wrapper of type IList, which negatively impacts performance through the creation of a second object. Instead, bind the ItemsControl directly to an IList to avoid the overhead of the wrapper object.
Use the NeutralResourcesLanguage Attribute
Use the NeutralResourcesLanguageAttribute to tell the ResourceManager what the neutral culture is and avoid unsuccessful satellite assembly lookups.
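In AssemblyInfo.cs that is a single attribute:

```csharp
// using System.Resources;
// Tell ResourceManager the neutral culture lives in the main assembly, so it
// skips probing for a satellite assembly that will never exist.
[assembly: NeutralResourcesLanguage("en-US", UltimateResourceFallbackLocation.MainAssembly)]
```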
Load Data on Separate Threads
A very common source of performance problems, UI freezes, and apps that stop responding is how you load your data. Make sure you are asynchronously loading your data on a separate thread so as not to overload the UI thread. Loading data on the UI thread results in very poor performance and an overall bad end-user experience. Multi-threading should be something every WPF developer is using in their applications.
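A minimal async/await sketch (hypothetical names: `LoadSensorData` is a blocking data-access call, `sensorGrid` a control):

```csharp
// using System.Threading.Tasks; using System.Windows;
private async void OnLoadClicked(object sender, RoutedEventArgs e)
{
    // Runs on a thread-pool thread; the UI thread keeps pumping messages.
    var data = await Task.Run(() => LoadSensorData());

    // Execution resumes here on the UI thread; safe to touch controls.
    sensorGrid.ItemsSource = data;
}
```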
Beware of Memory Leaks
Memory leaks are the number one cause of performance problems in most WPF applications. They are easy to have but can be difficult to find. For example, using DependencyPropertyDescriptor.AddValueChanged can cause the WPF framework to take a strong reference to the source of the event that isn't removed until you manually call DependencyPropertyDescriptor.RemoveValueChanged. If your views or behaviors rely on events being raised from an object or ViewModel (such as INotifyPropertyChanged), subscribe to them weakly or make sure you are manually unsubscribing. Also, if you are binding to properties in a ViewModel which does not implement INotifyPropertyChanged, chances are you have a memory leak.
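The AddValueChanged leak mentioned above looks like this in practice (a sketch; `myTextBlock` and `OnTextChanged` are illustrative):

```csharp
// using System.ComponentModel; using System.Windows.Controls;
var dpd = DependencyPropertyDescriptor.FromProperty(
    TextBlock.TextProperty, typeof(TextBlock));

dpd.AddValueChanged(myTextBlock, OnTextChanged);    // strong reference taken here
// ... later, e.g. when the view is unloaded:
dpd.RemoveValueChanged(myTextBlock, OnTextChanged); // must be paired, or it leaks
```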
Finally, a bonus tip. Sometimes when you have a performance problem it can be very difficult to identify what exactly is causing the issue. I suggest using an application performance profiler to help identify where these performance bottlenecks are occurring in your code base. There are a lot of profiler options available to you; some are paid, and some are free. The one I personally use the most is the Diagnostic Tools window built directly into Visual Studio 2019.
Source: https://dzone.com/articles/15-wpf-performance-tips-for-2019
I am creating a WPF mapping program which will potentially load and draw hundreds of files to the screen at any one time, and a user may want to zoom and pan this display. Some of these file types may contain thousands of points, which would most likely be connected as some kind of path. Other supported formats will include TIFF files.
Is it better for performance to have a single DrawingVisual to which all data is drawn, or should I be creating a new DrawingVisual for each file loaded?
If anyone can offer any advice on this it would be much appreciated.
You will find lots of related questions on Stack Overflow; however, not all of them mention that one of the highest-performance ways to draw large amounts of data to the screen is to use the WriteableBitmap API. I suggest taking a look at the WriteableBitmapEx open source project on CodePlex. Disclosure: I have contributed to it once, but it is not my library.
Having experimented with DrawingVisual, StreamGeometry, OnRender and Canvas, all of these fall over once you have to draw 1,000+ "objects" to the screen. There are techniques that deal with virtualizing a canvas (there's a million-item demo with a virtualized canvas), but even this is limited to roughly 1,000 items visible at one time before things slow down. WriteableBitmap lets you access a bitmap directly and draw on it (oldskool style), meaning you can draw tens of thousands of objects at speed. You are free to implement your own optimisations (multi-threading, level of detail), but note you don't get many frills with that API: you literally are doing the work yourself.
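To give a flavour of the API, here is a sketch (assuming an Image element named `img` and Bgra32 pixels) that fills a block without any unsafe code:

```csharp
// using System.Windows; using System.Windows.Controls;
// using System.Windows.Media; using System.Windows.Media.Imaging;
var wb = new WriteableBitmap(800, 600, 96, 96, PixelFormats.Bgra32, null);
img.Source = wb;

// Fill a 100x100 block with one colour; for Bgra32 each pixel is one
// little-endian int laid out as 0xAARRGGBB.
int blockW = 100, blockH = 100;
var pixels = new int[blockW * blockH];
for (int i = 0; i < pixels.Length; i++)
    pixels[i] = unchecked((int)0xFF2080FF);

// stride = bytes per row of the source array (4 bytes per pixel)
wb.WritePixels(new Int32Rect(100, 100, blockW, blockH), pixels, blockW * 4, 0);
```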
There is one caveat though. While WPF uses the CPU for tessellation and the GPU for rendering, WriteableBitmap will use the CPU for everything. The fill rate (number of pixels rendered per frame) therefore becomes the bottleneck, depending on your CPU power.
Failing that, if you really need high-performance rendering, I'd suggest taking a look at SharpDX (managed DirectX) and its interop with WPF. This will give you the highest performance, as it uses the GPU directly.
Using many small DrawingVisuals with few details rendered per visual gave better performance in my experience than fewer DrawingVisuals with more details rendered per visual. I also found that deleting all of the visuals and rendering new ones was faster than reusing existing visuals when a redraw was required. Breaking each map into a number of visuals may help performance.
As with anything performance related, conducting timing tests with your own scenarios is the best way to be sure.
I'm working on a project in which we need to summarize a substantial amount of data in the form of a heat map. This data will be kept in a database for as long as possible. At some point, we will need to store a summary in a matrix (possibly?) before we can draw the heat-map blocks to the screen. We are creating a Windows Forms application in C#.
Let's assume the heat map is going to summarize a log file for an online mapping program such as Google Maps. It will assign a color to a particular address or region based on the number of times a request was made to that region/address. It can summarize the data at differing levels of detail: each block on the heat map can summarize data for a particular address (maximum detail, therefore millions or billions of blocks), or it can summarize requests to a street, city, or country (minimum detail: few blocks, as each represents a country). Imagine that millions of requests were made for addresses. We have considered summarizing this with a database. The problem is that we need to draw so many blocks to the screen (up to billions, but usually far fewer). Let's assume this data is summarized in a database table that stores the number of hits to the larger regions. Can we draw the blocks to the window without constructing an object for each region, or even without bringing in all of the information from the db table? That's my primary concern, because if we did construct a matrix, it could be around 10 GB for a demanding request.
I'm curious to know how many blocks we can draw to the screen and what the best approach may be (e.g. Direct3D, XNA). From the above, you can see the range will vary substantially, and we expect the potential for billions of squares that need to be drawn. We will have a vertical scroll bar to scroll down quickly to see other blocks.
Overall, I'm wondering how we might accomplish this in C#. Creating the matrix for the demanding request could require around 10 gigabytes. Is there a way to draw to the screen that does not require a substantial amount of memory (i.e. without creating an object for each block)? If the results of a SQL query could be translated directly into rendered blocks on the screen, that would be ideal (i.e. no constructing objects, etc.). All we need are squares whose only property is color, and we might need to maintain a number for each block.
Note:
We are pretty sure about how we will draw the heat map (how zooming, scrolling, etc should appear to user). To clarify, I'm more concerned about how we will implement our idea. Is there a library or some method that allows us to draw this many objects without constructing a billion objects and using Gigabytes of data. Each block is essentially a group of pixels (20x20) that are the same color. I don't believe this should necessitate constructing 1 billion objects.
Thanks!
If this is really for a graphic heat map, then I agree with the comments that an image at least 780 laptop screens wide is impractical. If you have this information in a SQL(?) database somewhere, then you can write a query that partitions your results into buckets of a certain width. The database should be able to aggregate these records into 1680 (pixels-wide) buckets efficiently.
Furthermore, if your buckets are of a fixed width (yielding a fixed width heat-map image) you could pre-generate the bucket numbers for the "addresses" in your database. Indexed properly, grouping by this would be very fast.
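As a sketch of that query shape (all table and column names invented, run with plain ADO.NET):

```csharp
// using System.Data.SqlClient;
// The server collapses millions of rows into ~1680 pixel-column buckets
// before anything reaches the client.
const string sql = @"
    SELECT address_index / @BucketWidth AS bucket, SUM(hits) AS total
    FROM request_log
    GROUP BY address_index / @BucketWidth
    ORDER BY bucket;";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@BucketWidth", totalAddresses / 1680);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            bucketTotals[reader.GetInt32(0)] = reader.GetInt32(1);
}
```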
If you DO need to see a 1:1 image, you might consider only rendering a section of the image that you're scrolled to. This would significantly reduce the amount of memory necessary to store the current view. Assuming you don't need to actually view all 780 screens worth of data at 100% (especially if you couple this with the "big picture view" strategy above) then you'll save on processing too.
The aggregate function for the "big picture view" might be MAX, SUM, AVG. If these functions aren't appropriate, please explain more about the particular features you'd be looking for in the heat-map.
As far as the drawing itself goes, you don't need "objects" for each box; you just need to draw the pixels onto a graphics object.
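In Windows Forms that can be as direct as filling rectangles in OnPaint (a sketch; `bucketTotals` and `ColorFor` are invented, and the control is assumed to be a scrollable custom control):

```csharp
// using System.Drawing; using System.Windows.Forms;
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    const int block = 20;                          // 20x20 pixels per block
    int firstRow = -AutoScrollPosition.Y / block;  // topmost visible row
    int rows = ClientSize.Height / block + 1;
    int cols = ClientSize.Width / block;

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
        {
            // One fill per visible block; no per-block object ever exists.
            Color color = ColorFor(bucketTotals[firstRow + r, c]);
            using (var brush = new SolidBrush(color))
                e.Graphics.FillRectangle(brush, c * block, r * block, block, block);
        }
}
```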
I think the technique you are looking for is called "virtualization". I don't mean hardware virtualization, but the technique where you create a concrete visual object only for the items that are visible. Many grids and lists use this technique to show hundreds of thousands of items at normal speed and memory consumption. You can also reuse those visual objects while swapping in different data objects.
I would also question the necessity of displaying billions of details. You should make it similar to zooming, or aggregate the data to show only a few items and then let the user choose a specific part or piece of the data. But I guess you have that thought out.
Does the number of controls on a form affect its performance? What if the controls are marked invisible? What if several controls are visible but entirely covered by only a few controls (like a panel containing a couple of controls)?
I'm asking this from a perspective of applications like 3d modeling packages, video editing software, etc. They've got hidden panels, tabs, rollouts, animated drawers and what not.
Has anyone done any such performance tests? Is considering this worthwhile?
Yes. Outside of the drawing, each control uses its own window handle just by being initialized, so even invisible or hidden controls affect performance.
The type of control makes a difference too. Third-party or custom controls are sometimes composed of multiple controls, each having its own handle.
Usually, the number of controls is considered up front from a usability perspective, and that generally helps avoid performance issues as well.
Without doing any performance test, it's easy to say that too many controls cause performance issues:
1. Memory usage increases (UI objects are very heavy).
2. OnPaint and other message-based methods will be called (for the control, or for a parent in its inheritance hierarchy).
I need to build a high-performance WinForms data grid using Visual Studio 2005, and I'm at a loss as to where to start. I've built plenty of data grid applications, but none of those were very good when the data was constantly refreshing.
The grid is going to be roughly 100 rows by 40 columns, and each cell in the grid is going to update between 1 and 2 times a second (some cells possibly more). To me, this is the biggest drawback of the out-of-the-box data grid: the repainting isn't very efficient.
A couple of caveats:
1) No third-party vendors. This grid is the backbone of all our applications, so while XCeed or Syncfusion or whatever might get us up and running faster, we'd slam into their limitations and be hosed. I'd rather put in the extra work up front and have a grid that does exactly what we need.
2) I have access to Visual Studio 2008, so if it would be much better to start this in 2008, I can do that. If it's a toss-up, I'd like to stick with 2005.
So what's the best approach here?
I would recommend the following approach if you have many cells that are updating at different rates. Rather than trying to invalidate each cell every time its value changes, you would be better off limiting the refresh rate.
Have a timer that fires at a predefined rate, such as 4 times per second, and then each time it fires you repaint the cells that have changed since the last time around. You can then tweak the update rate in order to find the best compromise between performance and usability with some simple testing.
This has the advantage of not updating too often, which would kill your CPU performance. It batches up changes between refresh cycles, so two quick changes to a value occurring fractions of a second apart do not cause two refreshes when only the latest value is actually worth drawing.
Note this delayed drawing only applies to the rapid updates in value and does not apply to general drawing such as when the user moves the scroll bar. In that case you should draw as fast as the scroll events occur to give a nice smooth experience.
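A sketch of that scheme inside a custom grid control (hypothetical members; `GetCellBounds` is an invented helper returning a cell's client-coordinate rectangle):

```csharp
// using System.Collections.Generic; using System.Drawing;
// using System.Linq; using System.Windows.Forms;
private readonly HashSet<Point> dirtyCells = new HashSet<Point>(); // (col, row)
private readonly Timer refreshTimer = new Timer { Interval = 250 }; // 4x per second

public void StartRefreshing()
{
    refreshTimer.Tick += (s, e) => FlushDirtyCells();
    refreshTimer.Start();
}

// Called by the data feed (possibly from another thread) on every value change.
public void OnCellValueChanged(int col, int row)
{
    lock (dirtyCells) dirtyCells.Add(new Point(col, row));
}

private void FlushDirtyCells() // runs on the UI thread via the timer
{
    Point[] cells;
    lock (dirtyCells)
    {
        cells = dirtyCells.ToArray();
        dirtyCells.Clear();
    }
    foreach (var cell in cells)
        Invalidate(GetCellBounds(cell.X, cell.Y)); // repaint only what changed
}
```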
We use the Syncfusion grid control, and from what I've seen it's pretty flexible if you take the time to modify it. I don't work with the control myself; one of my co-workers does all of the grid work, but we've extended it to our needs pretty well, including custom painting.
I know this isn't exactly answering your question, but writing a control like this from scratch is always going to be much more complicated than you anticipate. Since it'll be constantly updating, I assume it's going to be databound, which will be a chore in itself, especially to make it highly performant. Then there's debugging it.
Try the grid from DevExpress or ComponentOne. I know from experience that the built-in grids are never going to be fast enough for anything but the most trivial of applications.
I was planning to build a grid control like this as a pastime, but still haven't found the time. Most of the commercial grid controls have a big memory footprint, and updating is typically an issue.
My tips would be (if you build a custom control):
1. Extend Control (not UserControl or something similar). It will give you speed without losing much.
2. In my case, I was targeting a grid that contains more data, say a million rows with some 20-100-odd columns. In such scenarios it usually makes more sense to draw everything yourself. Do not try to represent each cell with some Control (like, say, a Label or TextBox); they eat up a lot of resources (window handles, memory, etc.).
3. Go MVC.
The idea is simple: at any given time, you can display only a limited amount of data, due to screen size limits, human eye limits, etc.
So your viewport is very small even if you have a gazillion rows and columns, and the number of updates you have to do is no more than about 5 per second to be at all readable, even if the data behind the grid is being updated a gazillion times per second. Also remember that even if the text/image to be displayed per cell is huge, the user is still limited by the cell size. A sketch of this viewport idea follows below.
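A sketch of that viewport calculation (all names invented: `model` is the data store, `DrawRow` paints one row):

```csharp
// using System.Drawing;
private void DrawVisibleRows(Graphics g)
{
    int firstRow = verticalScrollValue / rowHeight;  // first visible row
    int visible = ClientSize.Height / rowHeight + 1; // rows that fit on screen

    for (int r = firstRow; r < firstRow + visible && r < totalRows; r++)
    {
        // Rows are fetched on demand; the grid never materializes them all.
        RowData row = model.GetRow(r);
        DrawRow(g, row, (r - firstRow) * rowHeight);
    }
}
```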
Caching styles (a generic word for text sizes, fonts, colors, etc.) also helps in such a scenario, depending on how many of them you will be using in your grid.
There will be a lot more work in getting the basic drawing (highlights, grid lines, boundaries, borders, etc.) done and in achieving various effects.
I don't recall exactly, but there was a C# .NET grid on SourceForge which can give you a good idea of how to start. That grid offered two options: a VirtualGrid, where the model data is not held by the grid, making it very lightweight, and a real (traditional) grid, where the data storage is owned by the grid itself (mostly creating a duplicate, but it depends on the application).
For a grid that must be super-agile in terms of updates, it might just be better to have a "VirtualGrid".
Just my thoughts