Suppose I've got something like the following:
a bunch of Data classes
a bunch of List<Data> collections covering most of those data classes
loads of List<List<List<Data>>> structures representing (at least) 3D arrays
a good few App.MyViewModel view models used across different pages because of thread-access constraints
view models that are more complex than I'd like, with tons of properties linking back to the nested lists in point 3
and, in the end, each ListView template being populated from an ObservableCollection<String> generated from one of those List<Data> collections
During the app's lifecycle those lists may be renewed many times, which I would hope recycles the previously used memory. The ListView rows/cells are created as Grids.
On small list views of up to a few tens of rows this works well and fast, without increasing memory use too much.
However, on large data sets containing thousands of rows, even scrolling the ListView sometimes crashes the app, and memory increases dramatically with each new portion of data.
So the question really is: from your own experience, what would you recommend for troubleshooting, and perhaps redesigning, this approach?
You should really look at the Xamarin Profiler.
The Xamarin Profiler has a number of instruments available for profiling: Allocations, Cycles, and the Time Profiler.
There could be so many problems that it's impossible to know where to start, and as for design, it's equally hard to suggest how to refactor your app because we don't know what you are trying to achieve. If you need to use lists, you need to use them, and there isn't much you can do about that.
However, you need to start from first principles and make sure you are doing only what you need to do: instantiating only what you need to instantiate, and keeping your XAML and UI to the minimum amount of recalculation work possible. Last of all, make sure your view models and objects are going out of scope and being garbage collected.
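One cheap way to check that last point is to hold only a WeakReference to a view model once the page is done with it, force a collection, and see whether it survives. A minimal sketch (the MyViewModel in the usage comment is a placeholder for one of your own view models):

    using System;

    static class LeakCheck
    {
        // Forces a full collection and reports whether the tracked object survived.
        // Call this after the page/view model should have gone out of scope.
        public static bool IsCollected(WeakReference weakRef)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            return !weakRef.IsAlive;
        }
    }

    // Usage sketch:
    // var vm = new MyViewModel();
    // var weak = new WeakReference(vm);
    // ... use it, navigate away, drop all strong references ...
    // vm = null;
    // Console.WriteLine(LeakCheck.IsCollected(weak)
    //     ? "collected - no leak"
    //     : "still alive - something is holding a reference");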
We have a product (a program) written in C# + WPF + XAML at our company. It is a fairly important program for us and has been installed by many of our customers. After switching to .NET Framework 4.7.2 we noticed a severe performance degradation; we then moved to 4.8, but the program still runs quite slowly, especially the visual part.
In our program we display data from a very large number of sensors (motion, temperature, light level, etc.). We constantly receive new data, process it, save it to a SQL Server 2014/2017 database, and the client programs then visualize it.
The server side and the communication layer, although complex, work well even on a fairly weak computer. But we have a very big problem with showing the data on the customers' monitors.
The program is structured as follows: the client draws where he wants to see the data. For example, on a black background he draws his factory with lines. There are sensors in various places in that factory, and he draws them in those places. Then he starts the scan and sees the data appear where he drew the labels. With little data this is hardly noticeable, but with a lot of data, moving the mouse between readings becomes sluggish: the client sees the program constantly slowing down, waits a few seconds, makes another mouse movement, and then waits again for the program to respond. If you do several things at once, the program seems to freeze. It hasn't actually stopped, but it feels as though it is about to.
I tried the debugger and CPU/RAM measurements, but that hardly helped me at all. Data is downloaded via web services from the server to the client program normally and takes up only as much memory as it needs, so it is hard to optimize that part further. But when we start showing this data to the user, everything starts to work very badly. How can we optimize the data visualization so that the user can keep working with the program smoothly? I'd be glad of any advice.
What helped a bit was enabling virtualization in the DataGrid tables. That made things a little better for users, but it is not enough; we need something else, especially for the part that is drawn and shows labels with data in different places.
In my experience WPF is not well suited to visualising large amounts of data. It's fine for creating a fancy UI, but as the number of objects increases, performance drops dramatically. I tried everything from caching to freezing objects, and I concluded that I had simply chosen the wrong technology: it doesn't utilise the GPU properly.
You can try converting to UWP; it might help.
That said, here are some tips you can also try:
Simplify your Visual Tree: A common source of performance issues is a deep and complex layout. Keep your XAML markup as simple and shallow
as possible. When UI elements are drawn onscreen, a “layout pass” is
called twice for each element (a measure pass and an arrange pass).
The layout pass is a mathematically-intensive process—the larger the
number of children in the element, the greater the number of
calculations required.
Virtualize your ItemsControls: As mentioned earlier, a complex and deep visual tree results in a larger memory footprint and slower
performance. ItemsControls usually exacerbate performance problems with deep visual trees when they are not virtualized, which means
item containers are constantly being created and destroyed for each item in the control. Instead, use the VirtualizingStackPanel as the
items host, set VirtualizingStackPanel.IsVirtualizing, and set the VirtualizationMode to Recycling in order to reuse item containers
instead of creating new ones each time.
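Those two settings are normally applied in XAML, but the same attached properties can be set from code-behind; a minimal sketch, assuming you have a reference to the ItemsControl (for example your ListView):

    using System.Windows.Controls;

    static class VirtualizationSetup
    {
        // Enables UI virtualization and container recycling on an ItemsControl,
        // equivalent to setting the attached properties in XAML.
        public static void EnableRecycling(ItemsControl itemsControl)
        {
            VirtualizingStackPanel.SetIsVirtualizing(itemsControl, true);
            VirtualizingStackPanel.SetVirtualizationMode(itemsControl, VirtualizationMode.Recycling);
        }
    }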
Favor StaticResources Over DynamicResources: StaticResources provide values for any XAML property attribute by looking up a reference to an
already defined resource. Lookup behavior for that resource is the
same as a compile-time lookup. DynamicResources will create a
temporary expression and defer lookup for resources until the
requested resource value is required. Lookup behavior for that
resource is the same as a run-time lookup, which imposes a performance
impact. Always use a StaticResource whenever possible.
Opacity on Brushes Instead of Elements: If you use a Brush to set the Fill or Stroke of an element, it is better to set the Opacity on
the Brush rather than setting the element’s Opacity property. When you
modify an element’s Opacity property, it can cause WPF to create
temporary surfaces which results in a performance hit.
Avoid Using Run to Set Text Properties: Avoid using Runs within a TextBlock, as this results in a much more performance-intensive
operation. If you are using a Run to set text properties, set those directly on the TextBlock instead.
Favor StreamGeometries over PathGeometries: The StreamGeometry object is a very lightweight alternative to a PathGeometry.
StreamGeometry is optimized for handling many PathGeometry objects. It
consumes less memory and performs much better when compared to using
many PathGeometry objects.
Use Reduced Image Sizes: If your app requires the display of smaller thumbnails, consider creating reduced-sized versions of your images.
By default, WPF will load and decode your image to its full size. This
can be the source of many performance problems if you are loading full
images and scaling them down to thumbnail sizes in controls such as an
ItemsControl. If possible, combine all images into a single image,
such as a film strip composed of multiple images.
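As a rough illustration of the reduced-size approach, WPF can decode an image straight to the target width via BitmapImage.DecodePixelWidth instead of decoding the full bitmap and scaling it down (the path argument is just a placeholder):

    using System;
    using System.Windows.Media.Imaging;

    static class Thumbnails
    {
        // Decodes the image at roughly thumbnail size instead of full resolution.
        public static BitmapImage LoadThumbnail(string path, int pixelWidth)
        {
            var bitmap = new BitmapImage();
            bitmap.BeginInit();
            bitmap.UriSource = new Uri(path, UriKind.RelativeOrAbsolute);
            bitmap.DecodePixelWidth = pixelWidth;          // decode at reduced size
            bitmap.CacheOption = BitmapCacheOption.OnLoad; // read the file once, up front
            bitmap.EndInit();
            bitmap.Freeze();                               // cheap to share, no change tracking
            return bitmap;
        }
    }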
Lower the BitmapScalingMode: By default, WPF uses a high-quality image re-sampling algorithm that can sometimes consume enough system
resources to cause frame rate degradation and make animations stutter. Instead, set the BitmapScalingMode to LowQuality to switch from a
“quality-optimized” algorithm to a “speed-optimized” algorithm.
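That mode can also be set per element from code via RenderOptions; a minimal sketch:

    using System.Windows;
    using System.Windows.Media;

    static class ScalingSetup
    {
        // Switches an element (for example an Image) to the faster,
        // lower-quality bitmap scaling algorithm.
        public static void UseFastScaling(DependencyObject element)
        {
            RenderOptions.SetBitmapScalingMode(element, BitmapScalingMode.LowQuality);
        }
    }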
Use and Freeze Freezables: A Freezable is a special type of object that has two states: unfrozen and frozen. When you freeze an object
such as a Brush or Geometry, it can no longer be modified. Freezing
objects whenever possible improves the performance of your application
and reduces its memory consumption.
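In code this is just a matter of calling Freeze() once the object is fully configured; a minimal sketch with an illustrative brush:

    using System.Windows.Media;

    static class BrushFactory
    {
        // Creates a brush once, freezes it, and lets callers reuse it everywhere.
        public static Brush CreateAccentBrush()
        {
            var brush = new SolidColorBrush(Color.FromRgb(0x33, 0x99, 0xFF));
            if (brush.CanFreeze)
            {
                brush.Freeze(); // no further change tracking; cheaper to render and safe to share
            }
            return brush;
        }
    }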
Fix your Binding Errors: Binding errors are the most common type of performance problem in WPF apps. Every time a binding error occurs,
your app takes a perf hit as it tries to resolve the binding and writes the error out to the trace log. As you can imagine, the more
binding errors you have, the bigger the performance hit your app will take. Take the time to find and fix all your binding errors. Using a
RelativeSource binding in DataTemplates is a major culprit in binding errors, as the binding is usually not resolved properly until the
DataTemplate has completed its initialization. Avoid using RelativeSource.FindAncestor at all costs. Instead, define an attached property
and use property inheritance to push values down the visual tree instead of looking up the visual tree.
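As a hedged sketch of what such an attached, inherited property can look like (the class and property names are purely illustrative):

    using System.Windows;

    // Illustrative attached property that flows down the element tree via property
    // inheritance, so children can bind to it instead of using
    // RelativeSource.FindAncestor to search up the tree.
    public static class SharedContext
    {
        public static readonly DependencyProperty OwnerProperty =
            DependencyProperty.RegisterAttached(
                "Owner",
                typeof(object),
                typeof(SharedContext),
                new FrameworkPropertyMetadata(null, FrameworkPropertyMetadataOptions.Inherits));

        public static void SetOwner(DependencyObject element, object value)
        {
            element.SetValue(OwnerProperty, value);
        }

        public static object GetOwner(DependencyObject element)
        {
            return element.GetValue(OwnerProperty);
        }
    }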
Avoid Databinding to the Label.Content Property: If you are using a Label to data bind to a String property, this will result in poor
performance. This is because each time the String source is updated,
the old string object is discarded, and a new String is created. If
the Content of the Label is simple text, replace it with a TextBlock
and bind to the Text property instead.
Bind ItemsControls to IList instead of IEnumerable: When data binding an ItemsControl to an IEnumerable, WPF will create a wrapper
of type IList which negatively impacts performance with the
creation of a second object. Instead, bind the ItemsControl directly
to an IList to avoid the overhead of the wrapper object.
Use the NeutralResourcesLanguage Attribute: Use the NeutralResourcesLanguageAttribute to tell the ResourceManager what the
neutral culture is and avoid unsuccessful satellite assembly lookups.
Load Data on Separate Threads: A very common source of performance problems, UI freezes, and apps that stop responding is how you load
your data. Make sure you are asynchronously loading your data on a
separate thread as to not overload the UI thread. Loading data on the
UI thread will result in very poor performance and an overall bad
end-user experience. Multi-threading should be something every WPF
developer is using in their applications.
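A common shape for this is async/await, with the expensive work pushed onto a thread-pool thread and only the final assignment happening back on the UI thread. A rough sketch, where SensorReading, MainViewModel and LoadSensorReadings are placeholders for your own types and loading code:

    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Placeholder types standing in for your real data and view model classes.
    public class SensorReading { public double Value { get; set; } }
    public class MainViewModel { public IList<SensorReading> Readings { get; set; } }

    public class DataLoader
    {
        // Placeholder for whatever actually reads from the web service or database.
        private IList<SensorReading> LoadSensorReadings()
        {
            return new List<SensorReading>();
        }

        // Runs the expensive load on a thread-pool thread; when awaited from the UI
        // thread, the continuation resumes there, so the final assignment is safe.
        public async Task RefreshAsync(MainViewModel viewModel)
        {
            IList<SensorReading> readings = await Task.Run(() => LoadSensorReadings());
            viewModel.Readings = readings; // keep the UI-thread work minimal
        }
    }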
Beware of Memory Leaks: Memory leaks are the number one cause of performance problems in most WPF applications. They are easy to have
but can be difficult to find. For example, using the
DependencyPropertyDescriptor.AddValueChanged can cause the WPF
framework to take a strong reference to the source of the event that
isn’t removed until you manually call
DependencyPropertyDescriptor.RemoveValueChanged. If your views or
behaviors rely on events being raised from an object or ViewModel
(such as INotifyPropertyChanged), subscribe to them weakly or make
sure you are manually unsubscribing. Also, if you are binding to
properties in a ViewModel which does not implement
INotifyPropertyChanged, chances are you have a memory leak.
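Two concrete habits help here: always pair AddValueChanged with RemoveValueChanged, and prefer the built-in weak event managers for INotifyPropertyChanged sources. A hedged sketch (VisibilityWatcher is an illustrative class, not a WPF type):

    using System;
    using System.ComponentModel;
    using System.Windows;
    using System.Windows.Controls;

    public class VisibilityWatcher : IDisposable
    {
        private readonly TextBlock _target;
        private readonly DependencyPropertyDescriptor _descriptor;

        public VisibilityWatcher(TextBlock target)
        {
            _target = target;
            _descriptor = DependencyPropertyDescriptor.FromProperty(
                UIElement.VisibilityProperty, typeof(TextBlock));
            // AddValueChanged takes a strong reference to this handler...
            _descriptor.AddValueChanged(_target, OnVisibilityChanged);
        }

        private void OnVisibilityChanged(object sender, EventArgs e)
        {
            // react to the change
        }

        public void Dispose()
        {
            // ...so it must be removed explicitly, or the watcher (and anything it
            // references) stays alive for the lifetime of the element.
            _descriptor.RemoveValueChanged(_target, OnVisibilityChanged);
        }
    }

    // Alternatively, subscribe weakly to an INotifyPropertyChanged source, e.g.:
    // PropertyChangedEventManager.AddHandler(viewModel, OnViewModelPropertyChanged, "SomeProperty");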
Finally, a bonus tip. Sometimes when you have a performance problem it
can be very difficult to identify what exactly is causing the issue. I
suggest using an application performance profiler to help identify
where these performance bottlenecks are occurring in your code base.
There are a lot of profiler options available to you. Some are paid,
and some are free. The one I personally use the most is the Diagnostic Tools built directly into Visual Studio 2019.
Source: https://dzone.com/articles/15-wpf-performance-tips-for-2019
I've read the other questions that come up when I ask about memory snapshots, but I might be too thick to really grasp it. I have a windows service that I can produce a memory leak in by doing a pretty straightforward data operation repeatedly. I've taken memory snapshots along the way, and I see that the number of roots is going up (from 2,100 after a successful start to 7,100 after 100 or so data operations). The snapshots were taken at the blue arrow marks:
Before the multiple data operations, the memory snapshot looks like this:
Afterwards, it looks like this:
We're using WCF for data transport and it would appear that Serialization is playing a part in this memory growth, but I don't know where to go from here. If I look at instances of RuntimeType+RuntimeTypeCache, the vast majority of instances look like this:
If anyone can help me figure out the next step to take, I would appreciate it immensely. We have a static instance that has a concurrent dictionary of ServiceHosts that I'm suspicious of, but I don't know how to confirm it.
EDIT:
This also seems significant and is in reference to ServiceHosts. Could we be enabling some unwise proxy generation and instance retention via this static relationship?
Sort your items by size and, within that list, watch out for your own class types to see which one is piling up. Make sure you are looking at at least a few megabytes of objects in total, so that you see a real 'pile' and not just parts of the infrastructure.
The 12,000 existing runtime types might indicate dynamically created types; perhaps a serialization assembly is being generated for each new call.
You can also call GC.Collect() after your critical function call to force a garbage collection.
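To confirm whether the operation itself is what leaks, you can force a full collection before and after repeating it and compare the managed heap size; a rough sketch (DoDataOperation is a placeholder for the WCF call you are testing):

    using System;

    static class LeakProbe
    {
        // Forces a full collection, then reports the managed heap size. Call it
        // before and after repeating the suspect operation to see whether memory climbs.
        public static long StableHeapSize()
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            return GC.GetTotalMemory(true);
        }
    }

    // Usage sketch:
    // long before = LeakProbe.StableHeapSize();
    // for (int i = 0; i < 100; i++) DoDataOperation();
    // long after = LeakProbe.StableHeapSize();
    // Console.WriteLine("Heap grew by " + (after - before) + " bytes over 100 operations");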
I have an application with around 20+ pages, and I'm creating all of them at application startup. Might this cause an out-of-memory exception in the future? Is it a better idea to create pages only as I need them?
If those pages are created only once, then the memory usage for them won't change. Any objects you create on those pages will cause memory consumption to increase.
As for your question, creating the pages at the start of the application should be fine; just be aware that you will have to create them in such a way that garbage collection does not clear them out of memory. Also make sure you don't create new instances of them each time they are displayed :)
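A simple way to guarantee a single instance per page is to hand pages out from a small cache instead of new-ing them on every navigation; a minimal sketch (the SettingsPage in the usage comment is a placeholder):

    using System;
    using System.Collections.Generic;

    // Caches one instance per page type so navigation never creates duplicates,
    // while the cache itself keeps the pages rooted so they are not collected.
    public class PageCache
    {
        private readonly Dictionary<Type, object> _pages = new Dictionary<Type, object>();

        public TPage Get<TPage>() where TPage : new()
        {
            object page;
            if (!_pages.TryGetValue(typeof(TPage), out page))
            {
                page = new TPage();            // created only on first request
                _pages[typeof(TPage)] = page;
            }
            return (TPage)page;
        }
    }

    // Usage sketch: var settingsPage = pageCache.Get<SettingsPage>();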
Background:
I have a service whose purpose in life is to provide objects to requestors - it basically gets complicated data from a database and transforms it once (a bit like a view over data) to produce a simplified record. This then services requests from other services by providing up to 100k records (depending on the nature of the request) on demand.
The idea is that the complicated transformation is done once and is cached by the service - it works out quicker than letting the database work it out each time a view is accessed and for my purposes works just fine. (I believe this is called SSOS by some)
The way data is being cached is in a list of objects which are property bags for standard .Net types. These objects have no references to anything else.
Periodically a record will change, and the cache must be updated which means that the original record must be located, thrown away and replaced.
Now, a record in the cache will have been in there for a long time and will have been promoted to Gen 2; pretty much all collections of these objects will happen in the Gen 2 phase, as they hang around for ages (on purpose).
My understanding of Gen 2 collections is that they are slow, and if most of the collectable objects live in Gen 2, the GC is going to run full collections more often.
I would like to be able to de-reference an object in the list in a way that doesn't end up triggering a full Gen2 collection... I was thinking that maybe there is a way of marking it as Gen0 and then de-referencing it before replacing it - but I don't think that is possible.
I am constrained to using .Net 4 for this and the application is a service which serves data to up to 100 clients who request full lists or changes to the list over a period of time.
Question: Can anyone suggest a way to de-reference long lived objects in a GC friendly way or perhaps another way to approach this problem?
There is no simple answer to this. If you have lots of long-lived objects, then full collections really can hurt, as I discussed here. Since a picture tells a thousand words:
Those vertical spikes are where garbage collection happens and slaughters the response times.
The way we reduced the impact of this was: don't have a gazillion long-lived objects. What we did was to change the classes to structs, which meant that the only object was the array that contained them. We were fortunate here in that the data was simple and didn't involve strings, which would of course themselves be objects. We also did some crazy fixed-size buffer work to reduce things that were previously collections, and changed what were references into indices (into the array). If you do have to use string data, try to ensure you don't have 20,000 different string instances with the same value - some kind of manual interner (a Dictionary<string,string> would suffice) can be really useful there.
Note that this needn't impact your public API, since you can always create the old class data from the struct storage - the difference is that this class will only exist briefly as a DTO - so will be collected cheaply in the next gen-0 sweep.
YMMV, but this worked well enough for us.
The problem is: you need to be really careful when working with structs; I strongly advise making them immutable.
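For what it's worth, here is a condensed, hedged sketch of the struct-array storage, manual string interner, and short-lived DTO described above (all type and field names are illustrative):

    using System.Collections.Generic;

    // Records stored as structs in one big array: the array is the only object
    // the GC has to trace, instead of one object per record.
    public struct CachedRecord
    {
        public int Id;
        public double Value;
        public int NameIndex; // index into a shared string table instead of a string reference
    }

    public class RecordStore
    {
        private readonly CachedRecord[] _records;
        private readonly List<string> _names = new List<string>();
        private readonly Dictionary<string, int> _interner = new Dictionary<string, int>();

        public RecordStore(int capacity)
        {
            _records = new CachedRecord[capacity];
        }

        // Manual interner: thousands of records with the same name share one string.
        public int InternName(string name)
        {
            int index;
            if (!_interner.TryGetValue(name, out index))
            {
                index = _names.Count;
                _names.Add(name);
                _interner[name] = index;
            }
            return index;
        }

        public void Set(int slot, int id, double value, string name)
        {
            _records[slot] = new CachedRecord { Id = id, Value = value, NameIndex = InternName(name) };
        }

        // Short-lived DTO recreated on demand for the public API; it dies in a gen-0 sweep.
        public RecordDto Get(int slot)
        {
            CachedRecord r = _records[slot];
            return new RecordDto { Id = r.Id, Value = r.Value, Name = _names[r.NameIndex] };
        }
    }

    public class RecordDto
    {
        public int Id { get; set; }
        public double Value { get; set; }
        public string Name { get; set; }
    }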
I'm an experienced programmer in a legacy (yet object-oriented) development tool and am making the switch to C#/.NET. I'm writing a small single-user app using SQL Server CE 3.5. I've read the conceptual DataSet and related documentation, and my code works.
Now I want to make sure that I'm doing it "right" and get some feedback from experienced .NET/SQL Server coders - the kind you don't get from reading the docs.
I've noticed that I have code like this in a few places:
var myTableDataTable = new MyDataSet.MyTableDataTable();
myTableTableAdapter.Fill(myTableDataTable); // fill the typed DataTable instance created above
... // other code
In a single-user app, would you typically just do this once when the app starts: instantiate a DataTable object for each table and then store a reference to it, so you only ever use that single object, which is already filled with data? That way you would only ever read the data from the database once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus caching could be counterproductive with large tables)?
For CE, it's probably a non-issue. If you were pushing this app to thousands of users all hitting a centralized DB, you might want to spend some time on optimization. In a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc.
The way to decide varies based on two main things:
1. Is the data going to be accessed constantly?
2. Is there a lot of data?
If you are constantly using the data in the tables, then load them on first use.
If you only occasionally use the data, fill the table when you need it and then discard it.
For example, if you have 10 gui screens and only use myTableDataTable on 1 of them, read it in only on that screen.
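One way to express "load on first use" is to wrap the fill in a Lazy<T>, so the database is only hit the first time the table is actually needed. A minimal sketch reusing the typed names from the question (the MyDataSetTableAdapters namespace is an assumption based on the usual designer-generated naming):

    using System;

    public class MyTableCache
    {
        private readonly Lazy<MyDataSet.MyTableDataTable> _table;

        public MyTableCache(MyDataSetTableAdapters.MyTableTableAdapter adapter)
        {
            // The Fill runs only the first time Table is accessed; every later
            // access returns the same already-filled DataTable.
            _table = new Lazy<MyDataSet.MyTableDataTable>(() =>
            {
                var table = new MyDataSet.MyTableDataTable();
                adapter.Fill(table);
                return table;
            });
        }

        public MyDataSet.MyTableDataTable Table
        {
            get { return _table.Value; }
        }
    }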
The choice really doesn't depend on C# itself. It comes down to a balance between:
How often do you use the data in your code?
Does the data ever change (and do you care if it does)?
What's the relative (time) cost of getting the data again, compared to everything else your code does?
How much value do you put on performance, versus developer effort/time (for this particular application)?
As a general rule: for production applications, where the data doesn't change often, I would probably create the DataTable once and then hold onto the reference as you mention. I would also consider putting the data in a typed collection/list/dictionary, instead of the generic DataTable class, if nothing else because it's easier to let the compiler catch my typing mistakes.
For a simple utility you run for yourself that "starts, does its thing and ends", it's probably not worth the effort.
You are asking about Windows CE. In that particular case, I would most likely do the query only once and hold onto the results. Mobile OSs have extra constraints on battery and storage that desktop software doesn't have. Basically, a mobile OS makes bullet #4 much more important.
Every time you add another retrieval call to SQL, you call into external libraries more often, which means you are probably running longer, allocating and releasing more memory more often (which adds fragmentation), and possibly causing the database to be re-read from flash memory. It's most likely a lot better to hold onto the data once you have it, assuming that you can (see bullet #2).
It's easier to figure out the answer to this question when you think about datasets as being a "session" of data. You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this:
How current does the data need to be? Do you always need the very latest, or will the database not change that frequently?
What are you using the data for? If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway.
Just how much data are we talking about? You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever.
Since you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close.
The main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a "save every so often" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.
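For the "save every so often" part, a DataSet already knows how to serialize itself with WriteXml/ReadXml; a rough sketch (the file path is up to you):

    using System.Data;
    using System.IO;

    static class DataSetPersistence
    {
        // Saves the current contents (including schema) so an abnormal exit loses little work.
        public static void Save(DataSet dataSet, string path)
        {
            dataSet.WriteXml(path, XmlWriteMode.WriteSchema);
        }

        // Restores a previous snapshot on startup, if one exists.
        public static void Load(DataSet dataSet, string path)
        {
            if (File.Exists(path))
            {
                dataSet.ReadXml(path, XmlReadMode.ReadSchema);
            }
        }
    }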