I'm writing an app which is essentially a bunch of loose XAML screens - no code-behind, just dynamically linked to a ViewModel at runtime.
When running this over a weekend on an older PC, there was a crash. Tracing and recreating it showed a memory leak in igdumd32.dll (an Intel graphics driver DLL). After a bit of investigation I wrote two simple standalone apps, each with a very simple animation in the centre of the screen: one with no effects and one with a DropShadowEffect on the animation - no other changes, literally a one-line difference from the first app (the XAML is quite verbose, otherwise I'd post it here). I ran both through Red Gate's memory profiler for 40 minutes. The first one was fine, but the second showed a notable memory leak in igdumd32.dll and in memory allocated by managed code.
Another thing I noticed is that this doesn't happen on a newer PC. Looking at the versions of igdumd32.dll, the older PC has a 2009 version (8.15.10.1930) whereas the newer (working) PC has a 2012 version (8.15.10.2639).
Has anyone else experienced this? My thought is to use special effects in XAML only when the chipset/driver can handle them, but I can't find anything on the web or on MSDN describing hardware or driver limitations for these effects (beyond the note that hardware acceleration is required for them, or rendering falls back to the CPU).
The DropShadow and Blur effects in earlier iterations of WPF were implemented in software (within WPF itself, that is) and would probably not have this memory-leak problem. Later versions (4.0 and up) changed the syntax slightly and added the ability to offload these effects to the graphics hardware. While that enhances execution speed, it also makes you dependent on the graphics driver not leaking memory. You can change your code to keep these effects in software, or, as you already plan, check the graphics hardware/driver up front and enable the effects only when it can handle them.
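If you go the detection route, here is a minimal sketch using the built-in RenderCapability API. Treating the render tier as a proxy for driver quality is an assumption on my part; it can only distinguish levels of hardware acceleration, not specific driver versions like your two igdumd32.dll builds:

    using System.Windows.Media;

    // RenderCapability.Tier stores the rendering tier in the high-order word.
    // Tier 0 = no hardware acceleration, tier 2 = full hardware acceleration.
    int renderingTier = RenderCapability.Tier >> 16;

    if (renderingTier < 2)
    {
        // Fall back to software rendering for the whole process (WPF 4.0+),
        // bypassing the graphics driver at the cost of CPU time.
        RenderOptions.ProcessRenderMode = System.Windows.Interop.RenderMode.SoftwareOnly;
    }

ProcessRenderMode is typically set once, at application startup, before any content has rendered.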
I've got an application that:
Targets C# 6
Targets .NET 4.5.2
Is a Windows Forms application
Builds in AnyCPU mode because it...
Utilizes old 32-bit libraries (unmanaged memory) that cannot be upgraded to 64-bit
Uses DevExpress, a third party control vendor
Processes many gigabytes of data daily to produce reports
After a few hours of use in jobs that have many plots, the application eventually runs out of memory. I've spent quite a long time cleaning up many leaks found in the code and have gotten the project to a state where, at worst, it may be using upwards of 400,000 KB of memory at any given time, according to performance counters. Processing this data has not yielded any issues at this point, since data is processed in jagged arrays, preventing any issues with the Large Object Heap.
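For reference, the LOH threshold is 85,000 bytes per object; a jagged layout like the following (an illustrative sketch, not my actual code) keeps every allocation under it:

    // A single 1000x1000 double array is ~8 MB and lands on the LOH.
    double[,] rectangular = new double[1000, 1000];

    // A jagged array allocates 1000 separate 8 KB rows instead; each row is
    // well under the 85,000-byte threshold, so none of them hit the LOH.
    double[][] jagged = new double[1000][];
    for (int i = 0; i < jagged.Length; i++)
        jagged[i] = new double[1000];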
Last time this happened the user was using ~305,000 KB of memory. The application is so "out of memory" that the error dialog cannot even draw the error icon in the MessageBox that comes up; the space where the icon would usually be is all black.
So far I've done the following to clean this up:
Windows Forms utilize the Disposed event to ensure that resources are cleaned up; Dispose is called manually when required
Business objects utilize IDisposable to remove references
Verified cleanup using ANTS Memory Profiler and SciTech Memory Profiler.
The low memory usage suggests this is not the case, but I wanted to see if the profilers showed anything helpful; they did not
Utilized the GCSettings.LargeObjectHeapCompactionMode property to remove any fragmentation left in the Large Object Heap (LOH) by data processing (see the sketch after this list)
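For reference, the compaction is triggered roughly like this; a blocking full collection is required for the setting to take effect:

    using System;
    using System.Runtime;

    // Request a one-time LOH compaction on the next blocking full collection
    // (.NET 4.5.1+). The setting resets itself to Default afterwards.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
    GC.Collect();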
Nearly every article that I've used to get to this point suggests that "out of memory" actually means out of contiguous address space, and given the amount that's in use, I agree with this. I'm not sure what to do at this point, since from what I understand (and am probably very wrong about), the garbage collector clears things up to make room as the process moves along, with the exception of the LOH, which is now compacted manually using the LargeObjectHeapCompactionMode property introduced in .NET 4.5.1.
What am I missing here? I cannot build for 64-bit, because the old 32-bit libraries contain proprietary algorithms that we have no hope of reproducing in a 64-bit version. Are there any modes in these profilers I should be using to identify exactly what is growing out of control here?
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?
Nearly every article that I've used to get to this point suggests that "out of memory" actually means out of contiguous address space, and given the amount that's in use, I agree with this.
This is a reasonable hypothesis, but even reasonable hypotheses can be wrong. Yours probably is wrong. What should you do?
Test it with science. That is, look for evidence that falsifies your hypothesis: assume the problem is anything else, and let the evidence you gather force you to the conclusion that your hypothesis is not false.
So:
at the point where your application runs out of memory, is it actually out of contiguous free pages of the necessary size? It sure sounds like your observations do not indicate that this is true, so the hypothesis is probably false.
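One way to gather that evidence is to walk the process's address space with VirtualQuery and log the largest free region at the moment of failure. A rough sketch, assuming a 32-bit process (the struct layout and scan range below are the 32-bit ones):

    using System;
    using System.Runtime.InteropServices;

    static class AddressSpace
    {
        [StructLayout(LayoutKind.Sequential)]
        struct MEMORY_BASIC_INFORMATION
        {
            public IntPtr BaseAddress;
            public IntPtr AllocationBase;
            public uint AllocationProtect;
            public IntPtr RegionSize;
            public uint State;          // MEM_FREE == 0x10000
            public uint Protect;
            public uint Type;
        }

        [DllImport("kernel32.dll")]
        static extern IntPtr VirtualQuery(IntPtr lpAddress,
            out MEMORY_BASIC_INFORMATION lpBuffer, IntPtr dwLength);

        const uint MEM_FREE = 0x10000;

        // Scans the user-mode address range and returns the size in bytes
        // of the largest free (unreserved) region.
        public static long LargestFreeBlock()
        {
            long largest = 0;
            long address = 0x10000; // skip the NULL-guard pages at the bottom
            while (address < 0x7FFE0000)
            {
                MEMORY_BASIC_INFORMATION mbi;
                if (VirtualQuery((IntPtr)address, out mbi,
                    (IntPtr)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) == IntPtr.Zero)
                    break;
                if (mbi.State == MEM_FREE)
                    largest = Math.Max(largest, (long)mbi.RegionSize);
                address = (long)mbi.BaseAddress + (long)mbi.RegionSize;
            }
            return largest;
        }
    }

If LargestFreeBlock() still reports a healthy number when the error dialog appears, address-space fragmentation is not your problem.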
What is other evidence that the hypothesis might be false?
"After a few hours of use in jobs that have many plots, the application eventually runs out of memory."
"Uses DevExpress, a third party control vendor"
"the error dialog cannot even draw the error icon in the MessageBox"
None of this sounds like an out of memory problem. This sounds like a third party control library leaking OS handles for graphics objects. Unfortunately, such leaks usually surface as "out of memory" errors and not "out of handles" errors.
So, that's a new hypothesis. Look for evidence for and against this hypothesis too. You're doing a good job by using a memory profiler. Use a handle profiler next.
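As a quick first check before installing a dedicated handle profiler, GetGuiResources reports the GDI and USER handle counts for a process; watch them climb while plots are drawn. A minimal sketch, not a substitute for a real profiler:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class HandleWatch
    {
        [DllImport("user32.dll")]
        static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

        const uint GR_GDIOBJECTS = 0;   // GDI handles: pens, brushes, bitmaps, DCs
        const uint GR_USEROBJECTS = 1;  // USER handles: windows, menus, cursors

        public static void Dump()
        {
            using (Process p = Process.GetCurrentProcess())
            {
                Console.WriteLine("GDI: {0}  USER: {1}  kernel handles: {2}",
                    GetGuiResources(p.Handle, GR_GDIOBJECTS),
                    GetGuiResources(p.Handle, GR_USEROBJECTS),
                    p.HandleCount);
            }
        }
    }

By default Windows caps a process at 10,000 GDI handles; a process that hits the cap fails to draw in exactly the way described above (the black space where the MessageBox icon should be).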
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?
Nope. The GC does a good job of cleaning up managed memory; lots of applications have no problem running forever without leaking.
I am working on a C# WPF application which uses pixel data from many images to process one image.
It stores every image as a System.Drawing.Bitmap, locked into memory.
The user is able to open any number of images.
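The locking looks roughly like this (an illustrative sketch with a placeholder path, not my exact code):

    using System.Drawing;
    using System.Drawing.Imaging;

    Bitmap bmp = new Bitmap(@"C:\images\input.png"); // placeholder path
    // Lock the pixel buffer so it can be read directly during processing.
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly,
        PixelFormat.Format32bppArgb);
    // ... read pixels through data.Scan0 / data.Stride ...
    bmp.UnlockBits(data);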
The question is: what should normally happen when the user opens so many images that memory fills up during processing?
On my Windows 8.1 computer, when this happens, I see in Task Manager that memory usage climbs higher and higher, the application slows down, freezes for a minute, and then exits.
However, on my Windows 8.1 (non-RT) tablet, when this happens, I see in Task Manager that memory usage climbs, then suddenly drops, then climbs again, and so on, two or three times (this is very strange to me, because I think all images should be kept in memory and only released when no longer needed); the speed is normal and there is no freeze, but an AccessViolationException occurs.
So I would like to know whether these behaviors are normal or not, and if not, what the normal behavior is and why it isn't happening for me.
C# is not a good language for memory-hungry applications. So, as a suggestion, I would say:
Validate whether you really need all the images in memory simultaneously in order to process them, or whether only two of them, or some subset, need to be in memory at once.
If the answer is yes, you may look at memory-mapped files (see the sketch after this list).
If the answer is no, rearchitect your code.
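A minimal sketch of the memory-mapped approach; the file name and view size here are placeholders:

    using System.IO;
    using System.IO.MemoryMappedFiles;

    // Map a large raw-pixel file into the address space without reading it
    // all into managed memory; the OS pages data in and out on demand.
    using (var mmf = MemoryMappedFile.CreateFromFile("pixels.raw", FileMode.Open))
    using (var accessor = mmf.CreateViewAccessor(0, 64 * 1024))
    {
        byte firstByte = accessor.ReadByte(0);
        // ... process this 64 KB window, then open a view over the next chunk ...
    }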
To answer your question: no, it is not normal behaviour, and the only right way to deal with memory consumption that leads to undefined application behavior is to fix the application's architecture.
Today, while experimenting with memory consumption, I discovered a strange thing by myself. I can't find any documentation on it anywhere, but I'm sure the expert developers know about what I want to discuss here.
The thing is... when you compile a default WinForms project in VB (or C#) and move the mouse over the form, that action causes memory consumption to increase by about 8-16 KB per second...
The most important thing is that this memory is never collected/freed!
So the longer you move the mouse over the form, the more RAM is consumed, and it never comes back down, thereby possibly causing a StackOverflow error eventually, and that's the main reason for my concern...
I have a WinForms application that needs to stay running for hours, with the mouse being moved over the app from one point to another every second, so I need a way to avoid this strange memory-consumption problem, which could by itself produce a StackOverflow error.
I've tested the same thing in a Java application, and there things get dramatic! If you move the mouse over an empty window you can see memory consumption increase by MBs per second, instead of the few KBs per second in VB/C#, and as with a VB/C# form, that memory never goes down; there's no way back. What people say about Java and that language's memory consumption seems to be true... I think it sucks.
Then, to run the same test in another language, I chose C++, since it's the other important one. I don't have a C/C++ IDE to compile with, so what I did was pick some programs written in C/C++, for example Winamp, and this time the result is... NOTHING HAPPENS when moving the mouse over the C/C++ apps! Memory consumption does not increase; absolutely zero increase.
I've run this experiment with a default Windows Forms application (an empty Form1 class) in both C# and VB, though I work only with VB. I've used .NET Framework 4.0 and 4.5, on Windows 8 x64.
Can some expert developer help me understand all of these paranormal things?
· Why does memory go up by 8-16 KB each second when moving the mouse over a VB/C# WinForm?
· Why does that memory never go down again?
· Why does the same problem not happen in C/C++ apps?
(I understand C++ does not run on the same engine (Framework), but anyway... I don't know if that's the reason.)
And the most important question...
· Can I prevent that memory increase when moving the mouse over the form, maybe by overriding some native methods? Does a way to avoid it exist?
UPDATE:
The way I've measured the memory consumption is simply by watching the process's memory in TaskManager.exe.
The reason I said "the memory is never collected" is that when moving the mouse over the form, the memory counter in Task Manager never goes down.
UPDATE 2
I uploaded a video demonstrating the problem, so you can see it with your own eyes! ...I'm not crazy.
http://www.youtube.com/watch?v=sBxicL_x9HQ&feature=youtu.be
Why does memory go up by 8-16 KB each second when moving the mouse over a VB/C# WinForm?
Windows sends messages (WM_MOUSEMOVE and friends) whenever the mouse moves, and these get processed by the form.
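You can watch these messages arrive by overriding WndProc on the form; a minimal sketch (the counter is just for illustration):

    using System.Windows.Forms;

    public class ProbeForm : Form
    {
        const int WM_MOUSEMOVE = 0x0200;
        static int mouseMoveCount;

        protected override void WndProc(ref Message m)
        {
            // Every movement over the form delivers a WM_MOUSEMOVE; dispatching
            // it allocates small short-lived objects (MouseEventArgs and the
            // like) that the GC reclaims later, in batches.
            if (m.Msg == WM_MOUSEMOVE)
                mouseMoveCount++; // correlate this with the memory growth
            base.WndProc(ref m);
        }
    }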
Why does that memory never go down again?
It will. Eventually, you'll see your memory settle down. In C# and VB.NET, the garbage collector doesn't immediately clean up memory (by design) but lets usage grow, and cleans up as needed. In general, you'll see .NET applications grow in their memory usage, then drop dramatically, then grow again, then drop, and so on. If you have a lot of memory in your system, the drops happen infrequently, since a garbage collection is expensive and there's no disadvantage to using memory that isn't needed elsewhere.
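If you want to prove to yourself that the memory really is collectable, force a full collection and compare; this is purely for diagnosis, not something to call in production code:

    using System;

    long before = GC.GetTotalMemory(false);

    // Force a blocking full collection, run finalizers, then collect again so
    // objects freed by those finalizers are reclaimed in the same measurement.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    long after = GC.GetTotalMemory(true);
    Console.WriteLine("Managed heap: {0:N0} -> {1:N0} bytes", before, after);

The number should drop back down after a burst of mouse movement, which shows the growth you see in Task Manager is ordinary deferred garbage, not a leak.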
I have a complex project using the Silverlight Toolkit's ListBoxDragDropTarget for drag-drop operations, and it is maxing out the CPU. I tried to reproduce the issue in a small sample project, but there it works fine. The problem persists when I remove our custom styles and all other controls from the page, but the page is hosted in another page's ScrollViewer.
"EnableRedrawRegions" shows that the screen gets redrawn on every frame. My question is this: How can I track down the cause of this constant redrawing?
I have used XPerf to help track down performance issues related to redrawing in Silverlight. It is not completely straightforward or an easy process, but it can help point you in the right direction to where your problems are.
I started with a great tutorial by Seema about using the XPerf command-line tool to profile CPU usage in a Silverlight app. You basically load up your app, start sampling with XPerf, perform your CPU-intensive operations, then stop sampling and analyze the profile XPerf generates. When you look at the XPerf charts you can filter by a specific process (such as iexplore.exe or your browser) to see its total % CPU. You can then select a specific slice of time in the profile and drill down to see which functions from which DLLs are taking the most CPU cycles. If you point XPerf at Microsoft's symbol server you should get the specific names of the functions where the app is spending most of its time.
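The capture itself is only a couple of commands; the exact kernel flags vary a little between Windows Performance Toolkit versions, so treat these as approximate:

    set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
    xperf -on PROC_THREAD+LOADER+PROFILE -stackwalk Profile
    rem ... exercise the Silverlight app in the browser ...
    xperf -d trace.etl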
For a Silverlight app it's most important to look at what's going on in agcore.dll, npctrl.dll, and coreclr.dll. If your performance problems are related to redrawing, most of the CPU time is likely spent in agcore.dll since that does most of the graphics related work for Silverlight. You can then drill into that and see the specific functions in agcore.dll that are getting called most often during your sample time.
I understand it is kind of an annoying way to debug, since you can only really see what is going on in the core Silverlight functions, but it can help you figure out what is happening. In my case I was able to see that most of the time was spent calculating drop shadows in agcore.dll. I then figured out that I stupidly had some content inside a drop-shadow effect that was changing many times a second, causing constant recalculation and redrawing of the entire drop-shadow effect.
Once you identify your redrawing issues, you might want to look into GPU acceleration with bitmap caching if you haven't already. That will offload some of the redrawing to the GPU and save you some CPU cycles.
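Enabling the cache is a one-liner per element (on top of setting the enableGPUAcceleration param on the plugin in the host page); the element name here is hypothetical:

    using System.Windows.Media;

    // Cache this element's rendered output as a bitmap on the GPU so it is
    // composited on each frame instead of fully redrawn (Silverlight 3+).
    dropShadowHost.CacheMode = new BitmapCache();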
In my Mono (C#) project, which is meant to be cross-platform, I am using GTK for the UI. However, one thing I noticed is that on my netbook running Arch Linux the performance is really speedy, so events such as mouse hover and redrawing of widgets are really fast.
Compare that to Windows 7 on a dual-core CPU, where the performance is really, really weak. This perplexes me.
Am I doing something wrong that warrants this difference in performance between OSes?
What are some ways I can optimize GTK on Windows? It's really bad when a hover event takes around 0.5 seconds to kick in, whereas it's almost immediate on a weak(er) netbook running Linux.
My code is here for the GUI layer: http://code.google.com/p/subsynct/source/browse/branches/dev/subsync#subsync/GUI
Thanks!
The real problem is with the graphics library GTK uses: Cairo. You are right in saying that GTK performs a lot better on Linux and other operating systems than on Windows, and that suggests the problem isn't with the entire Cairo library; it is in the Win32 backend of Cairo. According to the backend info in the Cairo docs, Cairo uses xlib, and in some cases cairo-gl (think customized OpenGL), on Linux and other platforms, while on Windows it uses Win32 GDI, which is, after all, a bit slow and outdated (not to mention completely software-rendered).
Still, even this doesn't completely account for Gtk's poor performance on Windows. Another problem may be that instead of using native widgets, Gtk prefers to draw its own widgets, which look almost the same on all platforms. On Windows, however, it also tries to emulate the native widgets using LibWimp to improve the native look and feel, and this extra Windows-only step may add further overhead. To see this for yourself, try deleting (or renaming) libwimp.dll in the GIMP directory; GIMP runs a lot faster after that (though it looks a little less native).
There are also other, smaller factors that may or may not affect Gtk's performance on Windows, like the fact that GTK carries an extra runtime of some 12-15 DLLs, compared to the 1-2 of other toolkits; dynamically linking the entire Gtk runtime can greatly increase startup time. There is also the fact that Gtk uses a lot of other libraries, like GLib, Pango, and of course Cairo, and writing glue code for these libraries adds a lot of overhead, sometimes with yet another library like Gdk involved.
To optimise Gtk you may try changing the backend of Cairo (difficult, unrecommended, and requiring another ton of glue code) or stop using LibWimp (this will make Gtk look less native). But overall I don't think GTK is that slow; I've never personally needed to use any optimizations, even though I have used WinApi in the past too.
I would guess that the performance problems are in Cairo. I suggest you use gtkparasite on Linux to see where and when parts of your app are being redrawn, and optimize that.
You could also use the free CLR Profiler from Microsoft on Windows to find the hotspots in your app.