We are currently reworking a WindowsForms application in WPF.
The software is quite large and it will take years to finish, so we have a hybrid system that displays pure WPF for the new panels and hosts legacy WindowsForms elements in WindowsFormsHost controls.
We use Syncfusion's WPF Docking Manager to show these pages in tabs (not sure whether this info is relevant).
We have spent quite a lot of time tracking down memory leaks (using JetBrains dotMemory), but we still run out of memory after opening and closing nearly 100 pages containing WindowsFormsHost controls.
This memory leak is quite strange: as you can see in the memory profiling, the problem seems to lie in unmanaged memory.
(dotMemory profiling screenshot)
The WindowsFormsHost instances seem to be disposed correctly, as does their Child content.
As suggested in WPF WindowsFormsHost memory leak, we wrap each WindowsFormsHost in a Grid that we clear when disposing:
public override void Dispose()
{
    if (this.Content is Grid grid && grid.Children.Count == 1)
    {
        if (grid.Children[0] is KWFHost wfh)
        {
            wfh.Child.SizeChanged -= ControlSizeChanged;
            wfh.Dispose();
        }
        grid.Children.Clear();
    }
    base.Dispose();
}
and
public class KWFHost : WindowsFormsHost
{
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (this.Child is IDisposable disposable)
            {
                disposable.Dispose();
            }
            this.Child = null;
        }
        base.Dispose(disposing); // pass the flag through instead of hard-coding true
    }
}
We suspect that the hosting causes the leak, because in dotMemory's memory allocation view we can see this:
(memory allocation screenshot)
Is there any known issue with the WindowsFormsHosts that could explain this? Or a way for us to isolate the source of the problem?
Edit: Here is the code that adds the Grid and the WindowsFormsHost:
public void SetContent(System.Windows.Forms.Control control)
{
var host = new KWFHost();
host.Child = control;
control.SizeChanged += ControlSizeChanged;
var grid = new Grid();
grid.Children.Add(host);
this.Content = grid;
}
I finally figured it out:
After a bit more digging, I found that the WindowsFormsHost instances were being kept alive.
dotMemory gives me two reasons for that.
The first one: System.Windows.Interop.HwndSourceKeyboardInputSite. For a reason I can't explain, the HwndSource's ChildKeyboardInputSinks collection still holds a reference to the WindowsFormsHost. If someone knows why, I'd be interested in hearing the reason.
I don't know if there is a better way to get rid of this reference, but here's the code I wrote:
private void CleanHwnd(KWFHost wfh) // my subclass of WindowsFormsHost
{
    var hwndSource = PresentationSource.FromVisual(wfh) as HwndSource;
    if (hwndSource == null)
    {
        return;
    }
    // Isolate the right element to remove based on the information I have.
    IKeyboardInputSink elementToRemove = hwndSource.ChildKeyboardInputSinks
        .FirstOrDefault(c => c is KWFHost h && h.Parent is Grid g
            && g.Parent is KWindowsFormsHost fh && Equals(fh.TabName, TabName));
    if (elementToRemove == null)
    {
        return;
    }
    // CriticalUnregisterKeyboardInputSink removes the reference,
    // but it is protected, so I call it through reflection.
    var mi = typeof(HwndSource).GetMethod("CriticalUnregisterKeyboardInputSink",
        BindingFlags.NonPublic | BindingFlags.Instance);
    mi.Invoke(hwndSource, new object[] { elementToRemove.KeyboardInputSite });
}
I execute this method just before disposing the WindowsFormsHost.
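Wired into the grid-clearing Dispose method from the question, the call order would look roughly like this (a sketch reusing the names from the earlier snippets):

```csharp
public override void Dispose()
{
    if (this.Content is Grid grid && grid.Children.Count == 1
        && grid.Children[0] is KWFHost wfh)
    {
        wfh.Child.SizeChanged -= ControlSizeChanged;
        CleanHwnd(wfh);   // drop the HwndSource's keyboard-input-sink reference first
        wfh.Dispose();    // then dispose the host itself
        grid.Children.Clear();
    }
    base.Dispose();
}
```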
The second reason was RootSourceRelationManager.
This one gave me a headache, so I ran a "Full" dotMemory profiling session (before, it was a sampled one).
The result was strange: the memory leak disappeared.
The full profiling has to be launched from dotMemory rather than from Visual Studio. That was the only difference.
After some research I found that Microsoft.VisualStudio.DesignTools.WpfTap was involved.
So it seems (I can't be sure) that running the application from Visual Studio causes a memory leak, which is not that serious but a good thing to know.
In the end, after releasing a new test version of the software with the clean-up method, I no longer have a memory leak.
Related
I was wondering if there is a more efficient way of opening a fresh window in WPF than the one presented in the code below:
WindowConfigureDatabase windowConfigureDatabase;
private void ButtonConfigureDatabase_Click(object sender, RibbonControlEventArgs e)
{
if (windowConfigureDatabase == null)
{
windowConfigureDatabase = new WindowConfigureDatabase();
}
windowConfigureDatabase.Clear();
windowConfigureDatabase.Show();
windowConfigureDatabase.WindowState = WindowState.Normal;
}
Where windowConfigureDatabase is the new window I want to open. windowConfigureDatabase.Clear(); just resets all the values to default; there aren't many of them to reset. I was wondering whether this is the proper way of opening new windows in WPF. The other path I was thinking of was simply creating a new window on each button click (that way I don't have to clear the values each time), but I'm afraid of allocating too much memory if a user opens and closes the window many times, as I'm not quite sure whether the garbage collector picks the window up on close.
So basically my question is: does the garbage collector pick my windows up after I close them, during the Closing/Closed event? If not, what would be the proper way of managing the window's memory manually? Would adding a
windowConfigureDatabase = null;
on the Closed/OnClosing event do well?
does the garbage collector pick my windows up after I close them
during Closing/Closed event?
Yes, if unreachable. Read up on this for a better idea.
Would adding a
windowConfigureDatabase = null
on Closed/OnClosing event do well?
Yes. Failing to do this will prevent the window from being garbage collected until windowConfigureDatabase is overwritten or the object containing it is collected.
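A minimal sketch of that pattern, using the names from the question (the Closed handler clears the field, so the next click creates a fresh window):

```csharp
WindowConfigureDatabase windowConfigureDatabase;

private void ButtonConfigureDatabase_Click(object sender, RibbonControlEventArgs e)
{
    if (windowConfigureDatabase == null)
    {
        windowConfigureDatabase = new WindowConfigureDatabase();
        // Drop the reference when the window closes so the GC can reclaim it.
        windowConfigureDatabase.Closed += (s, args) => windowConfigureDatabase = null;
    }
    windowConfigureDatabase.Clear();
    windowConfigureDatabase.Show();
    windowConfigureDatabase.WindowState = WindowState.Normal;
}
```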
The memory used by a window depends on its dimensions and how much memory it allocates to do what it needs to do. You generally don't need to worry about this unless you're creating tons of windows (~30+) and/or handling large volumes of data.
The fastest way to allocate is to allocate up front (ideally at startup) and reuse when possible. Fortunately, with windows this is relatively easy. The idea is to hide instead of close, and only close when truly no longer needed.
Like this:
// In each window class or as a base class:
private bool isClosable = false;
protected override void OnClosing(CancelEventArgs args)
{
// Prevent closing until allowed.
if (!isClosable) {
args.Cancel = true;
Hide();
}
base.OnClosing(args);
}
// Call this when you want to destroy the window like normal.
public void ForceClose()
{
isClosable = true;
Close();
}
I am experiencing a Flex form blackout issue in my project (a C# desktop app hosting Flex forms) after opening multiple child windows containing Flex forms. While investigating this issue with a memory profiler, I found it is caused by memory leaks: when closing child windows, the memory is not released completely, even after waiting a few minutes.
To narrow down the issue, I created a very simple, lightweight desktop app that launches a child WinForm with a blank Flex form from the parent form's button click. When closing the child window, I properly dispose of the objects (axShockwaveFlashPlayer and proxy) and handles, yet the memory keeps increasing with each child window and remains at that level even after closing all the child windows. I have also tried disposing of objects on the Flex side while closing the child window, but no luck.
As per the DebugDiag profiler analysis report, \Flash32_22_0_0_209.ocx is responsible for 138.44 MB with a 95% leak probability.
However, I don't find any option to release that memory while closing a child form.
Can anyone suggest what else can be done to resolve this issue?
I am using Flash Player version 22.0.0.209. I appreciate your help here.
As per the VMMap tool's memory analysis:
If I open 10 child windows, memory increases from 400 to 700 MB, and when I close all the child windows it comes down to only 600 MB and remains constant there.
Please note that committed memory always comes down to where it started.
The only concern is with "Private Data", which never comes down. I am not sure what Private Data is, or what we can do from code to release that memory.
Please find the code below for reference:
//Child form code start
public Childwindow1(string flexURLPath)
{
InitializeComponent();
this.Closing += Childwindow1_Closing;
host = new WindowsFormsHost();
player = new FlashAxControl();
_flexURL = flexURLPath;
host.Child = player;
this.grdChildWindow1.Children.Add(host);
player.LoadMovie(_flexURL); //call load movie method of user control to load
}
//While closing disposing objects
void Childwindow1_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
host.Dispose();
player.Dispose();
host = null;
player = null;
}
//child form code ends here
//User Control code
//User Control - LoadMovie Method
public void LoadMovie(string strPath)
{
axShockwaveFlash.LoadMovie(0, strPath);
}
//user control - Disposing object - AXshockwave and calling GC
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
components = null;
}
if(disposing && (axShockwaveFlash != null))
{
Proxy.ExternalInterfaceCall -= ProxyExternalInterfaceCall;
Proxy.Dispose();
Proxy = null;
axShockwaveFlash.Dispose();
axShockwaveFlash = null;
}
base.Dispose(disposing);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
GC.WaitForPendingFinalizers();
}
I'm running into an issue when using my V8Engine instance: it appears to have a small memory leak, and disposing of it, as well as forcing garbage collection, doesn't seem to help much. It will eventually throw an AccessViolationException on V8Engine local_m_engine = new V8Engine(), claiming Fatal error in heap setup, Allocation failed - process out of memory and Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Monitoring the program's memory usage through Task Manager while it runs confirms that it is leaking memory, around 1000 KB every couple of seconds, I think. I suspect it is the variables declared within the executed script not being collected, or something to do with the GlobalObject.SetProperty method. Calling V8Engine.ForceV8GarbageCollection(), V8Engine.Dispose(), and even GC.WaitForPendingFinalizers() and GC.Collect() doesn't prevent the memory being leaked (although it is worth noting that it seems to leak more slowly with these calls in place, and I know I shouldn't invoke the GC manually, but it was there as a last resort to see if it would fix the issue).
A tangential issue that could also provide a solution is the inability to clear the execution context of a V8Engine. I have to dispose of and re-instantiate the engine for each script, which I believe is where the memory leak is happening; otherwise I run into issues where variables have already been declared, causing V8Engine.Execute() to throw an exception saying as much.
I can definitely confirm that the memory leak has something to do with the V8Engine implementation, as the older version of this program that uses Microsoft.JScript has no such memory leak, and its memory usage remains consistent.
The affected code is as follows;
//Create the V8Engine and dispose when done
using (V8Engine local_m_engine = new V8Engine())
{
//Set the Lookup instance as a global object so that the JS code in the V8.Net wrapper can access it
local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);
//Execute the script
result = local_m_engine.Execute(script);
//Please just clear everything I can't cope.
local_m_engine.ForceV8GarbageCollection();
local_m_engine.GlobalObject.Dispose();
}
EDIT:
Not sure how useful this will be, but I've been running some memory profiling tools on it and have learnt that after running an isolated version of the original code, my software ends up with a large number of IndexedObjectList instances full of null values (see here: http://imgur.com/a/bll5K). There appears to be one instance of each class for each V8Engine instance that is made, but they aren't being disposed of or freed. I can't help but feel I'm missing a command or something here.
The code I'm using to test and recreate the memory leak that the above implementation causes is as follows:
using System;
using V8.Net;
namespace V8DotNetMemoryTest
{
class Program
{
static void Main(string[] args)
{
string script = @"var math1 = 5;
                  var math2 = 10;
                  result = 5 + 10;";
Handle result;
int i = 0;
V8Engine local_m_engine;
while (true)
{
//Create the V8Engine and dispose when done
local_m_engine = new V8Engine();
//Set the Lookup instance as a global object so that the JS code in the V8.Net wrapper can access it
//local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);
//Execute the script
result = local_m_engine.Execute(script);
Console.WriteLine(i++);
result.ReleaseManagedObject();
result.Dispose();
local_m_engine.Dispose();
GC.WaitForPendingFinalizers();
GC.Collect();
local_m_engine = null;
}
}
}
}
Sorry, I had no idea this question existed. Make sure to use the v8.net tag.
Your problem is this line:
result = local_m_engine.Execute(script);
The result returned is never disposed. ;) You are responsible for returned handles. Those handles are struct values, not class objects.
You could also do using (result = local_m_engine.Execute(script)) { ... }
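Applied to the test loop from the question, that would look roughly like this (a sketch; it assumes V8.Net's Handle implements IDisposable, as the using suggestion implies):

```csharp
int i = 0;
while (true)
{
    using (var engine = new V8Engine())
    using (Handle result = engine.Execute(script))
    {
        // Both the engine and the returned handle are disposed
        // at the end of each iteration, so nothing accumulates.
        Console.WriteLine(i++);
    }
}
```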
There is a new version released. I am finally resurrecting this project again as I will need it for the FlowScript VPL project - and it now supports .Net Standard as well for cross-platform support!
I am working on a .NET 3.5 application which uses SharpDX to render tiled 2D images.
Textures (Texture2D) are loaded into a cache on-demand, and are created in the managed pool.
Textures are disposed of when no longer required, and I have verified that Dispose() is called correctly. SharpDX object tracking indicates that there are no textures being finalized.
The issue is that large amounts of unmanaged heap memory used by the textures continues to be reserved after disposal. This memory is reused when loading a new texture, so memory is not being leaked.
However, another part of the application also requires significant chunks of memory to process new images. Because these heaps are still present, even though the textures have been disposed, there is not enough contiguous memory to load another image (can be hundreds of MB).
If I allocate unmanaged memory using AllocHGlobal, the resulting heap memory completely vanishes again after calling FreeHGlobal.
VMMap shows the unmanaged heap (red) after heavy usage of the application.
We can see here that unmanaged heap accounts for ~380MB, even though only ~20MB is actually committed at this point.
Long term, the application is being ported to 64-bit. However, this is not trivial due to unmanaged dependencies. Also, not all users are on 64-bit machines.
EDIT: I've put together a demonstration of the issue: create a WinForms application and install SharpDX 2.6.3 via NuGet.
Form1.cs:
using System.Collections.Generic;
using System.Diagnostics;
using System.Windows.Forms;
using SharpDX.Direct3D9;
namespace SharpDXRepro {
public partial class Form1 : Form {
private readonly SharpDXRenderer renderer;
private readonly List<Texture> textures = new List<Texture>();
public Form1() {
InitializeComponent();
renderer = new SharpDXRenderer(this);
Debugger.Break(); // Check VMMap here
LoadTextures();
Debugger.Break(); // Check VMMap here
DisposeAllTextures();
Debugger.Break(); // Check VMMap here
renderer.Dispose();
Debugger.Break(); // Check VMMap here
}
private void LoadTextures() {
for (int i = 0; i < 1000; i++) {
textures.Add(renderer.LoadTextureFromFile(@"D:\Image256x256.jpg"));
}
}
private void DisposeAllTextures() {
foreach (var texture in textures.ToArray()) {
texture.Dispose();
textures.Remove(texture);
}
}
}
}
SharpDXRenderer.cs:
using System;
using System.IO; // needed for the FileStream used in LoadTextureFromFile
using System.Linq;
using System.Windows.Forms;
using SharpDX.Direct3D9;
namespace SharpDXRepro {
public class SharpDXRenderer : IDisposable {
private readonly Control parentControl;
private Direct3D direct3d;
private Device device;
private DeviceType deviceType = DeviceType.Hardware;
private PresentParameters presentParameters;
private CreateFlags createFlags = CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded;
public SharpDXRenderer(Control parentControl) {
this.parentControl = parentControl;
InitialiseDevice();
}
public void InitialiseDevice() {
direct3d = new Direct3D();
AdapterInformation defaultAdapter = direct3d.Adapters.First();
presentParameters = new PresentParameters {
Windowed = true,
EnableAutoDepthStencil = true,
AutoDepthStencilFormat = Format.D16,
SwapEffect = SwapEffect.Discard,
PresentationInterval = PresentInterval.One,
BackBufferWidth = parentControl.ClientSize.Width,
BackBufferHeight = parentControl.ClientSize.Height,
BackBufferCount = 1,
BackBufferFormat = defaultAdapter.CurrentDisplayMode.Format,
};
device = new Device(direct3d, direct3d.Adapters[0].Adapter, deviceType,
parentControl.Handle, createFlags, presentParameters);
}
public Texture LoadTextureFromFile(string filename) {
using (var stream = new FileStream(filename, FileMode.Open, FileAccess.Read)) {
return Texture.FromStream(device, stream, 0, 0, 1, Usage.None, Format.Unknown, Pool.Managed, Filter.Point, Filter.None, 0);
}
}
public void Dispose() {
if (device != null) {
device.Dispose();
device = null;
}
if (direct3d != null) {
direct3d.Dispose();
direct3d = null;
}
}
}
}
My question therefore is - (how) can I reclaim the memory consumed by these unmanaged heaps after the textures have been disposed?
It seems the memory that's giving the issues is allocated by the nVidia driver. As far as I can tell, all the deallocation methods are properly called, so this might be a bug in the drivers. Looking around the internet shows some issues that seem related to this, though it's nothing serious enough to be worth referencing. I can't test this on an ATi card (I haven't seen one in like ten years :D).
So it seems like your options are:
Make sure your textures are big enough to never be allocated on the "shared" heaps. This allows the memory leak to proceed much slower - although it's still unreleased memory, it's not going to cause memory fragmentation anywhere near as serious as you're experiencing. You're talking about drawing tiles - this has historically been done with tilesets, which give you a lot better handling (though they also have drawbacks). In my tests, simply avoiding tiny textures all but eliminated the problem - it's hard to tell if it's just hidden or completely gone (both are quite possible).
Handle your processing in a separate process. Your main application would launch the other process whenever needed, and the memory will be properly reclaimed when the helper process exits. Of course, this only makes sense if you're writing some processing application - if you're making something that actually displays the textures, this isn't going to help (or at least it's going to be really tricky to setup).
Do not dispose of the textures. Managed texture pool handles paging the texture to and from the device for you, and it even allows you to use priorities etc., as well as flushing the whole on-device (managed) memory. This means that the textures will remain in your process memory, but it seems that you'll still get better memory usage than with your current approach :)
Possibly, the problems might be related to e.g. DirectX 9 contexts only. You might want to test with one of the newer interfaces, like DX10 or DXGI. This doesn't necessarily limit you to DX10+ GPUs - but you will lose support for Windows XP (which isn't supported anymore anyway).
I am using Visual Studio 2013 to create a WPF desktop application that has some report-generation functionality. I have about 30 reports, and the user can switch from one report to another. My problem is that each time I change ReportEmbeddedResource and then call the RefreshReport() method, memory usage increases, so if the user navigates through all 30 reports my app consumes about 130 MB! I know that I have to release the resources after each navigation; I googled this but didn't find an answer. Here is my code:
public MainWindow() // constructor
{
InitializeComponent();
this.reportViewer.ZoomMode = Microsoft.Reporting.WinForms.ZoomMode.PageWidth;
InitDataSources();
}
private void InitDataSources()
{
//manager data source
mangerDataSource = new ReportDataSource();
mangerDataSource.Name = "ManagerDataSet";
mangerDataSource.Value = uow.Members.GetAll().
ToList().Where((s) => s.MemberType == MemmberTypes.Manager);
reportViewer.LocalReport.DataSources.Add(mangerDataSource);
//adding 2 other data sources
}
public void RenderReport(string reportKey)
{
reportViewer.Reset();
string path = "Manager";
if (reportKey.Contains("tea")) path = "Teacher";
if (reportKey.Contains("stu")) path = "Student";
reportViewer.LocalReport.ReportEmbeddedResource = string.Format(
"Printers.Reports.{0}.{1}.rdlc", path,reportKey);
reportViewer.RefreshReport();
}
Is there a way to release the old report resource after Rendering a new report?
I don't have much experience with this, but it seems the best thing you can do is use safe handles to get your reports inside a manageable wrapper, then use the Dispose method and force the garbage collector to collect while suppressing the finalizer. Note that the memory usage you see in Task Manager is reserved memory, not memory actually in current use; it is possible that you release the report object and Task Manager continues to report high memory values for the executable.
reportViewer.Dispose();
GC.SuppressFinalize(reportViewer);
The whole disposing pattern can become quite confusing, so take your time and have a look here:
MSDN - Implementing a Dispose Method
MSDN - IDisposable.Dispose Method
I was having the same issue with .NET 4.5 and VS 2013.
I tried several things, but what finally made it work was compiling the project as x64 and calling LocalReport.ReleaseSandboxAppDomain().
I got part of the solution from here: Very High Memory Usage in .NET 4.0
The problem is solved by MarkJ_KY's comment at https://connect.microsoft.com/VisualStudio/feedback/details/527451/ms-report-viewer-memory-leak-any-update-fix-winforms-application
It might look a little complex, but it is not. The idea is to create an AppDomain, do your reporting work in that domain, and then unload the domain. When the domain is unloaded, all of its memory is released :-)
I have used that solution, and it solves the problem for me.
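A minimal sketch of that AppDomain approach (the type and member names here are illustrative, not taken from the linked comment; the runner must derive from MarshalByRefObject so it can be created in the second domain):

```csharp
// Runs inside the helper domain.
public class ReportRunner : MarshalByRefObject
{
    public byte[] Render(string reportResource)
    {
        var report = new Microsoft.Reporting.WinForms.LocalReport();
        report.ReportEmbeddedResource = reportResource;
        return report.Render("PDF"); // render to bytes so the result can cross domains
    }
}

public static byte[] RenderInSeparateDomain(string reportResource)
{
    AppDomain domain = AppDomain.CreateDomain("ReportDomain");
    try
    {
        var runner = (ReportRunner)domain.CreateInstanceAndUnwrap(
            typeof(ReportRunner).Assembly.FullName,
            typeof(ReportRunner).FullName);
        return runner.Render(reportResource);
    }
    finally
    {
        // Unloading the domain releases everything the report allocated in it.
        AppDomain.Unload(domain);
    }
}
```

Note this relies on classic .NET Framework AppDomains; it does not carry over to .NET Core/.NET 5+, where AppDomain.CreateDomain is unsupported.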