I am working on a .NET 3.5 application which uses SharpDX to render tiled 2D images.
Textures (Texture2D) are loaded into a cache on-demand, and are created in the managed pool.
Textures are disposed of when no longer required, and I have verified that Dispose() is called correctly. SharpDX object tracking indicates that there are no textures being finalized.
The issue is that large amounts of unmanaged heap memory used by the textures continue to be reserved after disposal. This memory is reused when loading a new texture, so it is not being leaked.
However, another part of the application also needs significant chunks of memory to process new images. Because these heaps are still present, even though the textures have been disposed, there is not enough contiguous address space left to load another image (which can be hundreds of MB).
If I allocate unmanaged memory using AllocHGlobal, the resulting heap memory completely vanishes again after calling FreeHGlobal.
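For example, the comparison I did looks roughly like this (a minimal sketch; the sizes and counts are arbitrary, and it needs System.Runtime.InteropServices):

var blocks = new List<IntPtr>();
for (int i = 0; i < 100; i++)
    blocks.Add(Marshal.AllocHGlobal(1024 * 1024)); // 1 MB each
Debugger.Break(); // VMMap: ~100 MB of private heap committed
foreach (IntPtr block in blocks)
    Marshal.FreeHGlobal(block);
Debugger.Break(); // VMMap: the heap memory is fully released again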
VMMap shows the unmanaged heap (red) after heavy usage of the application.
The unmanaged heap accounts for ~380 MB of reserved address space, even though only ~20 MB is actually committed at this point.
Long term, the application is being ported to 64-bit. However, this is not trivial due to unmanaged dependencies. Also, not all users are on 64-bit machines.
EDIT: I've put together a demonstration of the issue - create a WinForms application and install SharpDX 2.6.3 via NuGet.
Form1.cs:
using System.Collections.Generic;
using System.Diagnostics;
using System.Windows.Forms;
using SharpDX.Direct3D9;

namespace SharpDXRepro {
    public partial class Form1 : Form {
        private readonly SharpDXRenderer renderer;
        private readonly List<Texture> textures = new List<Texture>();

        public Form1() {
            InitializeComponent();
            renderer = new SharpDXRenderer(this);
            Debugger.Break(); // Check VMMap here
            LoadTextures();
            Debugger.Break(); // Check VMMap here
            DisposeAllTextures();
            Debugger.Break(); // Check VMMap here
            renderer.Dispose();
            Debugger.Break(); // Check VMMap here
        }

        private void LoadTextures() {
            for (int i = 0; i < 1000; i++) {
                textures.Add(renderer.LoadTextureFromFile(@"D:\Image256x256.jpg"));
            }
        }

        private void DisposeAllTextures() {
            foreach (var texture in textures.ToArray()) {
                texture.Dispose();
                textures.Remove(texture);
            }
        }
    }
}
SharpDXRenderer.cs:
using System;
using System.IO;
using System.Linq;
using System.Windows.Forms;
using SharpDX.Direct3D9;

namespace SharpDXRepro {
    public class SharpDXRenderer : IDisposable {
        private readonly Control parentControl;
        private Direct3D direct3d;
        private Device device;
        private DeviceType deviceType = DeviceType.Hardware;
        private PresentParameters presentParameters;
        private CreateFlags createFlags = CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded;

        public SharpDXRenderer(Control parentControl) {
            this.parentControl = parentControl;
            InitialiseDevice();
        }

        public void InitialiseDevice() {
            direct3d = new Direct3D();
            AdapterInformation defaultAdapter = direct3d.Adapters.First();
            presentParameters = new PresentParameters {
                Windowed = true,
                EnableAutoDepthStencil = true,
                AutoDepthStencilFormat = Format.D16,
                SwapEffect = SwapEffect.Discard,
                PresentationInterval = PresentInterval.One,
                BackBufferWidth = parentControl.ClientSize.Width,
                BackBufferHeight = parentControl.ClientSize.Height,
                BackBufferCount = 1,
                BackBufferFormat = defaultAdapter.CurrentDisplayMode.Format,
            };
            device = new Device(direct3d, direct3d.Adapters[0].Adapter, deviceType,
                parentControl.Handle, createFlags, presentParameters);
        }

        public Texture LoadTextureFromFile(string filename) {
            using (var stream = new FileStream(filename, FileMode.Open, FileAccess.Read)) {
                return Texture.FromStream(device, stream, 0, 0, 1, Usage.None, Format.Unknown,
                    Pool.Managed, Filter.Point, Filter.None, 0);
            }
        }

        public void Dispose() {
            if (device != null) {
                device.Dispose();
                device = null;
            }
            if (direct3d != null) {
                direct3d.Dispose();
                direct3d = null;
            }
        }
    }
}
My question therefore is - (how) can I reclaim the memory consumed by these unmanaged heaps after the textures have been disposed?
It seems the memory that's causing the trouble is allocated by the NVIDIA driver. As far as I can tell, all the deallocation methods are called properly, so this might be a bug in the drivers. Searching the internet turns up some issues that look related, though nothing concrete enough to be worth referencing. I can't test this on an ATI card (I haven't seen one in like ten years :D).
So it seems like your options are:
Make sure your textures are big enough to never be allocated on the "shared" heaps. This makes the memory leak proceed much more slowly - the memory is still unreleased, but it won't cause memory fragmentation anywhere near as serious as what you're experiencing. You're talking about drawing tiles - this has historically been done with tilesets, which give you much better handling (though they also have drawbacks). In my tests, simply avoiding tiny textures all but eliminated the problem - it's hard to tell whether it's just hidden or completely gone (both are quite possible).
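For illustration, packing tiles into one atlas with the SharpDX API you're already using could look roughly like this (a sketch only - the tile size, the atlas layout, and GDI+ bitmaps as the pixel source are all assumptions, not taken from your code):

public Texture CreateAtlas(Device device, IList<System.Drawing.Bitmap> tiles)
{
    const int tileSize = 256, tilesPerRow = 8; // 8x8 tiles -> one 2048x2048 atlas
    var atlas = new Texture(device, tileSize * tilesPerRow, tileSize * tilesPerRow,
        1, Usage.None, Format.A8R8G8B8, Pool.Managed);
    DataRectangle rect = atlas.LockRectangle(0, LockFlags.None);
    try
    {
        for (int i = 0; i < tiles.Count && i < tilesPerRow * tilesPerRow; i++)
        {
            var bits = tiles[i].LockBits(
                new System.Drawing.Rectangle(0, 0, tileSize, tileSize),
                System.Drawing.Imaging.ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);
            try
            {
                int destX = (i % tilesPerRow) * tileSize;
                int destY = (i / tilesPerRow) * tileSize;
                // Copy row by row, since the bitmap stride and the texture
                // pitch generally differ.
                for (int y = 0; y < tileSize; y++)
                {
                    var src = new IntPtr(bits.Scan0.ToInt64() + (long)y * bits.Stride);
                    var dst = new IntPtr(rect.DataPointer.ToInt64()
                        + (long)(destY + y) * rect.Pitch + destX * 4);
                    SharpDX.Utilities.CopyMemory(dst, src, tileSize * 4);
                }
            }
            finally { tiles[i].UnlockBits(bits); }
        }
    }
    finally { atlas.UnlockRectangle(0); }
    return atlas; // draw each tile by addressing its sub-rectangle (UVs) in the atlas
}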
Handle your processing in a separate process. Your main application would launch the helper process whenever needed, and the memory is properly reclaimed when that process exits. Of course, this only makes sense if you're writing some processing application - if you're making something that actually displays the textures, this isn't going to help (or at least it's going to be really tricky to set up).
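A minimal sketch of the helper-process idea ("ImageProcessor.exe" and its arguments are placeholders):

// Run the memory-hungry processing in a short-lived helper process.
using (var helper = System.Diagnostics.Process.Start(
    "ImageProcessor.exe", "\"D:\\input.jpg\" \"D:\\output.dat\""))
{
    helper.WaitForExit();
    if (helper.ExitCode != 0)
        throw new InvalidOperationException("Image processing failed.");
}
// Everything the helper allocated (including driver heaps in its own
// address space) is returned to the OS when it exits.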
Do not dispose of the textures. The managed texture pool pages textures to and from the device for you, it lets you assign priorities etc., and it can flush the whole on-device (managed) copy in one call. This means the textures will remain in your process memory, but it seems you'll still get better memory usage than with your current approach :)
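For example (a sketch - I'm assuming SharpDX wraps the D3D9 SetPriority/EvictManagedResources pair as shown, and the priority value is arbitrary):

texture.Priority = 0;           // hint: page this texture out of video memory first
// ... later, when you need to free device memory in one go:
device.EvictManagedResources(); // flushes all Pool.Managed copies from the device
                                // (the system-memory copies stay in the process)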
The problem might also be specific to DirectX 9 contexts. You might want to test with one of the newer interfaces, like DX10 or DXGI. This doesn't necessarily limit you to DX10+ GPUs, but you will lose support for Windows XP (which isn't supported anymore anyway).
Related
We are currently reworking a WindowsForms application in WPF.
The software is quite large and it will take years to finish it, so we have a hybrid system that displays pure WPF for the new panels and Hosted WindowsForms elements in WindowsFormsHosts controls.
We use Syncfusion's WPF Docking Manager to show these pages in tabs (not sure that this info is relevant).
We have spent quite a lot of time tracking down memory leaks (using JetBrains dotMemory), but we still run out of memory after opening and closing nearly 100 pages containing WindowsFormsHosts.
This memory leak is quite strange: as you can see in the memory profiling, the problem seems to lie in the unmanaged memory.
[dotMemory profiling screenshot]
The WindowsFormsHosts seem to be correctly disposed as well as the Child content.
As suggested here WPF WindowsFormsHost memory leak, we wrap the WindowsFormsHosts in a grid that we clear when we want to dispose it:
public override void Dispose()
{
    if (this.Content is Grid grid && grid.Children.Count == 1)
    {
        if (grid.Children[0] is KWFHost wfh)
        {
            wfh.Child.SizeChanged -= ControlSizeChanged;
            wfh.Dispose();
        }
        grid.Children.Clear();
    }
    base.Dispose();
}
and
public class KWFHost : WindowsFormsHost
{
    protected override void Dispose(bool disposing)
    {
        if (this.Child is IDisposable disposable)
        {
            disposable.Dispose();
        }
        this.Child = null;
        base.Dispose(true);
    }
}
We suspect that the hosting causes the leak, because in dotMemory's memory-allocation view we can see this:
[memory allocation screenshot]
Is there any known issue with the WindowsFormsHosts that could explain this? Or a way for us to isolate the source of the problem?
Edit: Here is the code that adds the Grid and WindowsFormsHost:
public void SetContent(System.Windows.Forms.Control control)
{
    var host = new KWFHost();
    host.Child = control;
    control.SizeChanged += ControlSizeChanged;
    var grid = new Grid();
    grid.Children.Add(host);
    this.Content = grid;
}
I finally figured it out:
After a bit more digging, I found out that the WindowsFormsHosts are kept alive.
dotMemory gives me two reasons for that.
The first one is System.Windows.Interop.HwndSourceKeyboardInputSite. For a reason I can't explain, there is still a reference to the WindowsFormsHost in the ChildKeyboardInputSinks collection of the HwndSource. If someone knows why, I'd be interested in hearing the reason.
I don't know if there is a better way to get rid of this reference, but here's the code I wrote:
private void CleanHwnd(KWFHost wfh) // my implementation of WindowsFormsHost
{
    HwndSource hwndSource = PresentationSource.FromVisual(wfh) as HwndSource;

    // Here I isolate the right element to remove, based on the information I have.
    IKeyboardInputSink elementToRemove = hwndSource.ChildKeyboardInputSinks.FirstOrDefault(
        c => c is KWFHost h && h.Parent is Grid g && g.Parent is KWindowsFormsHost fh && Equals(fh.TabName, TabName));

    // The method CriticalUnregisterKeyboardInputSink takes care of removing the
    // reference, but as it's protected I use reflection to get it.
    var mi = typeof(HwndSource).GetMethod("CriticalUnregisterKeyboardInputSink",
        BindingFlags.NonPublic | BindingFlags.Instance);

    // Here I remove the reference.
    mi.Invoke(hwndSource, new object[] { elementToRemove.KeyboardInputSite });
}
I execute this method just before disposing the WindowsFormsHost.
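With the workaround in place, the Dispose method from earlier ends up roughly like this (a sketch):

public override void Dispose()
{
    if (this.Content is Grid grid && grid.Children.Count == 1)
    {
        if (grid.Children[0] is KWFHost wfh)
        {
            CleanHwnd(wfh); // drop the ChildKeyboardInputSinks reference first
            wfh.Child.SizeChanged -= ControlSizeChanged;
            wfh.Dispose();
        }
        grid.Children.Clear();
    }
    base.Dispose();
}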
The second reason was RootSourceRelationManager.
This one gave me a headache, so I did a "Full" dotMemory profiling (before, it was a sampled one).
The result was strange: the memory leak disappeared.
The full profiling has to be executed from dotMemory rather than from Visual Studio; that was the only difference.
After some research I found out that Microsoft.VisualStudio.DesignTools.WpfTap was involved.
So it seems (I can't be sure) that executing from Visual Studio causes a memory leak - not that serious, but a good thing to know.
In the end, after releasing a new test version of the software with the clean method, I no longer have a memory leak.
I'm running into an issue when using my V8Engine instance. It appears to have a small memory leak, and disposing of it, as well as forcing garbage collection, doesn't seem to help much. It will eventually throw an AccessViolationException on V8Engine local_m_engine = new V8Engine(), claiming "Fatal error in heap setup, Allocation failed - process out of memory" and "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
Monitoring the program's memory usage through Task Manager while it runs confirms that it is leaking memory, around 1000 KB every couple of seconds I think. I suspect it is the variables declared within the executed script not being collected, or something to do with the GlobalObject.SetProperty method. Calling V8Engine.ForceV8GarbageCollection(), V8Engine.Dispose(), and even GC.WaitForPendingFinalizers() & GC.Collect() doesn't prevent the memory being leaked. (It is worth noting that it seems to leak more slowly with these calls in place, and I know I shouldn't force GC, but it was there as a last resort to see if it would fix the issue.)
A tangential issue that could also provide a solution is the inability to clear the execution context of a V8Engine. I have to dispose of and re-instantiate the engine for each script, which I believe is where the memory leak is happening; otherwise I run into exceptions from V8Engine.Execute() saying the variables have already been declared.
I can definitely confirm that the memory leak is something to do with the V8Engine Implementation, as running the older version of this program that uses Microsoft.JScript has no such memory leak, and the memory used remains consistent.
The affected code is as follows;
// Create the V8Engine and dispose of it when done
using (V8Engine local_m_engine = new V8Engine())
{
    // Set the Lookup instance as a global object so that the JS code
    // in the V8.Net wrapper can access it
    local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);

    // Execute the script
    result = local_m_engine.Execute(script);

    // Please just clear everything, I can't cope.
    local_m_engine.ForceV8GarbageCollection();
    local_m_engine.GlobalObject.Dispose();
}
EDIT:
Not sure how useful this will be, but I've been running some memory profiling tools on it and have learnt that after running an isolated version of the original code, my software ends up with a large number of IndexedObjectList instances full of null values (see here: http://imgur.com/a/bll5K). There appears to be one instance of each class for each V8Engine instance that is made, but they aren't being disposed or freed. I can't help but feel I'm missing a command or something here.
The code I'm using to test and recreate the memory leak that the above implementation causes is as follows:
using System;
using V8.Net;

namespace V8DotNetMemoryTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string script = @"var math1 = 5;
                              var math2 = 10;
                              result = 5 + 10;";
            Handle result;
            int i = 0;
            V8Engine local_m_engine;

            while (true)
            {
                // Create the V8Engine and dispose of it when done
                local_m_engine = new V8Engine();

                // Set the Lookup instance as a global object so that the
                // JS code in the V8.Net wrapper can access it
                //local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);

                // Execute the script
                result = local_m_engine.Execute(script);
                Console.WriteLine(i++);
                result.ReleaseManagedObject();
                result.Dispose();
                local_m_engine.Dispose();
                GC.WaitForPendingFinalizers();
                GC.Collect();
                local_m_engine = null;
            }
        }
    }
}
Sorry, I had no idea this question existed. Make sure to use the v8.net tag.
Your problem is this line:
result = local_m_engine.Execute(script);
The result returned is never disposed. ;) You are responsible for returned handles. Those handles are struct values, not class objects.
You could also do using (result = local_m_engine.Execute(script)) { ... }
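Applied to the loop in your test program (same script variable as before), that looks roughly like this (a sketch):

int i = 0;
while (true)
{
    using (var engine = new V8Engine())
    {
        using (var result = engine.Execute(script))
        {
            Console.WriteLine(i++);
        } // the result handle (a struct) is released here, before the engine
    }     // the engine is disposed here
}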
There is a new version released. I am finally resurrecting this project again as I will need it for the FlowScript VPL project - and it now supports .Net Standard as well for cross-platform support!
We have an issue with some of our ASP.NET applications: some of our apps claim a large amount of memory as their working set right from the start.
Our two web-farm servers (4 GB of RAM each) run multiple applications. We have a stable environment with about 1.2 GB of memory free.
Then we add an MVC5 + Web API v2 + Entity Framework app that instantly claims over 1 GB as working-set memory, while only actually using about 300 MB. This causes other applications to complain that there is not enough memory left.
We have already tried setting the limit for virtual memory as well as the limit for private memory, to no avail. If we set these to about 500 MB, the app still uses more or less the same amount of memory (way over 500 MB) and does not seem to respect the limits put in place.
For reference, I tested this with an empty MVC5 project (VS2013 template), and it already claims 300 MB of memory while only using about 10 MB.
Setting the app as a 32-bit app seems to have some impact in reducing the size of the working set.
Is there any way to reduce the size of the working set, or to enforce a hard limit on the size of it?
Edit:
In the case of the huge memory use for the project using Web Api v2 and Entity Framework, my API controllers look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }
    }
}
as they look in most tutorials I could find (including the ones from Microsoft). Wrapping the context in a using block here does not work because of LINQ's deferred execution: the query would only run after the context had been disposed. It could work if I added a ToList() everywhere (not tested), but does this have any other impact?
edit2:
It works if I do
namespace Foo.Api
{
    public class BarController : ApiController
    {
        public List<Bar> GetBar(string bla)
        {
            using (FooContext db = new FooContext())
            {
                return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year).ToList();
            }
        }
    }
}
Does the ToList() have any implications for the performance of the API? (I know I can no longer keep querying cheaply as with an IQueryable.)
Edit3:
I notice that it's the private working set of the app that is quite high. Is there a way to limit this (without causing constant recycles)?
Edit4:
As far as I know, I have a Dispose on each and every ApiController. My front end is just some simple MVC controllers, but for the large part .cshtml and JavaScript (Angular) files.
We have another app, just regular MVC, with two models and some simple views (no database or other external resources that could be leaked), and this also consumes up to 400-500 MB of memory. If I profile it, I can't see anything that indicates memory leaks; I do see that only 10 or 20 MB is actually used. The rest is unmanaged memory that is unassigned but part of the private working set, so it is claimed by this app and unusable by any other.
I had a similar problem with some of my applications. I was able to solve it by properly disposing of the database resources, wrapping them in using blocks.
For Entity Framework, that means ensuring you always close your context after each request; connections should be disposed between requests.
using (var db = new MyEFContext())
{
    // Build the query here.
    var query = from u in db.User
                where u.UserId == 1234
                select u.Name;

    // Execute the query before the context goes away.
    return query.ToList();

    // The closing bracket disposes the context properly.
}
You may need to wrap the context in a service that request-caches it, so the context stays alive throughout the request and is disposed of when the request completes.
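A sketch of what that request-caching could look like using HttpContext.Current.Items (the class and key names are hypothetical; adapt this to your DI container if you use one):

// Sketch: one FooContext per HTTP request, cached in HttpContext.Items
// and disposed of at the end of the request (requires System.Web).
public static class RequestDb
{
    private const string Key = "__FooContext";

    public static FooContext Current
    {
        get
        {
            var ctx = (FooContext)HttpContext.Current.Items[Key];
            if (ctx == null)
            {
                ctx = new FooContext();
                HttpContext.Current.Items[Key] = ctx;
            }
            return ctx;
        }
    }

    // Call this from Application_EndRequest in Global.asax.
    public static void DisposeCurrent()
    {
        var ctx = (FooContext)HttpContext.Current.Items[Key];
        if (ctx != null)
        {
            ctx.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}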
Or, if using the pattern of having a single context for the entire controller like in the MSDN examples, make sure you override the Dispose(bool) method, like the example here.
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        db.Dispose();
    }
    base.Dispose(disposing);
}
So your controller (from above) should look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }

        // Web API 2 will call this automatically after each request.
        // You need this to ensure your context is disposed and the memory
        // it is using is freed when your app does garbage collection.
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                db.Dispose();
            }
            base.Dispose(disposing);
        }
    }
}
The behavior I saw was that the application would consume a lot of memory, but it could garbage collect enough memory to keep it from ever getting an OutOfMemoryException. This made it difficult to find the problem, but disposing the database resources solved it. One of the applications used to hover at around 600 MB of RAM usage, and now it hovers around 75 MB.
But this advice doesn't just apply to database connections: any class that implements IDisposable should be suspect if you are running into memory leaks. Since you mentioned you are using Entity Framework, it is the most likely suspect.
Removing all Telerik Kendo MVC references (DLLs and such) fixed our problems. If we run the application without them, all our memory problems are gone and we see normal memory use.
Basically: it was an external library causing high memory use.
How do I load many large photos from a directory and its sub-directories in such a way as to prevent an OutOfMemoryException?
I have been using:
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
    this.Controls.Add(pic);
}
which has worked until now. The photos I need to work with this time are anywhere between 15 and 40 MB each, and there could be hundreds of them.
You're attacking the garbage collector with this approach. Loading 15-40 MB objects in a loop will always invite an OutOfMemoryException, because the image data goes straight onto the large object heap - all objects over 85 KB do. Large objects become Gen 2 objects immediately, and the LOH is not compacted automatically: as of .NET 4.5.1 you can request compaction, and in earlier versions it is never compacted at all.
Therefore, even if you get away with loading the objects initially and the app keeps running, there is every chance that these objects, even once completely dereferenced, will hang around and fragment the large object heap. Once fragmentation occurs - say the user closes the control, does something else for a minute or two, and then opens the control again - it is much more likely that the new objects will not be able to slot into the LOH, because each allocation needs a contiguous block. The GC collects Gen 2 and the LOH much less often than the other generations for performance reasons - it uses memcpy in the background, which is expensive on large blocks of memory.
Also, the memory consumed will not be released while all of these images are referenced from a control that is still in use - imagine tabs. The whole idea of doing this is misconceived: use thumbnails, or load full-scale images only as the user needs them, and be careful with the memory consumed.
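For the thumbnail route, something like this keeps the per-image footprint small (a sketch; the thumbnail size is arbitrary, and note that GetThumbnailImage may return a low-quality embedded JPEG thumbnail when one exists):

foreach (string file in files)
{
    Image thumb;
    using (Image full = Image.FromFile(file))
    {
        // GetThumbnailImage returns an independent small image, so the
        // full-size bitmap can be disposed of immediately afterwards.
        thumb = full.GetThumbnailImage(120, 90, () => false, IntPtr.Zero);
    }
    this.Controls.Add(new PictureBox() { Image = thumb, Width = 120, Height = 90 });
}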
UPDATE
Rather than telling you what you should and should not do, I have decided to try to help you do it :)
I wrote a small program that operates on a directory containing 440 JPEG files with a total size of 335 MB. When I first ran your code I got the OutOfMemoryException and the form became unresponsive.
Step 1
The first thing to note is that if you are compiling as x86 or AnyCPU, you need to change this to x64. Right-click the project, go to the Build tab, and set the platform target to x64.
This is because the amount of memory that a 32-bit x86 process can address is limited. All .NET processes run within a virtual address space, and the CLR heap size will be whatever the OS allows the process - it is not really within the developer's control. A 64-bit process, however, can allocate as much memory as is available - I am running on 64-bit Windows 8.1, so changing the target platform gives me an almost unlimited amount of address space, right up to the limit of what the OS will grant the process.
After doing this, running your code did not cause an OutOfMemoryException.
Step 2
I changed the target framework from the default 4.5 to 4.5.1 in VS 2013, so I could use GCSettings.LargeObjectHeapCompactionMode, which is only available in 4.5.1. I noticed that closing the form took an age because the GC was doing a crazy amount of work releasing memory. Basically, I set this at the end of the LoadAllPics code, as it allows the large object heap to be compacted on the next blocking garbage collection instead of staying fragmented. This will be essential for your app, I believe, so if possible try to use this version of the framework. You should test on earlier versions too, to see the difference when interacting with your app.
Step 3
As the app was still unresponsive, I made the code run asynchronously.
Step 4
As the code now runs on a separate thread from the UI thread, accessing the form caused a cross-thread exception, so I had to use Invoke, which posts the work back to the UI thread. This is because UI controls can only be accessed from the thread that created them.
Code
private async void button1_Click(object sender, EventArgs e)
{
    await LoadAllPics();
}

private async Task LoadAllPics()
{
    IEnumerable<string> files = Directory.EnumerateFiles(@"C:\Dropbox\Photos", "*.JPG",
        SearchOption.AllDirectories);

    await Task.Run(() =>
    {
        foreach (string file in files)
        {
            Invoke((MethodInvoker)(() =>
            {
                PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
                this.Controls.Add(pic);
            }));
        }
    });

    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
}
You can try resizing the image as you put it on the UI.
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file).resizeImage(new Size(50, 50)) };
    this.Controls.Add(pic);
}

public static Image resizeImage(this Image imgToResize, Size size)
{
    return (Image)(new Bitmap(imgToResize, size));
}
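Note that Image.FromFile(file) in the loop above is itself never disposed, so every full-size bitmap still hangs around until the GC gets to it. A variation that releases each full-size image as soon as the small copy exists (a sketch; new Bitmap(Image, Size) copies the pixels immediately, so disposing the source afterwards is safe):

foreach (string file in files)
{
    Image small;
    using (Image full = Image.FromFile(file))
    {
        small = full.resizeImage(new Size(50, 50));
    }
    this.Controls.Add(new PictureBox() { Image = small });
}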
I have a Windows Phone 8 app, which uses a background agent to:
Get data from the internet;
Generate an image based on a User Control that uses the data from step 1 as its data source.
In the User Control I have a Grid and a StackPanel plus some Text and Image controls;
Some of the Images use local resources from the installation folder (/Assets/images/...);
One of them, which I use as a background, is selected by the user from the phone's photo library, so I have to set its source in C# code-behind.
However, when it runs in the background, it gets an OutOfMemoryException. Some troubleshooting so far:
When I run the process in the foreground, everything works fine;
If I comment out the update process and create the image directly, it also works fine;
If I don't set the background image, it also works fine;
The OutOfMemoryException is thrown at var bmp = new WriteableBitmap(480, 800);
I have already shrunk the image from 1280x768 to 800x480, which I think is the minimum for a full-screen background image, isn't it?
After some research, I found out this problem occurs because it exceeds the 11 MB limit for a Periodic Task.
I tried using DeviceStatus.ApplicationCurrentMemoryUsage to track the memory usage:
-- the limit is 11,534,336 bytes;
-- when the background agent started, even without any task in it, the memory usage was 4,648,960;
-- while getting the update from the internet, it grew to 5,079,040;
-- when finished, it dropped back to 4,648,960;
-- when the invoke started (to generate the image from the User Control), it grew to 8,499,200.
Well, I guess that's the problem: there is too little memory left for it to render the image via WriteableBitmap.
Any idea how to work out this problem?
Is there a better method to generate an image from a User Control, or anything else?
Actually the original image might only be around 100 KB; however, when rendered through WriteableBitmap, the required memory can grow to 1-2 MB.
Or can I release the memory from somewhere?
==============================================================
BTW, this Code Project article says I can use only 11 MB of memory in a Periodic Task;
however, this MSDN article says I can use up to 20 MB, or 25 MB with Windows Phone 8 Update 3.
Which is correct? And why am I in the first situation?
==============================================================
Edit:
Speaking of the debugger, the MSDN article also states:
"When running under the debugger, memory and timeout restrictions are suspended."
So why would I still hit the limit?
==============================================================
Edit:
Well, I found some things that seem to be helpful; I will check on them for now, but suggestions are still welcome.
http://writeablebitmapex.codeplex.com/
http://suchan.cz/2012/07/pro-live-tiles-for-windows-phone/
http://notebookheavy.com/2011/12/06/microsoft-style-dynamic-tiles-for-windows-phone-mango/
==============================================================
The code to generate the image:
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
    var customBG = new ImageUserControl();
    customBG.Measure(new Size(480, 800));

    var bmp = new WriteableBitmap(480, 800); // throws the OutOfMemoryException
    bmp.Render(customBG, null);
    bmp.Invalidate();

    using (var isf = IsolatedStorageFile.GetUserStoreForApplication())
    {
        filename = "/Shared/NewBackGround.jpg";
        using (var stream = isf.OpenFile(filename, System.IO.FileMode.OpenOrCreate))
        {
            bmp.SaveJpeg(stream, 480, 800, 0, 100);
        }
    }
});
The XAML code for the ImageUserControl:
<UserControl blabla... d:DesignHeight="800" d:DesignWidth="480">
    <Grid x:Name="LayoutRoot">
        <Image x:Name="nBackgroundSource" Stretch="UniformToFill"/>
        <!-- blabla... -->
    </Grid>
</UserControl>
The C# code behind the ImageUserControl:
public ImageUserControl()
{
    InitializeComponent();
    LupdateUI();
}

public void LupdateUI()
{
    DataInfo _dataInfo = new DataInfo();
    LayoutRoot.DataContext = _dataInfo;
    try
    {
        using (var isoStore = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (var isoFileStream = isoStore.OpenFile("/Shared/BackgroundImage.jpg", FileMode.Open, FileAccess.Read))
            {
                BitmapImage bi = new BitmapImage();
                bi.SetSource(isoFileStream);
                nBackgroundSource.Source = bi;
            }
        }
    }
    catch (Exception) { }
}
DataInfo is another class, in the settings page, that holds data fetched from the internet:
public class DataInfo
{
    public string Wind1 { get { return GetValueOrDefault<string>("Wind1", "N/A"); } set { if (AddOrUpdateValue("Wind1", value)) { Save(); } } }
    public string Wind2 { get { return GetValueOrDefault<string>("Wind2", "N/A"); } set { if (AddOrUpdateValue("Wind2", value)) { Save(); } } }
    //blabla...
}
"If I comment out the update process and create the image directly, it also works fine" - I think you should focus on that part. It seems to indicate that some memory isn't freed after the update. Make sure all the references used during the update process go out of scope before rendering the picture. Forcing a garbage collection can help too:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // Frees the memory that was used by the finalizers
Another thing to consider is that the debugger itself uses a lot of memory. Do a real test by compiling your project in "Release" mode and deploying to a phone, to make sure you're really running out of memory.
Still, I have been in that situation before, so I know it may not be enough. The point is: some libraries in the .NET Framework are loaded lazily. For instance, if your update process involves downloading data, the background agent will load the network libraries. Those libraries can't be unloaded and will waste some of your agent's memory. That's why, even after freeing all the memory you used during the update process, you won't get back to the amount of free memory you had when the background agent started. Seeing this, what I did in one of my apps was to span the workload of the background agent across two executions. Basically, when the agent executes:
Check in isolated storage whether there is pending data to be processed. If not, just execute the update process and store all the needed data in isolated storage.
If there is pending data (that is, on the next execution), generate the picture and clear the data.
It means the picture is generated only once every hour instead of once every 30 minutes, so use this workaround only if everything else fails.
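In outline, the agent body looked something like this (a sketch; the "PendingData" key and the two helper methods are placeholders):

// Inside your ScheduledTaskAgent subclass.
protected override void OnInvoke(ScheduledTask task)
{
    var settings = IsolatedStorageSettings.ApplicationSettings;
    if (!settings.Contains("PendingData"))
    {
        // Phase 1: download only; this loads the network libraries.
        settings["PendingData"] = DownloadUpdateAsString();
        settings.Save();
    }
    else
    {
        // Phase 2 (next execution): render only; the network libraries
        // were never loaded in this process, so more memory is free.
        RenderBackgroundImage((string)settings["PendingData"]);
        settings.Remove("PendingData");
        settings.Save();
    }
    NotifyComplete();
}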
The larger memory limit is for background audio agents; the documentation clearly states that. You're stuck with 11 MB, which can really be a big pain when you're trying to do something smart with pictures in the background.
A 480x800 bitmap takes 4 bytes per pixel, so it costs about 1.5 MB of memory (480 x 800 x 4 = 1,536,000 bytes). Compressed as JPEG, yes - it makes sense that the file is only around 100 KB, but whenever you use WriteableBitmap, the uncompressed pixels are loaded into memory.
One of the things you could try, before forcing GC.Collect as mentioned in the other answer, is to null things out even before they go out of scope - whether it's a BitmapImage or a WriteableBitmap. Other than that, you can try removing the Image object from the Grid programmatically when you're done and setting its Source to null as well, as in the sketch below.
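Using the names from the question, that could look like this (a sketch; ReleaseBackground is a hypothetical helper you would add to ImageUserControl):

// Hypothetical helper inside ImageUserControl:
public void ReleaseBackground()
{
    nBackgroundSource.Source = null;               // drop the BitmapImage
    LayoutRoot.Children.Remove(nBackgroundSource); // detach the Image control
}

// In the generation code, right after bmp.SaveJpeg(...):
customBG.ReleaseBackground();
customBG = null; // drop the user control
bmp = null;      // drop the WriteableBitmap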
Are there any other WriteableBitmap, BitmapImage or Image objects you're not showing us?
Also, try without the debugger. I've read somewhere that it adds another 1-2 MB, which is a lot when you only have 11 MB. Although, if it crashes so quickly with the debugger, I wouldn't rely on it even if it suddenly seems OK without the debugger; but just for testing purposes you can give it a shot.
Do you need to use the ImageUserControl? Can you try creating the Image and all the other objects step by step in code, without the XAML, so you can measure memory at each step and see at which point it goes through the roof?