OutOfMemoryException when loading lots of images from isolated storage - C#

EDIT: I keep getting "OutOfMemoryException was unhandled".
I think the problem is in how I am saving the image to isolated storage, and that this is where I can solve it: how do I reduce the size of the image before I save it? (I've added the code where I save the image.)
I am opening images from isolated storage, sometimes over 100 of them, and I want to loop over those images, but I get an OutOfMemoryException once around 100 to 150 images have been loaded into a storyboard. I have already reduced the resolution of the images. How can I handle this exception and stop my app from crashing?
I get the exception on this line:
image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read));//images from isolated storage
Here's my code:
private void OnLoaded(object sender, RoutedEventArgs e)
{
    IsolatedStorageFile isStoreTwo = IsolatedStorageFile.GetUserStoreForApplication();
    try
    {
        storyboard = new Storyboard
        {
            //RepeatBehavior = RepeatBehavior.Forever
        };
        var animation = new ObjectAnimationUsingKeyFrames();
        Storyboard.SetTarget(animation, projectImage);
        Storyboard.SetTargetProperty(animation, new PropertyPath("Source"));
        storyboard.Children.Add(animation);
        for (int i = 1; i <= savedCounter; i++)
        {
            BitmapImage image = new BitmapImage();
            image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read)); //images from isolated storage
            var keyframe = new DiscreteObjectKeyFrame
            {
                KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(100 * i)),
                Value = image
            };
            animation.KeyFrames.Add(keyframe);
        }
    }
    catch (OutOfMemoryException exc)
    {
        //throw;
    }
    Resources.Add("ProjectStoryBoard", storyboard);
    storyboard.Begin();
}
EDIT: This is how I am saving the image to isolated storage. I think this is where I can solve my problem: how do I reduce the size of the image when saving it to isolated storage?
void cam_CaptureImageAvailable(object sender, Microsoft.Devices.ContentReadyEventArgs e)
{
    string fileName = folderName + "\\MyImage" + savedCounter + ".jpg";
    try
    {
        // Save picture to the library camera roll.
        //library.SavePictureToCameraRoll(fileName, e.ImageStream);

        // Set the position of the stream back to start.
        e.ImageStream.Seek(0, SeekOrigin.Begin);

        // Save picture as JPEG to isolated storage.
        using (IsolatedStorageFile isStore = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (IsolatedStorageFileStream targetStream = isStore.OpenFile(fileName, FileMode.Create, FileAccess.Write))
            {
                // Initialize the buffer for 4KB disk pages.
                byte[] readBuffer = new byte[4096];
                int bytesRead = -1;

                // Copy the image to isolated storage.
                while ((bytesRead = e.ImageStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
                {
                    targetStream.Write(readBuffer, 0, bytesRead);
                }
            }
        }
    }
    finally
    {
        // Close image stream.
        e.ImageStream.Close();
    }
}
I would appreciate it if you could help me. Thanks.

It doesn't matter how large your images are on disk, because when you load them into memory they are uncompressed. The memory required for an image is approximately stride * height, where stride is (width * bitsPerPixel) / 8 rounded up to the next multiple of 4 bytes. So an image that's 1024x768 at 24 bits per pixel will take up about 2.25 MB.
You should figure out how large your images are, uncompressed, and use that number to determine the memory requirements.
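As a quick illustration of that arithmetic (the helper and the sample values below are mine, purely for illustration):

// Rough estimate of the uncompressed in-memory size of a bitmap.
static long EstimateBitmapBytes(int widthPx, int heightPx, int bitsPerPixel)
{
    int bytesPerRow = (widthPx * bitsPerPixel) / 8;
    int stride = ((bytesPerRow + 3) / 4) * 4;   // round up to a multiple of 4 bytes
    return (long)stride * heightPx;
}

// Example: 1024 x 768 at 24 bpp is about 2.25 MB per frame,
// so 150 such frames is roughly 340 MB, far beyond a phone app's memory budget.
long oneFrame = EstimateBitmapBytes(1024, 768, 24);        // 2,359,296 bytes
long allFrames = 150 * EstimateBitmapBytes(1024, 768, 24);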

You are getting the OutOfMemoryException because you are storing all the images in memory at the same time in order to build your Storyboard. I don't think you will be able to get around the uncompressed bitmap size the images need in order to be displayed on screen.
So to get past this, think about your goal rather than trying to fix the error. If your goal is to show a new image in sequence every X ms, you have a few options.
1. Keep using Storyboards, but chain them using the Completed event (see the sketch after this list). That way you don't have to create them all at once and can just generate the next few. It might not be fast enough, though, if you're changing images every 100 ms.
2. Use CompositionTarget.Rendering as mentioned in my answer here. This would probably take the least amount of memory if you just preload the next image (as opposed to preloading all of them, as your current solution does). You'd need to check the elapsed time manually, though.
3. Rethink what you're doing. If you state what you are going after, people might have more alternatives.
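A minimal sketch of option 1, where CreateStoryboardForBatch is a hypothetical helper that builds keyframes for a slice of the saved images the same way the original loop does:

// Hypothetical illustration: play frames in batches, building the next
// batch only when the previous Storyboard finishes.
private const int BatchSize = 10;
private int nextFrame = 1;

private void PlayNextBatch()
{
    if (nextFrame > savedCounter) return;            // all frames shown

    Storyboard sb = CreateStoryboardForBatch(nextFrame, BatchSize); // hypothetical helper
    nextFrame += BatchSize;

    sb.Completed += (s, e) =>
    {
        sb.Stop();          // release this batch's keyframes/bitmaps
        PlayNextBatch();    // then queue up the next one
    };
    sb.Begin();
}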

To answer the edit at the top of your post, try ImageResizer. There's a NuGet package, and a HanselBlog episode on it. Obviously, this is ASP.NET based, but I'm sure you could adapt it to work in your scenario.
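If you'd rather resize on the device itself, one common Windows Phone approach (a sketch, not from this answer; PictureDecoder.DecodeJpeg and the SaveJpeg extension live in the Microsoft.Phone assembly, and the 640x480 / quality 70 values are arbitrary) is to decode the captured JPEG at a reduced size and re-encode it before writing to isolated storage:

// Sketch: shrink the captured photo before persisting it.
e.ImageStream.Seek(0, SeekOrigin.Begin);
WriteableBitmap wb = PictureDecoder.DecodeJpeg(e.ImageStream, 640, 480);

using (IsolatedStorageFile isStore = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream targetStream = isStore.OpenFile(fileName, FileMode.Create, FileAccess.Write))
{
    // Re-encode at the reduced resolution; smaller frames also decode to smaller bitmaps later.
    wb.SaveJpeg(targetStream, wb.PixelWidth, wb.PixelHeight, 0, 70);
}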

Tackling these kinds of problems at the design level usually works better.
Making the application aware of its running environment via some configuration makes it more robust. For example, you can define variables such as image size, image count and image quality based on the available memory, and set them at run time in your app. That way your application always works: fast on high-memory devices and slower on low-memory ones, but it never crashes. (Don't assume that working in a managed environment means you never have to think about the environment; design always matters.)
There are also well-known design patterns, like lazy loading, that you can benefit from.
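As a rough sketch of that idea (the thresholds and names below are made up for illustration), on Windows Phone you could query DeviceStatus at startup to pick your budget:

using Microsoft.Phone.Info;

// Hypothetical tuning knobs chosen from the device's memory budget.
static class PlaybackConfig
{
    public static int MaxFrames;
    public static int JpegQuality;

    public static void Configure()
    {
        long limit = DeviceStatus.ApplicationMemoryUsageLimit;   // bytes available to this app

        if (limit > 200 * 1024 * 1024)      // roughly a 256 MB-class device or better
        {
            MaxFrames = 150;
            JpegQuality = 85;
        }
        else
        {
            MaxFrames = 50;                 // be conservative on low-memory devices
            JpegQuality = 60;
        }
    }
}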

I don't know about Windows Phone in particular, but in .NET WinForms you need to use a separate thread for long-running tasks. Are you using a BackgroundWorker or equivalent? The finalizer thread can become blocked, which prevents the resources for the images from being released. Keeping the work off the UI thread allows the Dispose/finalization work to run.
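For reference, a minimal BackgroundWorker shape looks like this (LoadFrames is a hypothetical placeholder for the long-running work):

// Sketch: run the image-loading work off the UI thread.
var worker = new System.ComponentModel.BackgroundWorker();

worker.DoWork += (s, e) =>
{
    // Long-running work here (e.g. reading/decoding files); do not touch UI objects.
    e.Result = LoadFrames();   // hypothetical helper
};

worker.RunWorkerCompleted += (s, e) =>
{
    // Back on the UI thread: safe to update controls with e.Result here.
};

worker.RunWorkerAsync();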

OK, a 1024x768 image has an in-memory size of at least 3 MB (ARGB).
I don't know how ObjectAnimationUsingKeyFrames works internally. Maybe you can force the GC by destroying the BitmapImage (and keyframe) instances without losing their data in the animation.
(Not possible, see comments!)

Based on one of your comments, you are building a time-lapse app. Commercial time-lapse apps for WP7 compress the images to video, not stills, e.g. Time Lapse Pro.
The whole point of video playback is to reduce similar, time-related images to a highly compressed stream that does not require massive amounts of memory to play back.
If you can add the ability to encode to video in your app, you will avoid the problem of trying to emulate a video player (using hundreds of single full-resolution frames as a flick-book).
Processing the images into video server-side may be another option (but not as friendly as in-camera).

Related

How i can free my application memory in c#?

In my C# application's form load I load about 30 pictures, and this fills up my memory; if I add more pictures I get a message that my memory is full.
How can I free my memory or increase the maximum usable memory?
Is there another way to get all my pictures into a PictureBox without filling my memory?
1) How can I free my memory or increase the maximum usable memory?
Are you receiving an OutOfMemoryException while your machine's memory isn't actually full? That might be due to compiling for x86; try changing it to x64.
2) Is there another way to get all my pictures into a PictureBox without filling my memory?
Yes and no. If you are using Bitmap, the whole image is allocated in memory. If you later recreate the image with all the images together without compressing them, you could also be losing memory if you are not disposing the Bitmaps correctly. You could load each one, compress it, then dispose the original and keep only the compressed/optimized version in memory.
Also, remember to dispose a Bitmap once you are done with it. The GC will eventually finalize it, but you should dispose it explicitly:
using (Bitmap bitmap = new Bitmap("file.jpg"))
{
    // bitmap handling here
}
Of course, if you are displaying it in a PictureBox you can't dispose it while it is shown, but again, you are not being completely clear in your question.
Without your code I cannot provide you a better answer, and I'll gladly edit this one if you update your question.
Read:
Compress bitmap before sending over network
When do I need to use dispose() on graphics?
Update
While fixing up your question's formatting I noticed you had included an image, but the formatting was broken.
Read my answer and it should help you (especially the part about compression). You can also change to x64.
Update 2 - Compression code example
// Requires: System.Drawing, System.Drawing.Imaging, System.IO, System.Linq
public void ExampleMethod()
{
    pictureBox1.Image = GetCompressedFile("file.jpg", quality: 10);
}

private Image GetCompressedFile(string fileName, long quality)
{
    using (Bitmap bitmap = new Bitmap(fileName))
    {
        return GetCompressedBitmap(bitmap, quality);
    }
}

private Image GetCompressedBitmap(Bitmap bmp, long quality)
{
    // Note: GDI+ requires the stream to stay open for the lifetime of the returned Image,
    // so the MemoryStream is intentionally not wrapped in a using block here.
    var mss = new MemoryStream();
    EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
    ImageCodecInfo imageCodec = ImageCodecInfo.GetImageEncoders().FirstOrDefault(o => o.FormatID == ImageFormat.Jpeg.Guid);
    EncoderParameters parameters = new EncoderParameters(1);
    parameters.Param[0] = qualityParam;
    bmp.Save(mss, imageCodec, parameters);
    mss.Position = 0;
    return Image.FromStream(mss);
}
In the quality parameter you set the quality. 100 means 100%, 50 means 50%.
Try setting quality to 10 and see if it works.
Remove all the Bitmap bmp = new Bitmap(...) calls from your code, and display with PictureBoxInstance.Image = GetCompressedFile(...);
Check the ExampleMethod().
Code based from: https://stackoverflow.com/a/48274706/4352946
Bear in mind, though, that even when using Dispose the memory won't necessarily be freed right away; you have to wait for the GC for that.
P.S.: when compressing on the fly you MIGHT temporarily end up with both images (the compressed and the uncompressed) in memory.

Converting Image in c#

Edit: SOLVED! Please see my answer below for details.
I was unable to find an answer to the original question, but I found an alternate solution.
This question may have been asked somewhere else, but I have been searching for days and can't find anything that helps.
Question: I need to convert a Stream to an Image<Bgr, Byte> in one go. Is there a way to convert directly from the stream to Emgu.CV.Image<Bgr, Byte> without going from stream to Image to Bitmap to Image<Bgr, Byte>?
Information: I'm coding in C# in Visual Studio 2010 as part of my dissertation project.
I am taking an image stream from an IP camera on the network and applying several algorithms to detect faces, extract facial features and recognise an individual's face. With my laptop's local camera I achieve about 25 FPS (give or take) including the algorithms, because I don't have to convert the image. For the IP camera stream I need to convert it several times to get the desired format, and the result is around 5-8 FPS.
(I know my current method is extremely inefficient, which is why I'm here; I'm converting each image 5 times in total, including grayscaling, while only using about half of my processor/memory (i7, 8 GB RAM).) It does have to be Image<Bgr, Byte>, as that is the only format the algorithms will work with.
The code I'm using to get the image:
//headers
using System.IO;
using System.Threading;
using System.Net;

//request a connection
req = (HttpWebRequest)HttpWebRequest.Create(cameraUrl);
//gives a chance for timeouts, errors or loss of connection to occur
req.AllowWriteStreamBuffering = true;
req.Timeout = 20000;
//retrieve response (if successful)
res = req.GetResponse();
//image returned
stream = res.GetResponseStream();
I have a lot of stuff in the background managing connections, data, security etc., which I have shortened to the above code.
My current code to convert the image to the desired output:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for other uses
gray = currentFrame.Convert<Gray, Byte>();
I understand there is a method to save an image locally temporarily but I would need to avoid that for security purposes. I'm looking more for a direct conversion to help save processing power.
Am I overlooking something? All help is appreciated.
Thanks for reading. (I will update this if anyone requests any more details)
-Dave
You've got a couple of potential bottlenecks, not the least of which is that you're probably JPEG-decoding the stream into an Image, then converting that into a Bitmap, and then into an OpenCV image.
One way around this is to bypass .NET imaging entirely. This would involve using libjpeg directly. There's a free C# port of it here, and IIRC you can hook into it to get called on a per-scanline basis to fill up a buffer.
The downside is that you're decoding JPEG data in managed code, which will run at least 1.5x slower than the equivalent C, although frankly I would expect network speed to dwarf this immensely.
OpenCV should be able to read JPEG images directly (want to guess what it uses under the hood? Survey says: libjpeg), which means you can buffer up the entire stream, hand it to OpenCV, and bypass the .NET layer entirely.
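I'm not sure which Emgu CV versions expose which wrappers, but as a sketch of that last suggestion, recent versions have CvInvoke.Imdecode, which accepts the buffered JPEG bytes directly (usings for Emgu.CV, Emgu.CV.CvEnum and Emgu.CV.Structure assumed):

// Sketch: decode the buffered JPEG bytes with OpenCV (via Emgu CV), skipping System.Drawing.
byte[] jpegBytes;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);          // buffer the whole HTTP response
    jpegBytes = ms.ToArray();
}

var mat = new Mat();
CvInvoke.Imdecode(jpegBytes, ImreadModes.Color, mat);         // decodes to BGR, 8-bit
Image<Bgr, byte> currentFrame = mat.ToImage<Bgr, byte>();
Image<Gray, byte> gray = currentFrame.Convert<Gray, byte>();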
I believe I found the answer to my problem. I tried Vano Maisuradze's idea of processing in memory, which improved the FPS by a small margin (not immediately noticeable without testing). Also, thanks to Plinth's answer I now have an understanding of multi-threading, and I can optimise this as I progress, since I can split the algorithms up to work in parallel.
What I think is the real cause is the network speed, not the algorithm delay. As Vano pointed out, timing with a stopwatch showed the algorithms didn't actually consume that much. So with or without the algorithms the speed is about the same, if I optimise with threading so the next frame is being collected while the previous one finishes processing.
I did some testing on some physical Cisco routers and got the same result, if a bit slower when messing around with clock speeds and bandwidths, which was noticeable. So I need to find a way to retrieve frames over the network faster. A very big thank you to everyone who answered and helped me understand this better!
Conclusion:
Multi-threading to optimise
Processing in memory instead of converting constantly
Better networking solutions (Higher bandwidth and speeds)
Edit: here is the code to retrieve an image and process it in memory, for anyone who finds this looking for help:
public void getFrames(object sender, EventArgs e)
{
    // Gets a frame from the IP cam.
    // Replace "IPADDRESS", "USERNAME", "PASSWORD"
    // with the respective data for your camera.
    string sourceURL = "http://IPADDRESS/snapshot.cgi?user=USERNAME&pwd=PASSWORD";

    // Used to store the image retrieved in memory.
    // Note: the buffer must be large enough to hold the whole snapshot.
    byte[] buffer = new byte[640 * 480];
    int read, total = 0;

    // Send a request to the peripheral via HTTP.
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
    WebResponse resp = req.GetResponse();

    // Get the image capture after receiving a response.
    // Note: just a snapshot, not a steady stream.
    Stream stream = resp.GetResponseStream();
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }

    // Convert the in-memory bytes to a bitmap and store it in a picturebox.
    pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
}

private void button1_Click(object sender, EventArgs e)
{
    // Trigger the function to run whenever the application is idle.
    Application.Idle += new EventHandler(getFrames);
}
You can save several images in memory (a buffer) and then start processing from the buffer.
Something like this:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for later use
gray = currentFrame.Convert<Gray, Byte>();
SaveToBuffer(gray);

Queue<Emgu.CV.Image<Gray, Byte>> buffer = new Queue<Emgu.CV.Image<Gray, Byte>>();
bool canProcess = false;

// ...

private void SaveToBuffer(Emgu.CV.Image<Gray, Byte> img)
{
    buffer.Enqueue(img);
    canProcess = buffer.Count > 100;
}

private void Process()
{
    if (canProcess)
    {
        buffer.Dequeue();
        // Processing logic goes here...
    }
    else
    {
        // Buffer is still loading...
    }
}
But note that you will need enough RAM to store the images in memory, and you should adjust the buffer size to meet your requirements.

Saving Surface to Bitmap and optimizing DirectX screen capture in C#

After a whole day of testing I came up with this code, which captures the current screen using DirectX (SlimDX) and saves it to a file:
Device d;

public DxScreenCapture()
{
    PresentParameters present_params = new PresentParameters();
    present_params.Windowed = true;
    present_params.SwapEffect = SwapEffect.Discard;
    d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, present_params);
}

public Surface CaptureScreen()
{
    Surface s = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    d.GetFrontBufferData(0, s);
    return s;
}
Then I do the following:
DxScreenCapture sc = new DxScreenCapture();
..code here

private void button1_Click(object sender, EventArgs e)
{
    Stopwatch stopwatch = new Stopwatch();
    // Begin timing
    stopwatch.Start();

    Surface s = sc.CaptureScreen();
    Surface.ToFile(s, @"c:\temp\test.png", ImageFileFormat.Png);
    s.Dispose();

    stopwatch.Stop();
    textBox1.Text = ("Elapsed:" + stopwatch.Elapsed.TotalMilliseconds);
}
The results are:
0. when I don't save surface: avg. elapsed time: 80-90ms
1. when I also save Surface to BMP file: format: ImageFileFormat.Bmp , avg. elapsed time: 120ms, file size: 7mb
2. when I also save Surface to PNG file: format: ImageFileFormat.Png , avg. elapsed time: 800ms, file size: 300kb
The questions are:
1. Is it possible to optimise the current image capture? According to this article, DirectX screen capture should be faster than GDI. For me, GDI usually takes 20 ms to get a Bitmap, whereas it takes 80 ms to get a Surface using DX (both without saving).
http://www.codeproject.com/Articles/274461/Very-fast-screen-capture-using-DirectX-in-Csharp
2a. How can I save a Surface to PNG format faster? Saving the surface to a 7 MB BMP file takes almost 6 times less time than saving the same surface to a 300 KB PNG file.
2b. Is it possible to save a Surface directly to a Bitmap so I don't have to create temporary files?
That is, instead of Surface -> image file, then image file -> Bitmap, just do Surface -> Bitmap.
that's all for now. I'll gladly accept any tips, thanks!
Edit:
Just solved 2b by doing:
Bitmap bitmap = new Bitmap(SlimDX.Direct3D9.Surface.ToStream(s, SlimDX.Direct3D9.ImageFileFormat.Bmp));
Edit2:
Surface.ToFile(s, @"C:\temp\test.bmp", ImageFileFormat.Bmp);
Bitmap bitmap = new Bitmap(@"C:\temp\test.bmp");
is faster than:
Bitmap bitmap = new Bitmap(SlimDX.Direct3D9.Surface.ToStream(s, SlimDX.Direct3D9.ImageFileFormat.Bmp));
by 100 ms!!! Yeah, I couldn't believe my eyes either ;) I don't like the idea of creating a temporary file, but a 50% performance increase (100-200 ms instead of 200-300+) is a very good thing.
If you don't want to use the SlimDX library, you can also try:
public Bitmap GimmeBitmap(Surface s)
{
    GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, s);
    return new Bitmap(gs);
}
and try the same for .png - I did not test the performance, but it should be faster than using a temporary file on disk :)
And as for the 1st question - try creating the surface only once, then on every screenshot just copy the device's front-buffer data into it and create the bitmap:
d.GetFrontBufferData(0, s);
return new Bitmap(SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, s));
This should save you some time :)
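Putting the two tips together, a minimal sketch of the reuse pattern applied to the capture class from the question (class and method names here are mine, and it sticks to the SlimDX calls already used above):

// Sketch: create the offscreen surface once and reuse it for every capture.
public class ReusableScreenCapture
{
    private readonly Device d;
    private readonly Surface s;   // reused across captures

    public ReusableScreenCapture()
    {
        var pp = new PresentParameters { Windowed = true, SwapEffect = SwapEffect.Discard };
        d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, pp);
        s = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap CaptureToBitmap()
    {
        d.GetFrontBufferData(0, s);   // refresh the reused surface
        // Surface.ToStream yields a BMP-encoded stream without touching disk.
        return new Bitmap(Surface.ToStream(s, ImageFileFormat.Bmp));
    }
}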
If performance really is an issue, you should consider writing your code in C++ instead. That way you don't need an external library and can directly access the back buffer of your video card via the Windows API + DirectX.
Accessing the back (video) buffer is a lot faster than reading from the front buffer.
To optimize performance (which also answers your question 1), use multithreading (see TPL or threading depending on your needs).
Here is an idea of how to do it in C++: CodeProject examples in C++.
From my personal experience, DirectX was by far the fastest.
These steps:
1. reading the back buffer into a bitmap to process the data
2. spawning a new thread to repeat step 1 while the previous thread is still busy
take about 10-40 ms together, implemented in C++ (on an NVIDIA GeForce GTX 970M), depending on the current workload of the hardware.
Possible middle course
If you want to stick with C# but also need the performance, writing a C++ DLL for .NET (see .NET Programming with C++/CLI (Visual C++)) which reads the video buffer and returns the data to your C# code will do the trick.

WinRT image handling

A friend and I spent the better part of last night nearly tearing our hair out trying to work with some images in a metro app. We got images into the app with the share charm, and then I wanted to do some other work with them, cropping the images and saving them back into the appdata folder. This proved extremely frustrating.
My question, at the end of all this, is going to be "What's the proper way of doing this, without feeling like I'm hammering together a bunch of mismatched jigsaw puzzle pieces?"
When sharing multiple images with the app, they come in as a list of Windows.Storage.StorageFiles. Here's some code used to handle that.
var storageItems = await _shareOperation.Data.GetStorageItemsAsync();
foreach (StorageFile item in storageItems)
{
    var stream = await item.OpenReadAsync();
    var properties = await item.Properties.GetImagePropertiesAsync();
    var image = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
    image.SetSource(stream);
    images.Add(image);
}
Some searching online has indicated that currently, a Windows.UI.Xaml.Media.Imaging.WriteableBitmap is the only thing capable of letting you access the pixel data in the image. This question includes a helpful answer full of extension methods for saving images to a file, so we used those.
Our problems were the worst when I tried opening the files again later. I did something similar to before:
var files = await ApplicationData.Current.LocalFolder.GetFilesAsync();
foreach (var file in files)
{
    var fileStream = await file.OpenReadAsync();
    var properties = await file.Properties.GetImagePropertiesAsync();
    var bitmap = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
    bitmap.SetSource(fileStream);
    System.IO.Stream stream = bitmap.PixelBuffer.AsStream();
Here comes a problem. How long is this stream, if I want the bytes out of it?
// CRASH! Length isn't supported on an IRandomAccessStream.
var pixels = new byte[fileStream.Length];
Ok try again.
var pixels = new byte[stream.Length];
This works, except... if the image is compressed, the stream is shorter than you would expect, so you will eventually get an out of bounds exception. For now pretend it's an uncompressed bitmap.
await _stream.ReadAsync(pixels, 0, pixels.Length);
Well, guess what. Even though I said bitmap.SetSource(fileStream); in order to read in the data, my byte array is still full of zeroes. I have no idea why. If I pass this same bitmap into my UI through the sample data group, the image shows up just fine. So it clearly has the pixel data in that bitmap somewhere, but I can't read it out of bitmap.PixelBuffer? Why not?
Finally, here's what ended up actually working.
    var decoder = await BitmapDecoder.CreateAsync(BitmapDecoder.PngDecoderId, fileStream);
    var data = await decoder.GetPixelDataAsync();
    var bytes = data.DetachPixelData();
    /* process my data, finally */
} // end of that foreach I started a while ago
So now I have my image data, but I still have a big problem. In order to do anything with it, I have to make assumptions about its format. I have no idea whether it's RGBA, RGB, ABGR, BGRA, or whatever else it could be. If I guess wrong, my processing simply fails. I've had dozens of test runs spit out zero-byte and corrupted images, upside-down images (???), wrong colors, etc. I would have expected to find some of this info in the properties I got from calling await file.Properties.GetImagePropertiesAsync(), but no luck. That only contains the image width and height, plus some other useless things. Minimal documentation here.
So, why is this process so painful? Is this just reflecting the immaturity of the libraries right now, and can I expect it to get better? Or is there already some standard way of doing this? I wish it were as easy as in System.Drawing. That gave you all the data you ever needed, and happily loaded any image type correctly, without making you deal with streams yourself.
From what I have seen - when you are planning on loading the WriteableBitmap with a stream - you don't need to check the image dimensions - just do new WriteableBitmap(1,1), then call SetSource().
Not sure why you were thinking var pixels = new byte[fileStream.Length]; would work, since the fileStream has the compressed image bytes and not a pixel array.
You might need to seek to the beginning of the stream to get the pixels array:
var pixelStream = pixelBuffer.AsStream();
var bytes = new byte[pixelStream.Length];
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Read(bytes, 0, bytes.Length);
I had started working on a WinRT port of WriteableBitmapEx - maybe it could help you: http://bit.ly/WriteableBitmapExWinRT. I have not tested it well and it is based on an older version of WBX, but it is fairly complete in terms of feature support. It might be a tad slower than it could be, too.
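On the pixel-format complaint at the end of the question: if I remember the WinRT imaging API correctly, BitmapDecoder.GetPixelDataAsync has an overload that lets you request a specific layout instead of guessing. Roughly (treat this as a sketch):

// Sketch: ask the decoder for a known pixel layout (BGRA8, straight alpha)
// so the processing code doesn't have to guess the channel order.
var decoder = await BitmapDecoder.CreateAsync(fileStream);   // auto-detects the codec
var pixelData = await decoder.GetPixelDataAsync(
    BitmapPixelFormat.Bgra8,
    BitmapAlphaMode.Straight,
    new BitmapTransform(),
    ExifOrientationMode.IgnoreExifOrientation,
    ColorManagementMode.DoNotColorManage);

byte[] bytes = pixelData.DetachPixelData();   // now known to be BGRA, 4 bytes per pixel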

Why does a ListBox consume so much memory when images are acquired from isolated storage, as opposed to an image embedded in the assembly?

I have created a List Box in Windows Phone Mango.
I will be using it to display many photos in one long list.
For testing purposes I am just using one photo and repeating it for each index.
When loading the image from isolated storage, 100 images consume 170 MB of memory.
When I embed the same image in the assembly as a resource (i.e. /Images/image1.jpg), 10,000 images only consume 40 MB. In fact it never goes above 40 MB; whatever is happening here (UI virtualization?) works well.
I have to use isolated storage for my images, as image updates are downloaded periodically to the phone. Can I make it perform the same as an embedded image?
Can someone explain to me:
When I acquire the image from isolated storage, why does it use so much memory? The more I load, the higher it gets.
When I acquire it from the Images folder, as part of the assembly, how can it load tens of thousands of images while the memory never increases? Is this UI virtualization?
Thanks for any help in advance.
Here is my code. (Only started developing 6 months ago if it looks a bit dodgy!)
//GET IMAGE FROM ISOLATED STORAGE
IsolatedStorageFile insISF = IsolatedStorageFile.GetUserStoreForApplication();
for (int i = 0; i < 100; i++)
{
    byte[] byte1;
    using (IsolatedStorageFileStream isfs = insISF.OpenFile("\\Photos\\image1.jpg", FileMode.Open, FileAccess.Read))
    {
        byte1 = new byte[isfs.Length];
        isfs.Read(byte1, 0, byte1.Length);
        isfs.Close();
    }
    Image image = new Image();
    MemoryStream ms = new MemoryStream(byte1);
    BitmapImage bi = new BitmapImage();
    bi.SetSource(ms);
    image.Source = bi;
    listBox2.Items.Add(image);
    //NOTE I HAVE TRIED "ms.Dispose();" HERE BUT IT DOES NOT HELP.
}
//END

//GET IMAGE FROM IMAGES FOLDER AS PART OF ASSEMBLY
for (int i = 0; i < 10000; i++)
{
    BitmapImage bi = new BitmapImage(new Uri("/Images/image1.jpg", UriKind.Relative));
    Image image = new Image();
    image.Source = bi;
    listBox2.Items.Add(image);
}
//END
When you load an image from isolated storage, you load it byte by byte, i.e. every image is loaded completely.
By contrast, when you create a BitmapImage from a URI it uses the DelayCreation option by default, so the application only loads the images that are currently on screen. See the BitmapCreateOptions enum for more details.
Off-thread decoding of images on Mango
Could it be the IsolatedStorageFileStream and MemoryStream objects?
Try using the GC to monitor the memory usage of each iteration in your for loop.
The virtualization of your ListBox should work the same in either case.
By the way, you should use the using construct on the MemoryStream as well.
In your code the byte array is reclaimed by the garbage collector; the only objects you should explicitly dispose once they go out of scope are the ones that can be included in a using statement...
I never tried it, but I think that if you read the images directly from the file stream (instead of reading from the stream, writing into an array and re-reading from the array), and wrap the stream in a using statement, the memory should be freed. See the sketch below.
Hope this helps.
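A minimal sketch of that suggestion, reusing the loop body from the question; the CreateOptions line is optional and assumes the Mango-era BitmapCreateOptions.BackgroundCreation value:

// Sketch: feed the isolated-storage stream straight to the BitmapImage,
// skipping the intermediate byte[] and MemoryStream.
using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream isfs = store.OpenFile("\\Photos\\image1.jpg", FileMode.Open, FileAccess.Read))
{
    BitmapImage bi = new BitmapImage();
    bi.CreateOptions = BitmapCreateOptions.BackgroundCreation;   // decode off the UI thread (WP 7.1)
    bi.SetSource(isfs);   // read directly from the file stream

    Image image = new Image { Source = bi };
    listBox2.Items.Add(image);
}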
