Image resize optimization causes memory usage to skyrocket - C#

I made a function that takes an original image and resizes it to three different zoom scales: 16x, 10x, and 4x. As an example, say the original image is 1000x1000 and I declare that at 1x zoom its dimensions are 50x50. That means 4x zoom is 200x200, 10x zoom is 500x500, and 16x zoom is 800x800. So my function needs to resize the original 1000x1000 down to 800x800, then down to 500x500, then down to 200x200. Please note I have done this successfully; my question is about memory usage.
Below are two methods of doing so. Both work, but one causes a HUGE memory bloat, using roughly 3-4x more memory than the other. I prefer the second method because it loads significantly faster: instead of resizing each of the three images from the original image, it resizes each one from the previously resized image.
Notes: I'm using Xcode Instruments to measure memory usage. The ImageResizer class contains a function called "Resize" which resizes the image.
Method 1.)
public List<UIImage> InitImageList_BFObjects ( UIImage image, SizeF frameSize )
{
    List<UIImage> listOfImages = new List<UIImage>();
    for ( int i = 0; i < 3; i++ )
    {
        if ( i == 0 )
            zoomScale = 16f;
        else if ( i == 1 )
            zoomScale = 10f;
        else // if ( i == 2 )
            zoomScale = 4f;

        Resizer = new ImageResizer(image);
        Resizer.Resize(frameSize.Width * zoomScale, frameSize.Height * zoomScale);

        UIImage resizedImage = Resizer.ModifiedImage;
        listOfImages.Insert(0, resizedImage);
    }
    return listOfImages;
}
Method 1 works and uses very little memory. I ran this with a group of about 20 images; my app was at about 14 MB of memory usage after this loaded (measured with Xcode's Instruments).
Method 2.)
public List<UIImage> InitImageList_BFObjects ( UIImage image, SizeF frameSize )
{
    List<UIImage> listOfImages = new List<UIImage>();
    for ( int i = 0; i < 3; i++ )
    {
        if ( i == 0 )
            zoomScale = 16f;
        else if ( i == 1 )
            zoomScale = 10f;
        else // if ( i == 2 )
            zoomScale = 4f;

        if ( listOfImages.Count == 0 )
        {
            Resizer = new ImageResizer(image);
            Resizer.Resize(frameSize.Width * zoomScale, frameSize.Height * zoomScale);

            UIImage resizedImage = Resizer.ModifiedImage;
            listOfImages.Insert(0, resizedImage);
        }
        else
        {
            // THIS LINE CONTAINS THE MAIN DIFFERENCE BETWEEN METHOD 1 AND METHOD 2
            // Notice how it resizes from the most recent image in listOfImages rather than the original image
            Resizer = new ImageResizer(listOfImages[0]);
            Resizer.Resize(frameSize.Width * zoomScale, frameSize.Height * zoomScale);

            UIImage resizedImage = Resizer.ModifiedImage;
            listOfImages.Insert(0, resizedImage);
        }
    }
    return listOfImages;
}
Method 2 works but the memory usage skyrockets! I ran this with the same group of about 20 images; my app was at over 60 MB of memory usage after this loaded (again measured with Xcode's Instruments). Why is the memory usage so high? What is it about Method 2 that causes the memory to skyrocket? It's almost as if a variable is not getting cleaned up properly.
Additional Information: the ImageResizer Class
I cut the unneeded functions out of my ImageResizer class and renamed it "ImageResizer_Abridged". I even switched over to using this class to make sure I didn't accidentally cut out anything needed.
public class ImageResizer_Abridged
{
    UIImage originalImage = null;
    UIImage modifiedImage = null;

    public ImageResizer_Abridged ( UIImage image )
    {
        this.originalImage = image;
        this.modifiedImage = image;
    }

    /// <summary>
    /// stretch resize
    /// </summary>
    public void Resize( float width, float height )
    {
        UIGraphics.BeginImageContext( new SizeF( width, height ) );

        modifiedImage.Draw( new RectangleF( 0, 0, width, height ) );
        modifiedImage = UIGraphics.GetImageFromCurrentImageContext();

        UIGraphics.EndImageContext();
    }

    public UIImage OriginalImage
    {
        get
        {
            return this.originalImage;
        }
    }

    public UIImage ModifiedImage
    {
        get
        {
            return this.modifiedImage;
        }
    }
}
I created a simplified test project showing this problem.
Here is a dropbox link to the project: https://www.dropbox.com/s/4w7d87nn0aafph9/TestMemory.zip
Here is Method 1's Xcode Instruments screenshot as evidence (9 MB memory usage):
http://i88.photobucket.com/albums/k194/lampshade9909/AllImagesResizedFromOriginalImage_zps585228c6.jpg
Here is Method 2's Xcode Instruments screenshot as evidence (55 MB memory usage):
http://i88.photobucket.com/albums/k194/lampshade9909/SignificantIncreaseInMemoryUsage_zps19034bad.jpg
Below is the code block needed to run the test project
// Initialize my list of images
ListOfImages = new List<UIImage>();
for ( int i = 0; i < 30; i++ )
{
    // Create a UIImage containing my original image
    UIImage originalImage = UIImage.FromFile ("b2Bomber.png");

    float newWidth = 100f;
    float newHeight = 40f;
    float zoomScale;
    float resizedWidth, resizedHeight;
    UIImage resizedImage1;
    UIImage resizedImage2;

    // Basically, I want to take the originalImage and resize it twice.
    // Method 1.) Resize the originalImage and save it as resizedImage1. Resize the originalImage and save it as resizedImage2. We're finished!
    // Method 2.) Resize the originalImage and save it as resizedImage1. Resize resizedImage1 and save it as resizedImage2. We're finished!
    // The pro of Method 1 is that we get the best possible quality on all resized images. The con is that this takes a long time if we're doing dozens of very large images.
    // The pro of Method 2 is that it's faster than Method 1. This is why I want to use Method 2: its speed. But it has a HUGE con: its memory usage.
    // Please run this project on an iPad connected to Xcode's Instruments to monitor memory usage and see what I mean.

    zoomScale = 10f;
    resizedWidth = newWidth * zoomScale;
    resizedHeight = newHeight * zoomScale;
    UIGraphics.BeginImageContext( new SizeF( resizedWidth, resizedHeight ) );
    originalImage.Draw( new RectangleF( 0, 0, resizedWidth, resizedHeight ) );
    resizedImage1 = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();

    zoomScale = 4f;
    resizedWidth = newWidth * zoomScale;
    resizedHeight = newHeight * zoomScale;
    UIGraphics.BeginImageContext( new SizeF( resizedWidth, resizedHeight ) );

    // Run this project on an iPad and examine the memory usage in Xcode's Instruments.
    // The Real Memory Usage will be around 9 MB.
    // Uncomment this "originalImage.Draw" line to see this happening, and make sure to comment out the "resizedImage1.Draw" line.
    // originalImage.Draw( new RectangleF( 0, 0, resizedWidth, resizedHeight ) );

    // Run this project on an iPad and examine the memory usage in Xcode's Instruments.
    // The Real Memory Usage will be around 55 MB!!
    // My question is: why does the memory skyrocket when doing this, and how can I prevent it?
    // My app requires me to resize around a hundred images and I want to be able to resize an already-resized image (like in this example) without the memory usage skyrocketing like this...
    // Uncomment this "resizedImage1.Draw" line to see this happening, and make sure to comment out the "originalImage.Draw" line.
    resizedImage1.Draw( new RectangleF( 0, 0, resizedWidth, resizedHeight ) );
    resizedImage2 = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();

    // Add my resized images to the list of images
    ListOfImages.Add (resizedImage1);
    ListOfImages.Add (resizedImage2);
}

I'm not sure about your Resize code, but I have seen Scale do strange things. It's not really strange once you dig into it, but it's definitely not obvious.
Creating a UIImage can be very cheap, memory-wise, as long as its backing CGImage is not created. In other words, iOS might not immediately allocate a new backing CGImage that matches the new size; that allocation is deferred until the CGImage is needed.
In such a case it's possible for some code (like your Method 1) to require almost no additional memory when scaling up. However, your second method draws from an already scaled-up image (and must allocate its backing CGImage to do so), so it ends up requiring the memory earlier.
How can you check for this?
Compare your resizedImage.Size with the size of resizedImage.CGImage. If they don't match, then you're likely hitting this caching.
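A quick way to do that check from code (a minimal sketch, not from the original answer; resizedImage stands for any image returned by the resizer, and note that UIImage.Size is in points while the CGImage reports pixels, so factor in the image's scale):

// Minimal sketch: compare the size the UIImage reports with the size of its
// backing CGImage. Note that accessing CGImage forces the backing bitmap to exist.
UIImage resizedImage = listOfImages[0];   // hypothetical: any image from the list
Console.WriteLine("UIImage.Size (points): {0}", resizedImage.Size);
Console.WriteLine("UIImage.CurrentScale: {0}", resizedImage.CurrentScale);
Console.WriteLine("CGImage size (pixels): {0} x {1}",
    resizedImage.CGImage.Width, resizedImage.CGImage.Height);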
Notes
I say might because the caching logic is unknown (undocumented). I know that this can differ between the simulator and devices, and it also varies between iOS versions.
Caching is a good thing, but it can be surprising :-) I just wish this were documented.
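To put rough numbers on this (not part of the original answer): once every backing CGImage actually gets allocated, the test project's figures line up with simple arithmetic, assuming 4 bytes per pixel for a decoded RGBA bitmap:

// Back-of-the-envelope check against the ~55 MB figure from the test project:
// 30 iterations, a 100x40 frame, zoom scales 10x (1000x400) and 4x (400x160).
long perIteration = (1000L * 400 * 4) + (400L * 160 * 4); // ~1.86 MB per loop pass
long total = perIteration * 30;                           // 55,680,000 bytes, ~56 MB
Console.WriteLine(total);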

Have you checked whether Resizer implements the Dispose() method? I don't see you disposing it anywhere.
I believe your new line of code is applying the zoom to the entire image, hence the increased memory usage.
Resizer is zooming the ENTIRE image at the new zoomed-in scale, so an incoming 4 MB image becomes 8 MB, 16 MB, and 32 MB, consuming your memory.

UIImage implements IDisposable, so something will have to Dispose of it eventually. The Resize method appears to "lose" the reference to modifiedImage, so I'm going to Dispose() it there. Hopefully the caller is doing the same to all of the images in the list returned by InitImageList_BFObjects when it's done with them, or that class implements IDisposable itself and the responsibility is kicked up the line. But rest assured, the images being created need to be Dispose()d somewhere, sometime.
public class ImageResizer_Abridged
{
    private readonly UIImage originalImage;
    private UIImage modifiedImage;

    public ImageResizer_Abridged(UIImage image)
    {
        this.originalImage = image;
        this.modifiedImage = image;
    }

    /// <summary>
    /// stretch resize
    /// </summary>
    public void Resize(float width, float height)
    {
        UIGraphics.BeginImageContext(new SizeF(width, height));

        var oldImage = this.modifiedImage;
        this.modifiedImage.Draw(new RectangleF(0, 0, width, height));
        this.modifiedImage = UIGraphics.GetImageFromCurrentImageContext();
        // Only dispose intermediate images this class created itself;
        // the original image passed in may still be referenced by the caller.
        if (oldImage != this.originalImage)
            oldImage.Dispose();

        UIGraphics.EndImageContext();
    }

    public UIImage OriginalImage
    {
        get
        {
            return this.originalImage;
        }
    }

    public UIImage ModifiedImage
    {
        get
        {
            return this.modifiedImage;
        }
    }
}
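For completeness, a sketch (an assumption, not from the answer) of what the caller might do with the returned list once the images are no longer needed, assuming nothing else still references them:

// Dispose every image in the returned list when it is no longer needed,
// then clear the list so the managed references go away too.
foreach (UIImage img in listOfImages)
{
    img.Dispose();
}
listOfImages.Clear();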

Related

How to display real time updates to an image in Unity?

I am trying to create an application that generates a bitmap image every frame based on user actions and displays that image to the screen. I would like the application to also be able to update that image in Unity in real time as soon as the user makes another action.
I have created an application that does this and it works. However, it is very slow. My Update() method is attached below.
My idea was:
Capture user data (mouse location).
Convert that data into a special signal format that another program recognizes.
Have that program return a bitmap image.
Use that bitmap as a texture and update the existing texture with the new image.
Code:
UnityEngine.Texture2D oneTexture;
Bitmap currentBitmap;
private int frameCount = 0;

void Update()
{
    // Show mouse position in unity environment
    double xValue = Input.mousePosition.x;
    double yValue = Screen.height - Input.mousePosition.y;
    myPoints = "" + xValue + "," + yValue + Environment.NewLine;

    // Show heatmap being recorded.
    signals = Program.ConvertStringToSignalsList(myPoints);
    currentBitmap = Program.CreateMouseHeatmap(Screen.width, Screen.height, signals);

    // Update old heatmap texture.
    UpdateTextureFromBitmap();
    ri.texture = oneTexture;
    ri.rectTransform.sizeDelta = new Vector2(Screen.width, Screen.height);
    frameCount++;

    // Write points to Database.
    StartCoroutine(WriteToDB(xValue, yValue)); // <<<<< Comment out when playback.
}

private void UpdateTextureFromBitmap()
{
    // Convert Bitmap object into byte array instead of creating actual
    // .bmp image file each frame.
    byte[] imageBytes = ImageToBytes(currentBitmap);
    BMPLoader loader = new BMPLoader();
    BMPImage img = loader.LoadBMP(imageBytes);

    // Only initialize the Texture once.
    if (frameCount == 0)
    {
        oneTexture = img.ToTexture2D();
    }
    else
    {
        Color32[] imageData = img.imageData;
        oneTexture.SetPixels32(imageData);
        oneTexture.Apply();
    }
}
I was wondering if someone could help me improve the rate at which the image updates to the screen. I know it is possible to make this program much faster, but I am so new to Unity and C# that I don't know how. If there is a completely different way I should be going about this, I am open to that too. Any help would be appreciated. Thanks!
Also, below is a screenshot of the Profiler showing the breakdown of CPU usage. Currently it looks like each frame takes about 500 ms.
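One possible direction, not from the original post: skip the per-frame BMP encode/decode entirely and copy the bitmap's pixel buffer straight into the texture. A hedged sketch, assuming the bitmap and texture have identical dimensions, the texture was created with TextureFormat.BGRA32 (which matches GDI+'s 32bpp ARGB byte layout on little-endian systems), and a vertical flip is acceptable or handled elsewhere; CopyBitmapToTexture is a hypothetical helper name:

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using UnityEngine;

static class TextureCopy
{
    // Hypothetical helper: copies a System.Drawing.Bitmap into an existing Texture2D
    // without encoding/decoding a BMP each frame.
    // Note: GDI+ stores rows top-down while Unity textures are bottom-up, so the
    // result appears vertically flipped unless the rows are reversed first.
    public static void CopyBitmapToTexture(Bitmap bmp, Texture2D tex)
    {
        BitmapData data = bmp.LockBits(
            new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadOnly,
            PixelFormat.Format32bppArgb);
        try
        {
            byte[] raw = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, raw, 0, raw.Length);
            tex.LoadRawTextureData(raw); // BGRA bytes -> TextureFormat.BGRA32
            tex.Apply(false);
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }
}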

Memory efficient bitmap handling in mono for android

I have an application which allows the user to take a picture. After the picture has been taken, the user can send it to my webserver. But before I do this, the app needs to resize the bitmap, because I'd like consistent sizes sent to my webserver.
Anyway, the code I use to load the bitmap into memory and then manipulate it seems to occupy a lot of memory. This is the code currently being used:
/*
 * This method is used to calculate the image size
 * and also resize/scale the image down to 1600 x 1200.
 */
private void ResizeBitmapAndSendToWebServer(string album_id) {
    Bitmap bm = null;

    // This line is taking up too much memory each time..
    Bitmap bitmap = MediaStore.Images.Media.GetBitmap(Android.App.Application.Context.ApplicationContext.ContentResolver, fileUri);

    /*
     * My question is: could I do the next image manipulation
     * before I even load the bitmap into memory?
     */
    int width = bitmap.Width;
    int height = bitmap.Height;

    if (width >= height) { // <-- Landscape picture
        float scaledWidth = (float)height / width;
        if (width > 1600) {
            bm = Bitmap.CreateScaledBitmap (bitmap, 1600, (int)(1600 * scaledWidth), true);
        } else {
            bm = bitmap;
        }
    } else {
        float scaledHeight = (float)width / height;
        if (height > 1600) {
            bm = Bitmap.CreateScaledBitmap (bitmap, (int)(1600 * scaledHeight), 1600, true);
        } else {
            bm = bitmap;
        }
    }
    // End of question code block.

    MemoryStream stream = new MemoryStream ();
    bitmap.Compress (Bitmap.CompressFormat.Jpeg, 80, stream);
    byte[] bitmapData = stream.ToArray ();
    bitmap.Dispose ();

    app.api.SendPhoto (Base64.EncodeToString (bitmapData, Base64Flags.Default), album_id);
}
What would be a good and clean way for solving such memory problems?
EDIT 1:
After reading other posts, it became clear to me that I am doing some inefficient things in my code. This is, step by step, what I have been doing:
Load the full bitmap into memory.
Decide whether it is landscape or not.
Then create a new bitmap with the right dimensions.
Then convert this bitmap into a byte array.
Dispose the initial bitmap (but never remove the scaled bitmap from memory).
What I really should be doing:
Determine the real bitmap dimensions without loading it into memory, with:
private void FancyMethodForDeterminingImageDimensions() {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.InJustDecodeBounds = true;
    BitmapFactory.DecodeFile(fileUri.Path, options);

    // Now the dimensions of the bitmap are known without loading
    // the bitmap into memory.
    // I am not going to explain this further; I think the purpose is clear enough.
    int outWidth = options.OutWidth;
    int outHeight = options.OutHeight;
}
If set to true, the decoder will return null (no bitmap), but the
out... fields will still be set, allowing the caller to query the
bitmap without having to allocate the memory for its pixels.
Now I know the real dimensions, so I can downsample the bitmap before I load it into memory.
(In my case) convert the bitmap to a Base64 string and send it.
Dispose everything so the memory gets cleared.
I can't currently test this because I am not on my development machine. Can anyone give me feedback on whether this is the right way? It would be appreciated.
private void ResizeBitmapAndSendToWebServer(string album_id) {
    BitmapFactory.Options options = new BitmapFactory.Options ();
    options.InJustDecodeBounds = true; // <-- This makes sure the bitmap is not loaded into memory.

    // Then get the properties of the bitmap.
    BitmapFactory.DecodeFile (fileUri.Path, options);
    Android.Util.Log.Debug ("[BITMAP]", string.Format("Original width : {0}, and height : {1}", options.OutWidth, options.OutHeight));

    // CalculateInSampleSize calculates the right aspect ratio for the picture and then
    // the factor it will be downsampled with.
    options.InSampleSize = CalculateInSampleSize (options, 1600, 1200);
    Android.Util.Log.Debug ("[BITMAP]", string.Format("Downsampling factor : {0}", CalculateInSampleSize (options, 1600, 1200)));

    // Now that we know the downsampling factor, the right-sized bitmap can be loaded into memory.
    // So we set InJustDecodeBounds to false because we now know the exact dimensions.
    options.InJustDecodeBounds = false;

    // Now we are loading it with the correct options. And saving precious memory.
    Bitmap bm = BitmapFactory.DecodeFile (fileUri.Path, options);
    Android.Util.Log.Debug ("[BITMAP]", string.Format("Downsampled width : {0}, and height : {1}", bm.Width, bm.Height));

    // Convert it to Base64 by first converting the bitmap to
    // a byte array. Then convert the byte array to a Base64 string.
    MemoryStream stream = new MemoryStream ();
    bm.Compress (Bitmap.CompressFormat.Jpeg, 80, stream);
    byte[] bitmapData = stream.ToArray ();
    bm.Dispose ();

    app.api.SendPhoto (Base64.EncodeToString (bitmapData, Base64Flags.Default), album_id);
}
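The code above calls CalculateInSampleSize without showing it. Here is a minimal sketch of that helper, modeled on the standard Android "Loading Large Bitmaps Efficiently" pattern (an assumption about the original implementation, which the post does not include):

// Returns a power-of-two subsampling factor for BitmapFactory based on the
// requested maximum width/height; larger factors mean smaller decoded bitmaps.
private int CalculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
{
    int height = options.OutHeight;
    int width = options.OutWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth)
    {
        int halfHeight = height / 2;
        int halfWidth = width / 2;

        // Keep doubling the factor while the half-size dimensions divided by the
        // factor still cover the requested dimensions.
        while ((halfHeight / inSampleSize) >= reqHeight && (halfWidth / inSampleSize) >= reqWidth)
        {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}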

Finding an Image Inside Another Image

I'm trying to build an application that solves a puzzle (trying to develop a graph algorithm), and I don't want to enter sample input by hand all the time.
Edit: I'm not trying to build a game. I'm trying to build an agent which plays the game "SpellSeeker".
Say I have an image (see attachment) on the screen with numbers in it; I know the locations of the boxes, and I have the exact images for these numbers. What I want to do is simply tell which image (number) is in the corresponding box.
So I guess I need to implement
bool isImageInsideImage(Bitmap numberImage, Bitmap Portion_Of_ScreenCap) or something like that.
What I've tried is (using AForge libraries)
public static bool Contains(this Bitmap template, Bitmap bmp)
{
    const Int32 divisor = 4;
    const Int32 epsilon = 10;

    ExhaustiveTemplateMatching etm = new ExhaustiveTemplateMatching(0.9f);

    TemplateMatch[] tm = etm.ProcessImage(
        new ResizeNearestNeighbor(template.Width / divisor, template.Height / divisor).Apply(template),
        new ResizeNearestNeighbor(bmp.Width / divisor, bmp.Height / divisor).Apply(bmp)
    );

    if (tm.Length == 1)
    {
        Rectangle tempRect = tm[0].Rectangle;

        if (Math.Abs(bmp.Width / divisor - tempRect.Width) < epsilon
            &&
            Math.Abs(bmp.Height / divisor - tempRect.Height) < epsilon)
        {
            return true;
        }
    }

    return false;
}
But it returns false when searching for a black dot in this image.
How can I implement this?
I'm answering my question since I've found the solution:
This worked out for me:
System.Drawing.Bitmap sourceImage = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\1.jpg");
System.Drawing.Bitmap template = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\2.jpg");

// create template matching algorithm's instance
// (set similarity threshold to 92.1%)
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.921f);

// find all matchings with the similarity specified above
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);

// highlight found matchings
BitmapData data = sourceImage.LockBits(
    new Rectangle(0, 0, sourceImage.Width, sourceImage.Height),
    ImageLockMode.ReadWrite, sourceImage.PixelFormat);
foreach (TemplateMatch m in matchings)
{
    Drawing.Rectangle(data, m.Rectangle, Color.White);
    MessageBox.Show(m.Rectangle.Location.ToString());
    // do something else with matching
}
sourceImage.UnlockBits(data);
The only problem was that it was finding all (58) boxes for said game. But changing the value 0.921f to 0.98f made it perfect, i.e. it finds only the specified number's image (the template).
Edit: I actually have to use different similarity thresholds for different pictures. I found the optimal values by trial and error; in the end I have a function like
float getSimilarityThreshold(int number)
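A minimal sketch of what such a lookup could look like (the values below are placeholders for illustration, not the tuned thresholds from the post; assumes System.Collections.Generic):

// Per-number similarity thresholds found by trial and error; fall back to a default.
static readonly Dictionary<int, float> similarityThresholds = new Dictionary<int, float>
{
    { 1, 0.98f },  // placeholder values
    { 2, 0.95f },
};

static float getSimilarityThreshold(int number)
{
    float threshold;
    return similarityThresholds.TryGetValue(number, out threshold) ? threshold : 0.921f;
}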
A better approach is to build a custom class which holds all the information you need instead of relying on the image itself.
For example:
public class MyTile
{
    public Bitmap TileBitmap;
    public Location CurrentPosition;
    public int Value;
}
This way you can "move around" the tile class and read the value from the Value field instead of analyzing the image. You just draw whatever image the class holds at the position it's currently holding.
Your tiles can be held in a list like:
private List<MyTile> MyTiles = new List<MyTile>();
Extend the class as needed (and remember to Dispose of those images when they are no longer needed).
If you really want to see whether there is an image inside another image, you can check out this extension I wrote for another post (although in VB code):
Vb.Net Check If Image Existing In Another Image

How can I reduce memory usage when rendering a PDF page into a CGBitmapContext?

I'm using the code below to render a preview of a PDF page. However, it is using loads of memory (2-3 MB per page).
In the device logs I see:
<Error>: CGBitmapContextInfoCreate: unable to allocate 2851360 bytes for bitmap data
I really don't need the bitmap to be rendered with 8 bits per color channel. How can I change the code to have it rendered in grayscale, or with fewer bits per channel?
I would also be fine with a solution where the bitmap is rendered in a maximum resolution of x/y and then the resulting image is zoomed to the requested size. The PDF will be rendered in detail afterwards by a CATiledLayer anyway.
Also according to Apple's documentation, CGBitmapContextCreate() returns NIL if the context cannot be created (because of memory). But in MonoTouch there is only the constructor to create a context, hence I'm unable to check if creation failed or not.
If I was able to, I could just skip the pretender image.
UIImage oBackgroundImage = null;

using(CGColorSpace oColorSpace = CGColorSpace.CreateDeviceRGB())
// This is the line that is causing the issue.
using(CGBitmapContext oContext = new CGBitmapContext(null, iWidth, iHeight, 8, iWidth * 4, oColorSpace, CGImageAlphaInfo.PremultipliedFirst))
{
    // Fill background white.
    oContext.SetFillColor(1f, 1f, 1f, 1f);
    oContext.FillRect(oTargetRect);

    // Calculate the rectangle to fit the page into.
    RectangleF oCaptureRect = new RectangleF(0, 0, oTargetRect.Size.Width / fScaleToApply, oTargetRect.Size.Height / fScaleToApply);

    // GetDrawingTransform() doesn't scale up; that's why we let it calculate the transformation for a smaller area
    // if the current page is smaller than the area we have available (fScaleToApply > 1). Afterwards we scale up again.
    CGAffineTransform oDrawingTransform = oPdfPage.GetDrawingTransform(CGPDFBox.Media, oCaptureRect, 0, true);

    // Now scale context up to final size.
    oContext.ScaleCTM(fScaleToApply, fScaleToApply);

    // Concat the PDF transformation.
    oContext.ConcatCTM(oDrawingTransform);

    // Draw the page.
    oContext.InterpolationQuality = CGInterpolationQuality.Medium;
    oContext.SetRenderingIntent (CGColorRenderingIntent.Default);
    oContext.DrawPDFPage(oPdfPage);

    // Capture an image.
    using(CGImage oImage = oContext.ToImage())
    {
        oBackgroundImage = UIImage.FromImage( oImage );
    }
}
I really don't need the bitmap to be rendered with 8 bits per color channel.
...
using(CGColorSpace oColorSpace = CGColorSpace.CreateDeviceRGB())
Have you tried providing a different color space?
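For example, a minimal sketch of the same context created with a grayscale color space (an assumption about what the answer means, keeping the rest of the drawing code unchanged); one byte per pixel instead of four cuts the buffer to a quarter of its size:

// Grayscale: 8 bits per component, one component, no alpha,
// so bytesPerRow drops from iWidth * 4 to iWidth.
using(CGColorSpace oColorSpace = CGColorSpace.CreateDeviceGray())
using(CGBitmapContext oContext = new CGBitmapContext(null, iWidth, iHeight, 8, iWidth, oColorSpace, CGImageAlphaInfo.None))
{
    // ... same drawing code as in the question ...
}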
where the bitmap is rendered in a maximum resolution of x/y
...
using(CGBitmapContext oContext = new CGBitmapContext(null, iWidth, iHeight, 8, iWidth * 4, oColorSpace, CGImageAlphaInfo.PremultipliedFirst))
You can also control the bitmap size and other parameters that directly affect how much memory the bitmap requires.
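And a small sketch of capping the preview dimensions before creating the context (not from the answer; the 512 pixel cap is an arbitrary illustrative value), since the question notes a CATiledLayer renders the page in detail afterwards anyway:

// Clamp the preview dimensions so the bitmap buffer stays small; the resulting
// UIImage can then be scaled up to the requested display size by UIKit.
static void ClampPreviewSize(ref int iWidth, ref int iHeight, int iMaxDimension = 512)
{
    int iLargest = Math.Max(iWidth, iHeight);
    if (iLargest <= iMaxDimension)
        return;
    float fScale = iMaxDimension / (float)iLargest;
    iWidth = (int)(iWidth * fScale);
    iHeight = (int)(iHeight * fScale);
}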
Also according to Apple's documentation, CGBitmapContextCreate() returns NIL if the context cannot be created (because of memory).
If an invalid object (like null) is returned, then the C# instance will have a Handle equal to IntPtr.Zero. This is true for any ObjC object, since init can return nil while a .NET constructor cannot return null.
Also according to Apple's documentation, CGBitmapContextCreate() returns NIL if the context cannot be created (because of memory). But in MonoTouch there is only the constructor to create a context, hence I'm unable to check if creation failed or not. If I was able to, I could just skip the pretender image.
This is actually easy:
CGBitmapContext context;
try {
    context = new CGBitmapContext (...);
} catch (Exception ex) {
    context = null;
}
if (context != null) {
    using (context) {
        ...
    }
}
or you could also just surround the entire using clause in an exception handler:
try {
    using (var context = new CGBitmapContext (...)) {
        ...
    }
} catch {
    // we failed
    oBackgroundImage = null;
}

C# Image.Clone Out of Memory Exception

Why am I getting an out of memory exception?
So this dies in C# on the first time through:
splitBitmaps.Add(neededImage.Clone(rectDimensions, neededImage.PixelFormat));
Where splitBitmaps is a List<Bitmap>, BUT this works in VB for at least 4 iterations:
arlSplitBitmaps.Add(Image.Clone(rectDimensions, Image.PixelFormat))
Where arlSplitBitmaps is a simple ArrayList. (And yes, I've tried ArrayList in C#.)
This is the full section:
for (Int32 splitIndex = 0; splitIndex <= numberOfResultingImages - 1; splitIndex++)
{
    Rectangle rectDimensions;

    if (splitIndex < numberOfResultingImages - 1)
    {
        rectDimensions = new Rectangle(splitImageWidth * splitIndex, 0,
            splitImageWidth, splitImageHeight);
    }
    else
    {
        rectDimensions = new Rectangle(splitImageWidth * splitIndex, 0,
            sourceImageWidth - (splitImageWidth * splitIndex), splitImageHeight);
    }

    splitBitmaps.Add(neededImage.Clone(rectDimensions, neededImage.PixelFormat));
}
neededImage is a Bitmap by the way.
I can't find any useful answers on the intarweb, especially not why it works just fine in VB.
Update:
I actually found a reason (sort of) for this working, but forgot to post it. If I remember correctly, it has to do with converting the image to a Bitmap instead of just trying to clone the raw Image.
Clone() may also throw an Out of memory exception when the coordinates specified in the Rectangle are outside the bounds of the bitmap. It will not clip them automatically for you.
I found that I was using Image.Clone to crop a bitmap and the width took the crop outside the bounds of the original image. This causes an Out of Memory error. It seems a bit strange, but it's worth knowing.
I got this too when I tried to use the Clone() method to change the pixel format of a bitmap. If memory serves, I was trying to convert a 24 bpp bitmap to an 8 bit indexed format, naively hoping that the Bitmap class would magically handle the palette creation and so on. Obviously not :-/
This is a reach, but I've often found that when pulling images directly from disk, it's better to copy them to a new bitmap and dispose of the disk-bound image. I've seen great improvements in memory consumption when doing so.
Dave M. is on the money too... make sure to dispose when finished.
I struggled to figure this out recently; the answers above are correct. The key to solving this issue is to ensure the rectangle is actually within the boundaries of the image. See the example below of how I solved it.
In a nutshell, I checked whether the area being cloned fell outside the area of the image, and clamped it if so.
// Clamp the rectangle so it never extends past the right or bottom edge of the image;
// Clone() throws an Out of Memory exception instead of clipping for you.
int allowableWidth = localImage.Width - rect.Left;   // widest the crop can be, starting at rect.Left
if (rect.Width > allowableWidth)
{
    rect.Width = allowableWidth;
}

int allowableHeight = localImage.Height - rect.Top;  // tallest the crop can be, starting at rect.Top
if (rect.Height > allowableHeight)
{
    rect.Height = allowableHeight;
}

cropped = ((Bitmap)localImage).Clone(rect, System.Drawing.Imaging.PixelFormat.DontCare);
Make sure that you're calling .Dispose() properly on your images, otherwise unmanaged resources won't be freed. I wonder how many images you're actually creating here -- hundreds? Thousands?
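A minimal sketch of that cleanup, assuming the cloned pieces are only needed transiently (e.g. saved or uploaded right after cloning):

// Dispose each cloned piece as soon as it has been used so the unmanaged
// GDI+ memory behind it is released promptly.
foreach (Bitmap piece in splitBitmaps)
{
    // ... save / upload / draw the piece here ...
    piece.Dispose();
}
splitBitmaps.Clear();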
