I am taking a picture, decoding the byte array returned by the TakePicture method, and then rotating the bitmap 270 degrees. The problem is that I seem to run out of memory, and I do not know how to solve it. Here is the code:
Bitmap image = BitmapFactory.DecodeByteArray(data, 0, data.Length);
data = null;

// "matrix" is assumed to be a Matrix configured elsewhere with the 270-degree rotation
Bitmap rotatedBitmap = Bitmap.CreateBitmap(image, 0, 0, image.Width,
                                           image.Height, matrix, true);
Please have a look at Load Large Bitmaps Efficiently in the official Xamarin documentation, which explains how to load large images into memory without the application throwing an OutOfMemoryException, by loading a smaller, subsampled version into memory.
Read Bitmap Dimensions and Type
async Task<BitmapFactory.Options> GetBitmapOptionsOfImageAsync()
{
    BitmapFactory.Options options = new BitmapFactory.Options
    {
        InJustDecodeBounds = true
    };

    // The result will be null because InJustDecodeBounds == true.
    Bitmap result = await BitmapFactory.DecodeResourceAsync(Resources, Resource.Drawable.someImage, options);

    int imageHeight = options.OutHeight;
    int imageWidth = options.OutWidth;

    _originalDimensions.Text = string.Format("Original Size= {0}x{1}", imageWidth, imageHeight);
    return options;
}
Load a Scaled Down Version into Memory
public static int CalculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
{
    // Raw height and width of the image
    float height = options.OutHeight;
    float width = options.OutWidth;
    double inSampleSize = 1D;

    if (height > reqHeight || width > reqWidth)
    {
        int halfHeight = (int)(height / 2);
        int halfWidth = (int)(width / 2);

        // Calculate an inSampleSize that is a power of 2; the decoder will use a power of two anyway.
        while ((halfHeight / inSampleSize) > reqHeight && (halfWidth / inSampleSize) > reqWidth)
        {
            inSampleSize *= 2;
        }
    }

    return (int)inSampleSize;
}
Load the Image Asynchronously
public async Task<Bitmap> LoadScaledDownBitmapForDisplayAsync(Resources res, BitmapFactory.Options options, int reqWidth, int reqHeight)
{
    // Calculate inSampleSize
    options.InSampleSize = CalculateInSampleSize(options, reqWidth, reqHeight);

    // Decode bitmap with inSampleSize set
    options.InJustDecodeBounds = false;
    return await BitmapFactory.DecodeResourceAsync(res, Resource.Drawable.someImage, options);
}
And call it to load the image, say in OnCreate:
protected async override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);
    SetContentView(Resource.Layout.Main);

    _imageView = FindViewById<ImageView>(Resource.Id.resized_imageview);

    BitmapFactory.Options options = await GetBitmapOptionsOfImageAsync();
    Bitmap bitmapToDisplay = await LoadScaledDownBitmapForDisplayAsync(Resources,
                                                                       options,
                                                                       150, // for 150 x 150 resolution
                                                                       150);
    _imageView.SetImageBitmap(bitmapToDisplay);
}
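Since the question decodes a camera byte array rather than a resource, the same two-pass pattern can be applied with DecodeByteArray before rotating. A minimal sketch, reusing the CalculateInSampleSize helper above; the 1024x1024 target size is an assumption:

// First pass: read only the dimensions
var options = new BitmapFactory.Options { InJustDecodeBounds = true };
BitmapFactory.DecodeByteArray(data, 0, data.Length, options);

// Second pass: decode a subsampled bitmap, then rotate it
options.InSampleSize = CalculateInSampleSize(options, 1024, 1024);
options.InJustDecodeBounds = false;
Bitmap image = BitmapFactory.DecodeByteArray(data, 0, data.Length, options);
data = null;

Bitmap rotatedBitmap = Bitmap.CreateBitmap(image, 0, 0, image.Width, image.Height, matrix, true);

// Free the intermediate bitmap as soon as the rotated copy exists
// (safe here because the rotation matrix forces CreateBitmap to return a new bitmap)
image.Recycle();
image.Dispose();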
Hope it helps.
We are using a camera that captures up to 60 frames per second and provides Bitmaps for us to use in our codebase.
As our WPF app requires, these bitmaps are scaled by a scaling factor; that scaling step is by far the biggest obstacle to actually displaying 60 fps. I am aware of new Bitmap(Bitmap source, int width, int height), which is obviously the simplest way to resize a Bitmap.
Nevertheless, I am trying to implement a "manual" approach using BitmapData and pointers. I have come up with the following:
public static Bitmap /*myMoBetta*/ResizeBitmap(this Bitmap bmp, double scaleFactor)
{
    int desiredWidth = (int)(bmp.Width * scaleFactor),
        desiredHeight = (int)(bmp.Height * scaleFactor);

    var scaled = new Bitmap(desiredWidth, desiredHeight, bmp.PixelFormat);
    int formatSize = (int)Math.Ceiling(Image.GetPixelFormatSize(bmp.PixelFormat) / 8.0);

    BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, bmp.PixelFormat);
    BitmapData scaledData = scaled.LockBits(new Rectangle(0, 0, scaled.Width, scaled.Height), ImageLockMode.WriteOnly, scaled.PixelFormat);

    unsafe
    {
        var srcPtr = (byte*)bmpData.Scan0.ToPointer();
        var destPtr = (byte*)scaledData.Scan0.ToPointer();

        int scaledDataSize = scaledData.Stride * scaledData.Height;
        int nextPixel = (int)(1 / scaleFactor) * formatSize;

        Parallel.For(0, scaledDataSize - formatSize,
            i =>
            {
                for (int j = 0; j < formatSize; j++)
                {
                    destPtr[i + j] = srcPtr[i * nextPixel + j];
                }
            });
    }

    bmp.UnlockBits(bmpData);
    bmp.Dispose();
    scaled.UnlockBits(scaledData);

    return scaled;
}
This assumes scaleFactor < 1.
Actually using this algorithm does not seem to work, though. How exactly are the bits of each pixel arranged in memory? My guess was that calling Image.GetPixelFormatSize() and dividing its result by 8 gives the number of bytes per pixel; but copying only formatSize bytes every 1 / scaleFactor * formatSize bytes results in a corrupted image.
What am I missing?
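For context on the layout question: the buffer behind a BitmapData is stored row by row, each row occupying Stride bytes (which may include padding past the visible pixels), so the pixel at (x, y) starts at y * Stride + x * bytesPerPixel. A destination index therefore has to be split into a row and a column before it can be mapped back to the source; treating the whole buffer as one flat run of pixels, as the loop above does, mixes up rows and strides and produces a corrupted image. A nearest-neighbour sketch of the per-row mapping, using the same variables as above (not part of the original post, and not a tested drop-in replacement):

// Replaces the unsafe block above
Parallel.For(0, scaled.Height, y =>
{
    unsafe
    {
        // Nearest source row for this destination row
        int srcY = (int)(y / scaleFactor);
        byte* srcRow = (byte*)bmpData.Scan0 + srcY * bmpData.Stride;
        byte* destRow = (byte*)scaledData.Scan0 + y * scaledData.Stride;

        for (int x = 0; x < scaled.Width; x++)
        {
            // Nearest source column for this destination column
            int srcX = (int)(x / scaleFactor);
            for (int j = 0; j < formatSize; j++)
            {
                destRow[x * formatSize + j] = srcRow[srcX * formatSize + j];
            }
        }
    }
});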
After some more research I came across OpenCV, which has its own .NET wrapper, Emgu.CV, containing methods for faster resizing.
My ResizeBitmap() function has shrunk significantly:
public static Bitmap ResizeBitmap(this Bitmap bmp, int width, int height)
{
    var desiredSize = new Size(width, height);

    var src = new Emgu.CV.Image<Rgb, byte>(bmp);
    var dest = new Emgu.CV.Image<Rgb, byte>(desiredSize);

    Emgu.CV.CvInvoke.Resize(src, dest, desiredSize);

    bmp.Dispose();
    src.Dispose();

    return dest.ToBitmap();
}
I have not benchmarked this thoroughly, but while debugging, this implementation reduced execution time from 22 ms with new Bitmap(source, width, height) to about 7 ms.
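A usage sketch matching the original scaleFactor-based call site (the frame source is hypothetical, and note that ResizeBitmap disposes the bitmap passed in):

Bitmap frame = AcquireFrame(); // hypothetical: whatever hands you the camera Bitmap
Bitmap scaled = frame.ResizeBitmap(
    (int)(frame.Width * scaleFactor),
    (int)(frame.Height * scaleFactor));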
I am working with Xamarin.Forms. I have to select images from the gallery, resize them, and then upload them to a server. But I don't know how to resize a selected image to a given size.
Please let me know how I can do this.
This can be used with a stream (if you're using the Media Plugin https://github.com/jamesmontemagno/MediaPlugin) or standard byte arrays.
// If you already have the byte[]
byte[] resizedImage = await CrossImageResizer.Current.ResizeImageWithAspectRatioAsync(originalImageBytes, 500, 1000);
// If you have a stream, such as:
// var file = await CrossMedia.Current.PickPhotoAsync(options);
// var originalImageStream = file.GetStream();
byte[] resizedImage = await CrossImageResizer.Current.ResizeImageWithAspectRatioAsync(originalImageStream, 500, 1000);
I tried to use CrossImageResizer.Current..., but I did not find it in the Media Plugin. Instead I found an option called MaxWidthHeight, which only works if you also set the PhotoSize = PhotoSize.MaxWidthHeight option.
For example:
var file = await CrossMedia.Current.PickPhotoAsync(new PickMediaOptions() { PhotoSize = PhotoSize.MaxWidthHeight, MaxWidthHeight = 600 });
var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions { PhotoSize = PhotoSize.MaxWidthHeight, MaxWidthHeight = 600 });
Sadly, there isn't a good cross-platform image resizer (at least none that I've found at the time of this post). Image processing wasn't really designed to happen in cross-platform code for iOS and Android; it's much faster and cleaner to perform it on each platform with platform-specific code. You can do this using dependency injection and the DependencyService (or any other service locator or IoC container).
AdamP gives a great answer on how to do this: Platform Specific Image Resizing.
Here is the code taken from the link above.
iOS
public class MediaService : IMediaService
{
    public byte[] ResizeImage(byte[] imageData, float width, float height)
    {
        UIImage originalImage = ImageFromByteArray(imageData);
        var originalHeight = originalImage.Size.Height;
        var originalWidth = originalImage.Size.Width;

        nfloat newHeight = 0;
        nfloat newWidth = 0;

        if (originalHeight > originalWidth)
        {
            newHeight = height;
            nfloat ratio = originalHeight / height;
            newWidth = originalWidth / ratio;
        }
        else
        {
            newWidth = width;
            nfloat ratio = originalWidth / width;
            newHeight = originalHeight / ratio;
        }

        width = (float)newWidth;
        height = (float)newHeight;

        UIGraphics.BeginImageContext(new SizeF(width, height));
        originalImage.Draw(new RectangleF(0, 0, width, height));
        var resizedImage = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();

        var bytesImagen = resizedImage.AsJPEG().ToArray();
        resizedImage.Dispose();
        return bytesImagen;
    }
}
Android
public class MediaService : IMediaService
{
    public byte[] ResizeImage(byte[] imageData, float width, float height)
    {
        // Load the bitmap
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.InPurgeable = true; // InPurgeable lets the system reclaim the bitmap's pixel memory if needed
        Bitmap originalImage = BitmapFactory.DecodeByteArray(imageData, 0, imageData.Length, options);

        float newHeight = 0;
        float newWidth = 0;

        var originalHeight = originalImage.Height;
        var originalWidth = originalImage.Width;

        if (originalHeight > originalWidth)
        {
            newHeight = height;
            float ratio = originalHeight / height;
            newWidth = originalWidth / ratio;
        }
        else
        {
            newWidth = width;
            float ratio = originalWidth / width;
            newHeight = originalHeight / ratio;
        }

        Bitmap resizedImage = Bitmap.CreateScaledBitmap(originalImage, (int)newWidth, (int)newHeight, true);
        originalImage.Recycle();

        using (MemoryStream ms = new MemoryStream())
        {
            resizedImage.Compress(Bitmap.CompressFormat.Png, 100, ms);
            resizedImage.Recycle();
            return ms.ToArray();
        }
    }
}
WinPhone
public class MediaService : IMediaService
{
    private MediaImplementation mi = new MediaImplementation();

    public byte[] ResizeImage(byte[] imageData, float width, float height)
    {
        byte[] resizedData;

        using (MemoryStream streamIn = new MemoryStream(imageData))
        {
            WriteableBitmap bitmap = PictureDecoder.DecodeJpeg(streamIn, (int)width, (int)height);

            float Height = 0;
            float Width = 0;

            float originalHeight = bitmap.PixelHeight;
            float originalWidth = bitmap.PixelWidth;

            if (originalHeight > originalWidth)
            {
                Height = height;
                float ratio = originalHeight / height;
                Width = originalWidth / ratio;
            }
            else
            {
                Width = width;
                float ratio = originalWidth / width;
                Height = originalHeight / ratio;
            }

            using (MemoryStream streamOut = new MemoryStream())
            {
                bitmap.SaveJpeg(streamOut, (int)Width, (int)Height, 0, 100);
                resizedData = streamOut.ToArray();
            }
        }

        return resizedData;
    }
}
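These platform implementations are resolved from shared code through the Xamarin.Forms DependencyService. A minimal sketch of the shared interface and call site, matching the method signature above (the registration attribute shown in the comment is the usual way to expose each platform class; the byte array and target sizes are placeholders):

// Shared project: the interface each platform MediaService implements
public interface IMediaService
{
    byte[] ResizeImage(byte[] imageData, float width, float height);
}

// In each platform project, register the implementation, e.g. above the MediaService class:
// [assembly: Xamarin.Forms.Dependency(typeof(MediaService))]

// Shared code: resolve the platform implementation and call it
byte[] resized = Xamarin.Forms.DependencyService.Get<IMediaService>()
                                                .ResizeImage(originalImageBytes, 500, 500);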
EDIT: If you are already using FFImageLoading in your project then you can just use that for your platform.
https://github.com/luberda-molinet/FFImageLoading
I fixed this in my project; this was the best way for me.
When taking a photo or picking an image from the gallery, you can change the size with these properties:
var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
    PhotoSize = PhotoSize.Custom,
    CustomPhotoSize = 90 // Resize to 90% of original
});
For more information: https://github.com/jamesmontemagno/MediaPlugin
I am new to Xamarin.Android, and I am getting an OutOfMemoryException with the code below. Here profilebitMap is a Bitmap and mProfileImage is an ImageView. I have tried this with a using block and with Dispose/Recycle calls as well, but I still get the same error after returning to the image page multiple times. Please help me with this.
if (!string.IsNullOrEmpty(profile.Image))
{
    string[] fileExtension = profile.Image.Split('/');
    string _imagePath = System.IO.Path.Combine(_documentsPath.ToString(), profile.ID + fileExtension[fileExtension.Length - 1]);

    if (File.Exists(_imagePath))
    {
        profilebitMap = await BitmapFactory.DecodeFileAsync(_imagePath);
        //profilebitMap = Util.Base64ToBitmap(appPreferencce.getAccessKey(profile.Image));
    }
    else
    {
        profilebitMap = Util.GetImageBitmapFromUrl(profile.Image, appPreferencce.getAccessKey("username"), appPreferencce.getAccessKey("password"));

        using (var stream = new MemoryStream())
        {
            profilebitMap.Compress(Bitmap.CompressFormat.Png, 100, stream);
            var imageBytes = stream.ToArray();
            File.WriteAllBytes(_imagePath, imageBytes);
        }
    }
}
else
{
    profilebitMap = BitmapFactory.DecodeResource(this.ApplicationContext.Resources, Resource.Drawable.dummyuser);
}

CircularDrawable d = new CircularDrawable(profilebitMap,
                                          (int)Util.ConvertDpToPx(ApplicationContext, margin),
                                          Util.ConvertDpToPx(ApplicationContext, strokeWidth),
                                          new Android.Graphics.Color(ContextCompat.GetColor(this, Resource.Color.normal3)));
mProfileImage.SetBackgroundDrawable(d);
You don't have to reinvent the wheel; you can use libraries that load pictures for you in a few lines. You should take a look at the excellent Picasso library.
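For reference, a minimal sketch of what that looks like with the Square.Picasso binding for Xamarin.Android (the package and the exact member casing are assumptions; Picasso downsamples the image for you and manages the bitmap's lifetime):

// Inside the Activity; mProfileImage is the ImageView from the question
Picasso.With(this)
       .Load(new Java.IO.File(_imagePath)) // or .Load(profile.Image) for a URL
       .Resize(300, 300)                   // decode no larger than this
       .CenterInside()
       .Into(mProfileImage);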
Bye
To avoid the OutOfMemoryException, we should use the BitmapFactory.Options class.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(getResources(), R.id.myimage, options);
int imageHeight = options.outHeight;
int imageWidth = options.outWidth;
String imageType = options.outMimeType;
Setting the inJustDecodeBounds property to true while decoding avoids memory allocation, returning null for the bitmap object but setting outWidth, outHeight and outMimeType.
Now that the image dimensions are known, they can be used to decide if the full image should be loaded into memory or if a subsampled version should be loaded instead.
For example, an image with resolution 2048x1536 that is decoded with an inSampleSize of 4 produces a bitmap of approximately 512x384. Loading this into memory uses 0.75MB rather than 12MB for the full image.
public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Raw height and width of image
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;

        // Calculate the largest inSampleSize value that is a power of 2 and keeps both
        // height and width larger than the requested height and width.
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }

    return inSampleSize;
}
Please see this URL: https://developer.android.com/training/displaying-bitmaps/load-bitmap.html
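Applied to the question's Xamarin code, the same two-pass approach might look like this when decoding the cached profile image from disk (a sketch; the 300x300 target and the reuse of the C# CalculateInSampleSize helper from the first answer are assumptions):

// First pass: read only the image dimensions
var options = new BitmapFactory.Options { InJustDecodeBounds = true };
await BitmapFactory.DecodeFileAsync(_imagePath, options);

// Second pass: decode a subsampled bitmap close to the size actually displayed
options.InSampleSize = CalculateInSampleSize(options, 300, 300);
options.InJustDecodeBounds = false;
profilebitMap = await BitmapFactory.DecodeFileAsync(_imagePath, options);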
I'm trying to compare two Images via their byte content. However, they do not match.
Both images were generated from the same source image, using the same method with the same parameters. I am guessing that something in the image generation or the way I convert to a byte array is not deterministic. Does anyone know where the non-deterministic behavior is occurring and whether or not I can readily force deterministic behavior for my unit testing?
This method within my test class converts the image to a byte array - is image.Save deterministic? Is memStream.ToArray() deterministic?
private static byte[] ImageToByteArray(Image image)
{
    byte[] actualBytes;
    using (MemoryStream memStream = new MemoryStream())
    {
        image.Save(memStream, ImageFormat.Bmp);
        actualBytes = memStream.ToArray();
    }
    return actualBytes;
}
Here is the unit test, which is failing. TestImageLandscapeDesertResized_300_300 was generated from TestImageLandscapeDesert using ImageHelper.ResizeImage(testImageLandscape, 300, 300), then saved to a file and loaded into the project's resource file. If all calls in my code were deterministic for the same input parameters, this test should pass.
public void ResizeImage_Landscape_SmallerLandscape()
{
    Image testImageLandscape = Resources.TestImageLandscapeDesert;
    Image expectedImage = Resources.TestImageLandscapeDesertResized_300_300;

    byte[] expectedBytes = ImageToByteArray(expectedImage);
    byte[] actualBytes;

    using (Image resizedImage = ImageHelper.ResizeImage(testImageLandscape, 300, 300))
    {
        actualBytes = ImageToByteArray(resizedImage);
    }

    Assert.IsTrue(expectedBytes.SequenceEqual(actualBytes));
}
The method under test: it shrinks the input image so its height and width are less than maxHeight and maxWidth, preserving the existing aspect ratio. Some of the graphics calls may be non-deterministic; I cannot tell from Microsoft's limited documentation.
public static Image ResizeImage(Image image, int maxWidth, int maxHeight)
{
    decimal width = image.Width;
    decimal height = image.Height;
    decimal newWidth;
    decimal newHeight;

    // Calculate new width and height
    if (width > maxWidth || height > maxHeight)
    {
        // Need to preserve the original aspect ratio
        decimal originalAspectRatio = width / height;
        decimal widthReductionFactor = maxWidth / width;
        decimal heightReductionFactor = maxHeight / height;

        if (widthReductionFactor < heightReductionFactor)
        {
            newWidth = maxWidth;
            newHeight = newWidth / originalAspectRatio;
        }
        else
        {
            newHeight = maxHeight;
            newWidth = newHeight * originalAspectRatio;
        }
    }
    else
    {
        // Return a copy of the image if smaller than the allowed width and height
        return new Bitmap(image);
    }

    // Resize image
    Bitmap bitmap = new Bitmap((int)newWidth, (int)newHeight, PixelFormat.Format48bppRgb);
    Graphics graphic = Graphics.FromImage(bitmap);
    graphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
    graphic.DrawImage(image, 0, 0, (int)newWidth, (int)newHeight);
    graphic.Dispose();

    return bitmap;
}
This eventually worked. I don't know whether this is a good idea for unit tests, but with the GDI+ logic being non-deterministic (or my logic for calling it), this seems the best approach.
I use the MS Fakes shimming feature to shim the dependent calls and verify that the expected values are passed to them. Then I call the native methods to get the required functionality for the rest of the method under test. Finally, I verify a few attributes of the returned image.
Still, I would prefer to perform a straight comparison of expected output against actual output...
[TestMethod]
[TestCategory("ImageHelper")]
[TestCategory("ResizeImage")]
public void ResizeImage_LandscapeTooLarge_SmallerLandscape()
{
    Image testImageLandscape = Resources.TestImageLandscapeDesert;

    const int HEIGHT = 300;
    const int WIDTH = 300;
    const int EXPECTED_WIDTH = WIDTH;
    const int EXPECTED_HEIGHT = (int)(EXPECTED_WIDTH / (1024m / 768m));
    const PixelFormat EXPECTED_FORMAT = PixelFormat.Format48bppRgb;

    bool calledBitMapConstructor = false;
    bool calledGraphicsFromImage = false;
    bool calledGraphicsDrawImage = false;

    using (ShimsContext.Create())
    {
        ShimBitmap.ConstructorInt32Int32PixelFormat = (instance, w, h, f) => {
            calledBitMapConstructor = true;
            Assert.AreEqual(EXPECTED_WIDTH, w);
            Assert.AreEqual(EXPECTED_HEIGHT, h);
            Assert.AreEqual(EXPECTED_FORMAT, f);
            ShimsContext.ExecuteWithoutShims(() => {
                ConstructorInfo constructor = typeof(Bitmap).GetConstructor(new[] { typeof(int), typeof(int), typeof(PixelFormat) });
                Assert.IsNotNull(constructor);
                constructor.Invoke(instance, new object[] { w, h, f });
            });
        };

        ShimGraphics.FromImageImage = i => {
            calledGraphicsFromImage = true;
            Assert.IsNotNull(i);
            return ShimsContext.ExecuteWithoutShims(() => Graphics.FromImage(i));
        };

        ShimGraphics.AllInstances.DrawImageImageInt32Int32Int32Int32 = (instance, i, x, y, w, h) => {
            calledGraphicsDrawImage = true;
            Assert.IsNotNull(i);
            Assert.AreEqual(0, x);
            Assert.AreEqual(0, y);
            Assert.AreEqual(EXPECTED_WIDTH, w);
            Assert.AreEqual(EXPECTED_HEIGHT, h);
            ShimsContext.ExecuteWithoutShims(() => instance.DrawImage(i, x, y, w, h));
        };

        using (Image resizedImage = ImageHelper.ResizeImage(testImageLandscape, HEIGHT, WIDTH))
        {
            Assert.IsNotNull(resizedImage);
            Assert.AreEqual(EXPECTED_WIDTH, resizedImage.Size.Width);
            Assert.AreEqual(EXPECTED_HEIGHT, resizedImage.Size.Height);
            Assert.AreEqual(EXPECTED_FORMAT, resizedImage.PixelFormat);
        }
    }

    Assert.IsTrue(calledBitMapConstructor);
    Assert.IsTrue(calledGraphicsFromImage);
    Assert.IsTrue(calledGraphicsDrawImage);
}
A bit late to the table with this, but I'm adding it in case it helps anyone out. In my unit tests, this reliably compared images I had dynamically generated using GDI+.
private static bool CompareImages(string source, string expected)
{
    var image1 = new Bitmap($".\\{source}");
    var image2 = new Bitmap($".\\Expected\\{expected}");

    var converter = new ImageConverter();
    var image1Bytes = (byte[])converter.ConvertTo(image1, typeof(byte[]));
    var image2Bytes = (byte[])converter.ConvertTo(image2, typeof(byte[]));

    // ReSharper disable AssignNullToNotNullAttribute
    var same = image1Bytes.SequenceEqual(image2Bytes);
    // ReSharper restore AssignNullToNotNullAttribute

    return same;
}
I tried this:
string str = System.IO.Path.GetFileName(txtImage.Text);
string pth = System.IO.Directory.GetCurrentDirectory() + "\\Subject";
string fullpath = pth + "\\" + str;
Image NewImage = clsImage.ResizeImage(fullpath, 130, 140, true);
NewImage.Save(fullpath, ImageFormat.Jpeg);
public static Image ResizeImage(string file, int width, int height, bool onlyResizeIfWider)
{
    if (File.Exists(file) == false)
        return null;

    try
    {
        using (Image image = Image.FromFile(file))
        {
            // Prevent using the image's internal thumbnail
            image.RotateFlip(RotateFlipType.Rotate180FlipNone);
            image.RotateFlip(RotateFlipType.Rotate180FlipNone);

            if (onlyResizeIfWider == true)
            {
                if (image.Width <= width)
                {
                    width = image.Width;
                }
            }

            int newHeight = image.Height * width / image.Width;
            if (newHeight > height)
            {
                // Resize with height instead
                width = image.Width * height / image.Height;
                newHeight = height;
            }

            Image NewImage = image.GetThumbnailImage(width, newHeight, null, IntPtr.Zero);
            return NewImage;
        }
    }
    catch (Exception)
    {
        return null;
    }
}
Running the above code, I get an image 4-5 KB in size with very poor image quality.
The original image file is no larger than 1.5 MB. How can I improve the image quality of the result?
I think you should use ImageResizer; it's free, and resizing an image with it is very easy.
var settings = new ResizeSettings
{
    MaxWidth = thumbnailSize,
    MaxHeight = thumbnailSize,
    Format = "jpg"
};
settings.Add("quality", quality.ToString());

ImageBuilder.Current.Build(inStream, outStream, settings);
resized = outStream.ToArray();
You can also install it using Nuget package manager.
PM> Install-Package ImageResizer
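Putting it together for the 130x140 thumbnail from the question, a sketch of the full call (the stream handling and the quality value of 90 are assumptions):

byte[] resizedBytes;

using (var inStream = File.OpenRead(fullpath))
using (var outStream = new MemoryStream())
{
    var settings = new ResizeSettings
    {
        MaxWidth = 130,
        MaxHeight = 140,
        Format = "jpg"
    };
    settings.Add("quality", "90"); // JPEG quality, 0-100; higher keeps more detail

    ImageBuilder.Current.Build(inStream, outStream, settings);
    resizedBytes = outStream.ToArray();
}

// Overwrite the original only after its read stream has been closed
File.WriteAllBytes(fullpath, resizedBytes);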