Memory issue in JPEG decoding in WP7 - C#

I'm working on a Windows Phone 7 camera app. I need to take the captured stream, fix its rotation using an EXIF helper, and save it as a JPEG with quality, orientation, and output-size parameters. I followed an EXIF rotation article to fix the rotation. But the core problem is that I need to decode the JPEG stream to a bitmap first and then perform the rotation fix as mentioned, in order to save the picture to the media library.
I use the following code:
private WriteableBitmap DecodeImage(Stream photo, int angle)
{
    WriteableBitmap source = PictureDecoder.DecodeJpeg(photo);
    photo.Seek(0, SeekOrigin.Begin);
    System.GC.Collect();
    UiDispatcher.BeginInvoke(() =>
    {
        MessageBox.Show(App.LogMemory("after decode"));
    });
    switch (angle)
    {
        case 90:
        case 270:
            return RotateBitmap(source, source.PixelHeight,
                source.PixelWidth, angle);
        case 180:
            return RotateBitmap(source, source.PixelWidth,
                source.PixelHeight, angle);
        default:
            return source;
    }
}
In the RotateBitmap method I have the rotation logic as specified in the link, but it creates a new WriteableBitmap object from the source as follows:

WriteableBitmap target = new WriteableBitmap(source.PixelWidth, source.PixelHeight); // source is the bitmap passed in as an argument
The problem is that
PictureDecoder.DecodeJpeg consumes about 30 MB for my camera-captured stream,
and creating the new bitmap in the rotate method consumes about 30 MB more, resulting in a spike of 60 MB of application memory.
This causes the application to crash from memory pressure on lower-end (256 MB) Windows Phone devices.
Why does decoding the JPEG take 30 MB and rotating it another 30 MB? (I tried setting the source and target bitmaps in the rotate method to null and forcing GC, but it was of no use.) Applications hardly get 60 MB on these devices. How can I meet this requirement?
Any ideas? How can I optimize memory consumption in these cases?
Note: I need the bitmap returned from the rotate method as the result, because I need to use that bitmap to save the JPEG with the desired output size and quality.

When you use JPEG decoding, you usually end up with the photo at its full size.
In case it was taken with an 8 MP (roughly 8,000,000 pixels) camera, the calculation is this:
8,000,000 pixels × 32 bits = 256,000,000 bits for one picture in memory (which is roughly 30 MB)
(Let me remind you that the HTC Titan II has a 16 MP camera, so if you used a photo at full size, it would take up roughly 62 MB in memory!)
Obviously, to create just one WriteableBitmap, you need 30 MB. In order to manipulate the photo in some way, you usually can't do it in place. In other words, you need to create a copy, and that's why the memory usage doubles. Windows Phone has a way of preventing such big memory consumption by automatically lowering the resolution of the loaded picture, but only when you use it with BitmapImage and the SetSource method, which takes a JPEG stream as a parameter.
I wrote about it some time ago in my article How to open and work with large photos on Windows Phone, and it's mentioned in the performance considerations for Windows Phone.
So, basically, you should cut down the size of the loaded photo (do you need it at full size anyway?), and the BitmapImage class can easily do it for you, automatically keeping it under 2000x2000 px, which is roughly 15 MB at most. So you should be fine if you don't want to be bothered with the details.
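A minimal sketch of that decode path, assuming a Windows Phone OS 7.1 (Mango) project; the stream name is a placeholder. The DecodePixelWidth/DecodePixelHeight properties ask the decoder to produce a smaller bitmap up front instead of decoding at full resolution:

```csharp
// Sketch: decode a JPEG stream at reduced resolution on Windows Phone 7.1.
// "photoStream" is a placeholder for the captured camera stream.
BitmapImage bi = new BitmapImage();
bi.CreateOptions = BitmapCreateOptions.None; // decode immediately, not lazily
bi.DecodePixelWidth = 800;                   // decoder scales down while decoding
bi.SetSource(photoStream);                   // full-size JPEG goes in, but only the
                                             // 800px-wide bitmap is allocated
WriteableBitmap wb = new WriteableBitmap(bi); // now cheap enough to rotate/re-encode
```

Because the scaling happens inside the decoder, the full-resolution 30 MB bitmap is never allocated at all.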

Related

c# reduced image quality

I am writing a program similar to TeamViewer, but I have a problem: the screen resolution is too big. Below is how I am generating the image from the screen.
byte[] ScreenShut()
{
    using (Bitmap bmp = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height))
    using (Graphics gr = Graphics.FromImage(bmp))
    using (MemoryStream ms = new MemoryStream())
    {
        bmp.SetResolution(96.0F, 96.0F);
        gr.CopyFromScreen(0, 0, 0, 0, new Size(bmp.Width, bmp.Height));
        bmp.Save(ms, ImageFormat.Png);
        return ms.ToArray(); // GetBuffer() can return trailing unused bytes
    }
}
How can I reduce the quality of the incoming picture?
Save it as an 8-bit JPEG using this Bitmap constructor overload:

public Bitmap(
    int width,
    int height,
    PixelFormat format
)
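A related option, as a hedged sketch (not part of the original answer): re-encode the capture as JPEG with an explicit quality setting rather than PNG, which usually shrinks the payload considerably for screen-sharing purposes:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;
using System.Windows.Forms;

static class ScreenCapture
{
    // Capture the primary screen and encode it as JPEG at a chosen quality (0-100).
    public static byte[] ScreenShotJpeg(long quality)
    {
        using (var bmp = new Bitmap(Screen.PrimaryScreen.Bounds.Width,
                                    Screen.PrimaryScreen.Bounds.Height))
        using (var gr = Graphics.FromImage(bmp))
        using (var ms = new MemoryStream())
        {
            gr.CopyFromScreen(0, 0, 0, 0, bmp.Size);

            // Find the built-in JPEG encoder and hand it the quality parameter.
            ImageCodecInfo jpeg = ImageCodecInfo.GetImageEncoders()
                .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
            var p = new EncoderParameters(1);
            p.Param[0] = new EncoderParameter(
                System.Drawing.Imaging.Encoder.Quality, quality);

            bmp.Save(ms, jpeg, p);
            return ms.ToArray();
        }
    }
}
```

A quality of around 30-50 is often acceptable for remote-desktop-style viewing and is far smaller than a full-screen PNG.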
I am writing a program similar to TeamViewer
I'll assume you are referring to the RDP/desktop-sharing aspects.
Your problem is that you are not taking any prior frames into account, so a lot of redundant data is being transmitted. Generally, not a great deal of the screen changes from moment to moment. You need to compare prior frames to the current frame to determine what has changed, and send only the deltas. Your problem is therefore essentially that of how to stream moving images (consecutive frames) in a reasonably fast fashion.
The problem can be solved with any streaming-video solution. Perhaps H.264?
You will find that video codecs don't just work on the current frame but also on prior frames. You can thus think of the screen as a slice, moving through time, of a much larger rectangular prism. So solving it in a purely 2D fashion, like trying to reduce the bit depth or spatial resolution, won't be sufficient.
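A toy illustration of the delta idea (the method name and tile size are made up for the sketch): split each frame into fixed-size chunks, compare against the previous frame, and retransmit only the chunks that changed:

```csharp
using System;
using System.Collections.Generic;

static class FrameDiff
{
    // Report which fixed-size tiles differ between two frames.
    // Frames are raw byte arrays of equal length; tileBytes is an arbitrary chunk size.
    public static List<int> ChangedTiles(byte[] prev, byte[] curr, int tileBytes)
    {
        var changed = new List<int>();
        for (int offset = 0, tile = 0; offset < curr.Length; offset += tileBytes, tile++)
        {
            int len = Math.Min(tileBytes, curr.Length - offset);
            for (int i = 0; i < len; i++)
            {
                if (prev[offset + i] != curr[offset + i])
                {
                    changed.Add(tile); // only this tile needs retransmission
                    break;
                }
            }
        }
        return changed;
    }
}
```

Real codecs do far more (motion compensation, quantization), but even this naive diff avoids resending the large unchanged regions of a typical desktop.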

Downsampling the Kinect 2's Color Camera input

I'm using the Kinect 2 for Windows and the C# version of the SDK. If needed, writing a separate C++ lib or using C#'s unsafe regions for better performance is definitely an option.
I'm trying to downsample the input of the Kinect's color camera, as 1920x1080 pixels @ 30 fps is a bit much. But I cannot find a built-in function to reduce the resolution (very odd; am I missing something?).
My next idea was to store the data in a large byte[] and then selectively sample from that byte[] directly into another byte[] to reduce the amount of data.
int ratio = full.Length / small.Length;
int bpp = (int)frameDescription.BytesPerPixel;
for (int i = 0; i < small.Length; i += bpp)
{
    Array.Copy(full, i * ratio, small, i, bpp);
}
However, this method gives me a very funny result: the image has the correct width and height, but it is repeated along the horizontal axis multiple times (twice if I use half the original resolution, three times if I use a third, etc.).
How can I correctly downsample (subsample is actually a better description) the video?
My final solution was letting the encoder (x264VFW in my case) do the downsampling. The real bottleneck turned out to be the copying of the array, which was solved by giving the encoder a pointer to where the array was in managed memory (using a GCHandle).
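For reference, the horizontal repetition in the question's loop comes from using the areal ratio of the two buffer lengths as a one-dimensional step. A correct nearest-neighbor subsample has to step per axis. A hedged sketch (method and parameter names are made up):

```csharp
using System;

static class Subsampler
{
    // Nearest-neighbor subsample of a packed pixel buffer.
    // "factor" is the linear reduction per axis (2 = half width AND half height),
    // not full.Length / small.Length, which is the square of that.
    public static byte[] Subsample(byte[] full, int srcWidth, int srcHeight,
                                   int bpp, int factor)
    {
        int dstWidth = srcWidth / factor;
        int dstHeight = srcHeight / factor;
        var small = new byte[dstWidth * dstHeight * bpp];
        for (int y = 0; y < dstHeight; y++)
        {
            for (int x = 0; x < dstWidth; x++)
            {
                // Pick every factor-th pixel in both dimensions.
                int srcIndex = (y * factor * srcWidth + x * factor) * bpp;
                int dstIndex = (y * dstWidth + x) * bpp;
                Array.Copy(full, srcIndex, small, dstIndex, bpp);
            }
        }
        return small;
    }
}
```

The nested loop keeps the row stride of the source buffer intact, which is exactly what the single flat loop in the question loses.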

C# "Parameter is not valid." creating new bitmap

If I try to create a bitmap bigger than 19000 px I get the error: "Parameter is not valid."
How can I work around this?

System.Drawing.Bitmap myimage = new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660
megabytes of memory.
19000 pixels square, at 32bpp, would require 11,552,000,000 bits (about 1.34 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in System.Drawing.Bitmap would add to that. Going up to 20k pixels square at the same color depth would require about 1.5 GB just for the raw pixel memory. In a single object, you are using roughly two-thirds of the space reserved for the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using roughly two-thirds of the max size of a single object.
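The quoted formula is easy to check in code (a throwaway helper, not from the original answer):

```csharp
static class RasterMath
{
    // bit-depth * width * height / 8 = raw raster size in bytes
    public static long RawBitmapBytes(long width, long height, int bitsPerPixel)
    {
        return width * height * bitsPerPixel / 8;
    }
}

// RasterMath.RawBitmapBytes(19000, 19000, 32) -> 1,444,000,000 bytes (~1.34 GB)
// RasterMath.RawBitmapBytes(20000, 20000, 32) -> 1,600,000,000 bytes (~1.49 GB)
```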
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
Set the PixelFormat when you create the bitmap, like:

new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)

and with the exact numbers above, it works for me. This may partly solve the problem.
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail. The main reasons are GDI+ limits in CreateBitmap. System.Drawing.Bitmap, internally, uses the GDI native API when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
I got this error when opening a TIF file; the problem was being unable to open the CMYK color space (after changing the color space I didn't get an error).
So I used the taglib library to get the image dimensions instead.
Code sample:
try
{
    var image = new System.Drawing.Bitmap(filePath);
    return string.Format("{0}px by {1}px", image.Width, image.Height);
}
catch (Exception)
{
    try
    {
        TagLib.File file = TagLib.File.Create(filePath);
        return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
    }
    catch (Exception)
    {
        return "";
    }
}

What quality level does Image.Save() use for jpeg files?

I just got a real surprise when I loaded a jpg file and turned around and saved it with a quality of 100 and the size was almost 4x the original. To further investigate I open and saved without explicitly setting the quality and the file size was exactly the same. I figured this was because nothing changed so it's just writing the exact same bits back to a file. To test this assumption I drew a big fat line diagonally across the image and saved again without setting quality (this time I expected the file to jump up because it would be "dirty") but it decreased ~10Kb!
At this point I really don't understand what is happening when I simply call Image.Save() w/out specifying a compression quality. How is the file size so close (after the image is modified) to the original size when no quality is set yet when I set quality to 100 (basically no compression) the file size is several times larger than the original?
I've read the documentation on Image.Save() and it's lacking any detail about what is happening behind the scenes. I've googled every which way I can think of but I can't find any additional information that would explain what I'm seeing. I have been working for 31 hours straight so maybe I'm missing something obvious ;0)
All of this has come about while I implement some library methods to save images to a database. I've overloaded our "SaveImage" method to allow explicitly setting a quality and during my testing I came across the odd (to me) results explained above. Any light you can shed will be appreciated.
Here is some code that will illustrate what I'm experiencing:
string filename = @"C:\temp\image testing\hh.jpg";
string destPath = @"C:\temp\image testing\";
using (Image image = Image.FromFile(filename))
{
    ImageCodecInfo codecInfo = ImageUtils.GetEncoderInfo(ImageFormat.Jpeg);

    // Set the quality
    EncoderParameters parameters = new EncoderParameters(1);

    // Quality: 10
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 10L);
    image.Save(destPath + "10.jpg", codecInfo, parameters);

    // Quality: 75
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 75L);
    image.Save(destPath + "75.jpg", codecInfo, parameters);

    // Quality: 100
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 100L);
    image.Save(destPath + "100.jpg", codecInfo, parameters);

    // Default quality
    image.Save(destPath + "default.jpg", ImageFormat.Jpeg);

    // Big line across image
    using (Graphics g = Graphics.FromImage(image))
    using (Pen pen = new Pen(Color.Red, 50F))
    {
        g.DrawLine(pen, 0, 0, image.Width, image.Height);
    }
    image.Save(destPath + "big red line.jpg", ImageFormat.Jpeg);
}

public static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    return ImageCodecInfo.GetImageEncoders().ToList().Find(delegate(ImageCodecInfo codec)
    {
        return codec.FormatID == format.Guid;
    });
}
Using Reflector, it turns out Image.Save() boils down to the GDI+ function GdipSaveImageToFile, with encoderParams set to NULL. So I think the question is really what the JPEG encoder does when it gets a null encoderParams. 75% has been suggested here, but I can't find any solid reference.
EDIT: You could probably find out for yourself by running your program above for quality values of 1..100 and comparing each output with the JPEG saved at the default quality (using, say, fc.exe /B).
IIRC, it is 75%, but I don't recall where I read this.
I don't know much about the Image.Save method, but I can tell you that adding that fat line would logically reduce the size of the JPEG. This is due to the way a JPEG is saved (and encoded).
The thick solid line makes for a very simple and smaller encoding (if I remember correctly, this is relevant mostly after the discrete cosine transform), so the modified image can be stored using less data (bytes).
jpg encoding steps
Regarding the changes in size (without the added line), I'm not sure which image you reopened and resaved:
To further investigate I open and saved without explicitly setting the quality and the file size was exactly the same
If you opened the old (original, normal-size) image and resaved it, then maybe the default compression and the original image's compression are the same.
If you opened the new (4x larger) image and resaved it, then maybe the default compression for the save method is derived from the image (as it was when loaded).
Again, I don't know the save method, so I'm just throwing out ideas (maybe they'll give you a lead).
When you save an image as a JPEG file with a quality level below 100%, you are introducing artefacts into the saved image as a side-effect of the compression process. This is why re-saving the image at 100% actually increases the size of your file beyond the original: ironically, there is now more information present in the bitmap.
This is also why you should always attempt to save in a non-lossy format (such as PNG) if you intend to do any edits to your file afterwards; otherwise you'll degrade the quality of the output through multiple lossy transformations.

Image manipulation in C#

I am loading a JPG image from hard disk into a byte[]. Is there a way to resize the image (reduce resolution) without the need to put it in a Bitmap object?
thanks
There are always ways, but whether they are better... A JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to look into understanding the JPEG spec, creating some kind of parser, etc. There may be shortcuts that can be used without fully interpreting the original JPG, but I think it would be a bad idea.
Oh, and don't forget there are different file formats for JPG (JFIF and EXIF) that you will need to understand...
I'd think very hard before avoiding the objects that are specifically designed for the sort of thing you are trying to do.
A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, and it does a fine job decoding .jpeg files. The result is a Bitmap object; you can't get around that.
And it supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side-effect of the Graphics.InterpolationMode. It tries to improve the appearance of the reduced image by running the pixels through a filter; the bicubic filter does an excellent job of it.
That looks great to the human eye, but doesn't look so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. These blended pixel values, however, make it harder for the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower-quality filter. It produces a poorer image, but one that is easier to compress. I doubt you'll appreciate the result though.
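A sketch of that tinkering (file paths are placeholders); the only change from a typical resize is the InterpolationMode assignment:

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

// Sketch: resize to half size with a cheap filter that is easier on the JPEG encoder.
using (var src = new Bitmap(@"C:\temp\input.jpg"))
using (var dst = new Bitmap(src.Width / 2, src.Height / 2))
using (var g = Graphics.FromImage(dst))
{
    // NearestNeighbor avoids blended pixel values: blockier, but compresses smaller.
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.DrawImage(src, 0, 0, dst.Width, dst.Height);
    dst.Save(@"C:\temp\output.jpg", ImageFormat.Jpeg);
}
```

Swapping in InterpolationMode.HighQualityBicubic gives the opposite trade-off: the best-looking reduction, usually at the largest file size.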
Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent-quality resizing involves using an interpolation/extrapolation algorithm; it can't just be "pick out every n pixels", unless you can settle for nearest neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height)
{
    using (var inStream = new MemoryStream(data))
    using (var outStream = new MemoryStream())
    using (var bmp = System.Drawing.Bitmap.FromStream(inStream))
    using (var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero))
    {
        th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        return outStream.ToArray();
    }
}
