Resize bitmap image - C#

I want the saved image to be smaller. How can I resize it?
I use this code for rendering the image:
Size size = new Size(surface.Width, surface.Height);
surface.Measure(size);
surface.Arrange(new Rect(size));
// Create a render bitmap and push the surface to it
RenderTargetBitmap renderBitmap =
    new RenderTargetBitmap(
        (int)size.Width,
        (int)size.Height, 96d, 96d,
        PixelFormats.Default);
renderBitmap.Render(surface);
BmpBitmapEncoder encoder = new BmpBitmapEncoder();
// push the rendered bitmap to it
encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
// save the data to the stream
encoder.Save(outStream);

public static Bitmap ResizeImage(Bitmap imgToResize, Size size)
{
    try
    {
        Bitmap b = new Bitmap(size.Width, size.Height);
        using (Graphics g = Graphics.FromImage((Image)b))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            g.DrawImage(imgToResize, 0, 0, size.Width, size.Height);
        }
        return b;
    }
    catch
    {
        Console.WriteLine("Bitmap could not be resized");
        return imgToResize;
    }
}

The shortest way to resize a Bitmap is to pass it to a Bitmap-constructor together with the desired size (or width and height):
bitmap = new Bitmap(bitmap, width, height);
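Note that this constructor creates a new Bitmap by scaling the source, so dispose of the original if you no longer need it. A minimal sketch, assuming placeholder file names:
// A minimal sketch; "photo.bmp" and the half-size target are placeholder values.
using (Bitmap original = (Bitmap)System.Drawing.Image.FromFile("photo.bmp"))
using (Bitmap resized = new Bitmap(original, original.Width / 2, original.Height / 2))
{
    // The constructor copies and scales the pixels, so the source can be released afterwards.
    resized.Save("photo_small.bmp");
}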

Does your "surface" visual have scaling capability? You can wrap it in a Viewbox if not, then render the Viewbox at the size you want.
When you call Measure and Arrange on the surface, you should provide the size you want the bitmap to be.
To use the Viewbox, change your code to something like the following:
Viewbox viewbox = new Viewbox();
Size desiredSize = new Size(surface.Width / 2, surface.Height / 2);
viewbox.Child = surface;
viewbox.Measure(desiredSize);
viewbox.Arrange(new Rect(desiredSize));
RenderTargetBitmap renderBitmap =
    new RenderTargetBitmap(
        (int)desiredSize.Width,
        (int)desiredSize.Height, 96d, 96d,
        PixelFormats.Default);
renderBitmap.Render(viewbox);
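From there the encoding step is the same as in your original code; a minimal continuation of the snippet above, assuming outStream is the same output stream you already use:
// Encode the downsized render target exactly as before (outStream as in the question's code).
BmpBitmapEncoder encoder = new BmpBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
encoder.Save(outStream);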

Related

WPF .png overlay using DrawingVisual

I'm using this example, which I found around here, to overlay two .png images and then save the result as a third .png image.
The input images are:
The output image should (in my dreams) be:
And instead I get this:
Here is the code:
public static void Test()
{
    // Loads the images to tile (no need to specify PngBitmapDecoder, the correct decoder is automatically selected)
    BitmapFrame frame1 = BitmapDecoder.Create(new Uri(@"D:\_tmp_\MaxMara\Test\Monoscope.png"), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
    BitmapFrame frame2 = BitmapDecoder.Create(new Uri(@"D:\_tmp_\MaxMara\Test\OverlayFrame.png"), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
    // Gets the size of the images (I assume each image has the same size)
    int imageWidth = 1920;
    int imageHeight = 1080;
    // Draws the images into a DrawingVisual component
    DrawingVisual drawingVisual = new DrawingVisual();
    using (DrawingContext drawingContext = drawingVisual.RenderOpen())
    {
        drawingContext.DrawImage(frame1, new Rect(0, 0, imageWidth, imageHeight));
        drawingContext.DrawImage(frame2, new Rect(0, 0, imageWidth, imageHeight));
    }
    // Converts the Visual (DrawingVisual) into a BitmapSource
    RenderTargetBitmap bmp = new RenderTargetBitmap(imageWidth, imageHeight, 300, 300, PixelFormats.Pbgra32);
    bmp.Render(drawingVisual);
    // Creates a PngBitmapEncoder and adds the BitmapSource to the frames of the encoder
    PngBitmapEncoder encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bmp));
    // Saves the image into a file using the encoder
    using (Stream stream = File.Create(@"D:\_tmp_\MaxMara\Test\Result.png"))
        encoder.Save(stream);
}
Note: if I use 100 DPI, as in:
RenderTargetBitmap bmp = new RenderTargetBitmap(imageWidth, imageHeight, 100, 100, PixelFormats.Pbgra32);
I get the correct result (meaning: the result I want).
I don't understand why. All the images are 300 DPI.
Can anyone shed some light on the topic please?
Thank you for your time
Orf
Do not use the PixelWidth and PixelHeight (i.e. your imageWidth and imageHeight values) of the bitmaps for drawing them into a DrawingContext.
Use their Width and Height values instead, because these give the bitmap size in device-independent units (1/96th inch per unit) as required for drawing.
using (var drawingContext = drawingVisual.RenderOpen())
{
    drawingContext.DrawImage(frame1, new Rect(0, 0, frame1.Width, frame1.Height));
    drawingContext.DrawImage(frame2, new Rect(0, 0, frame2.Width, frame2.Height));
}
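If the goal is a genuine 300 DPI output, remember that a BitmapSource's Width/Height are device-independent units (1/96 inch), while PixelWidth/PixelHeight equal Width/Height * dpi / 96. A minimal sketch of the arithmetic, reusing frame1 and drawingVisual from above (bmp300 is a name introduced here for illustration):
// Size the render target from the device-independent size so nothing is clipped or padded.
double dpi = 300d;
int pixelWidth = (int)Math.Round(frame1.Width * dpi / 96d);    // back to 1920 for a 1920 px, 300 DPI source
int pixelHeight = (int)Math.Round(frame1.Height * dpi / 96d);
RenderTargetBitmap bmp300 = new RenderTargetBitmap(pixelWidth, pixelHeight, dpi, dpi, PixelFormats.Pbgra32);
bmp300.Render(drawingVisual);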

Convert Canvas to ImageSource

I'm attempting to convert a Canvas to an ImageSource for use as an OpacityMask. I want to keep it in memory rather than save it as a file, but I'm having trouble. Below is my code; I think I'm going about it wrong!
Really, I need to get the image information as a Base64String, so somewhere along the way I need to convert the RenderTargetBitmap!
public BitmapSource ExportToPng(Uri path, Canvas surface)
{
    BitmapEncoder encoder = new PngBitmapEncoder();
    System.IO.MemoryStream myStream = new System.IO.MemoryStream();
    // Save current canvas transform
    Transform transform = surface.LayoutTransform;
    // reset current transform (in case it is scaled or rotated)
    surface.LayoutTransform = null;
    // Get the size of canvas
    System.Windows.Size size = new System.Windows.Size(surface.ActualWidth, surface.ActualHeight);
    // Measure and arrange the surface
    // VERY IMPORTANT
    surface.Measure(size);
    surface.Arrange(new Rect(size));
    // Create a render bitmap and push the surface to it
    RenderTargetBitmap renderBitmap =
        new RenderTargetBitmap(
            (int)size.Width,
            (int)size.Height,
            96d,
            96d,
            PixelFormats.Pbgra32);
    renderBitmap.Render(surface);
    // push the rendered bitmap to it
    encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
    // save the data to the stream
    encoder.Save(myStream);
    // Restore previously saved layout
    surface.LayoutTransform = transform;
    var sr = new System.IO.StreamReader(myStream);
    var myStr = sr.ReadToEnd();
    var bytes = Convert.FromBase64String(myStr);
    // Save to memory
    /*Bitmap pg = new Bitmap("525, 350");
    Graphics gr = Graphics.FromImage(pg);
    gr.FillRectangle(new SolidBrush(System.Drawing.Color.FromArgb(255, 255, 255, 255)), 0, 0, (float)size.Width, (float)size.Height);
    gr.DrawImage(System.Drawing.Bitmap.FromStream(myStream), 0, 0);*/
    return BitmapFromBase64(myStr);
}

public static BitmapSource BitmapFromBase64(string base64String)
{
    var bytes = Convert.FromBase64String(base64String);
    using (var stream = new System.IO.MemoryStream(bytes))
    {
        return BitmapFrame.Create(stream,
            BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
    }
}
Edit:
Just found another possible way; however, this creates a DrawingVisual, and I need to convert that to an ImageBrush.
// Create a DrawingVisual that contains a rectangle.
private DrawingVisual CreateDrawingVisualRectangle(List<Rectangle> rectangles)
{
    DrawingVisual drawingVisual = new DrawingVisual();
    // Retrieve the DrawingContext in order to create new drawing content.
    DrawingContext drawingContext = drawingVisual.RenderOpen();
    // Create a rectangle and draw it in the DrawingContext.
    foreach (Rectangle x in rectangles)
    {
        Rect rect = new Rect(new System.Windows.Point(x.X, x.Y), new System.Windows.Size(x.Width, x.Height));
        drawingContext.DrawRectangle(System.Windows.Media.Brushes.Black, (System.Windows.Media.Pen)null, rect);
    }
    // Persist the drawing content.
    drawingContext.Close();
    return drawingVisual;
}
A UIElement accepts any Brush as its OpacityMask. You can simply create a VisualBrush from your Canvas, since the base class of every UIElement is System.Windows.Media.Visual.
Canvas c = new Canvas();
element.OpacityMask = new VisualBrush( c );
Regards, Snowball
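For the Base64 requirement mentioned in the question: don't read the PNG stream back through a StreamReader; Base64-encode the raw bytes instead. A minimal sketch, assuming renderBitmap is the RenderTargetBitmap produced inside ExportToPng:
using (var ms = new System.IO.MemoryStream())
{
    // Encode the rendered canvas to PNG bytes, then to Base64.
    var pngEncoder = new PngBitmapEncoder();
    pngEncoder.Frames.Add(BitmapFrame.Create(renderBitmap));
    pngEncoder.Save(ms);
    string base64 = Convert.ToBase64String(ms.ToArray());
    // This string round-trips through the BitmapFromBase64 helper above.
}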

Resizing Bitmap to increase image quality

I have an application that captures and saves a picture of an open window. Unfortunately, when the dimensions of the image and the window match, the image quality is unacceptably low. I've increased the size of the RenderTargetBitmap, which has improved the image quality; however, now the dimensions of the resulting image are too large for my purposes.
My question is this: Is there any way to resize the resulting RenderTargetBitmap to the original dimensions of the window? And is it possible to do this without a corresponding loss in image quality? Here's the code I have now.
public static RenderTargetBitmap GetReportImage(Grid view)
{
    // Increased dimensions to increase image quality
    Size size = new Size(view.ActualWidth * 2, view.ActualHeight * 2);
    // The dimensions that I want to convert back to.
    _size = new Size(view.ActualWidth, view.ActualHeight);
    if (size.IsEmpty)
        return null;
    RenderTargetBitmap result = new RenderTargetBitmap((int)size.Width, (int)size.Height, 96, 96, PixelFormats.Pbgra32);
    DrawingVisual drawingVisual = new DrawingVisual();
    using (DrawingContext context = drawingVisual.RenderOpen())
    {
        context.DrawRectangle(new VisualBrush(view), null, new Rect(new Point(), size));
        context.Close();
    }
    result.Render(drawingVisual);
    // Can I resize result to _size.Width and _size.Height?
    return result;
}
Thanks in advance for any help.
To resize back you can do:
var resized = new TransformedBitmap(bmp,
    new ScaleTransform((double)newWidth / bmp.PixelWidth,    // cast avoids integer division
                       (double)newHeight / bmp.PixelHeight)); // if newWidth/newHeight are ints
You can also try to scale both dimensions and dpi, like this:
public static RenderTargetBitmap RenderBitmap(FrameworkElement visualToRender, double scale)
{
    RenderTargetBitmap bmp = new RenderTargetBitmap
    (
        (int)(scale * (visualToRender.ActualWidth + 1)),
        (int)(scale * (visualToRender.ActualHeight + 1)),
        scale * 96,
        scale * 96,
        PixelFormats.Pbgra32
    );
    bmp.Render(visualToRender);
    return bmp;
}
Updated to answer your comment. I suppose you need to save the result somewhere, probably as a JPEG or PNG. Both RenderTargetBitmap and TransformedBitmap inherit from BitmapSource, so you can encode either of them to an image file like this:
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bmp));
using (Stream s = File.Create("test.png"))
{
    encoder.Save(s);
}
Here "bmp" can be both RenderTargetBitmap and TrasnformedBitmap

PNG size growth when using RenderTargetBitmap and PngBitmapEncoder

I'm using a method to merge some tile images into a single image. But if I apply it to four PNG images of about 30 KB each, the resulting PNG image is around 500 KB (roughly 4x larger than what I expect).
This is some part of the code that I'm using (suggested by Cédric Bignon):
BitmapFrame frame1 = BitmapDecoder.Create(new Uri(path1), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
BitmapFrame frame2 = BitmapDecoder.Create(new Uri(path2), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
BitmapFrame frame3 = BitmapDecoder.Create(new Uri(path3), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
BitmapFrame frame4 = BitmapDecoder.Create(new Uri(path4), BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames.First();
// Gets the size of the images (I assume each image has the same size)
int imageWidth = frame1.PixelWidth;
int imageHeight = frame1.PixelHeight;
// Draws the images into a DrawingVisual component
DrawingVisual drawingVisual = new DrawingVisual();
using (DrawingContext drawingContext = drawingVisual.RenderOpen())
{
    drawingContext.DrawImage(frame1, new Rect(0, 0, imageWidth, imageHeight));
    drawingContext.DrawImage(frame2, new Rect(imageWidth, 0, imageWidth, imageHeight));
    drawingContext.DrawImage(frame3, new Rect(0, imageHeight, imageWidth, imageHeight));
    drawingContext.DrawImage(frame4, new Rect(imageWidth, imageHeight, imageWidth, imageHeight));
}
// Converts the Visual (DrawingVisual) into a BitmapSource
RenderTargetBitmap bmp = new RenderTargetBitmap(imageWidth * 2, imageHeight * 2, 96, 96, PixelFormats.Pbgra32);
bmp.Render(drawingVisual);
// Creates a PngBitmapEncoder and adds the BitmapSource to the frames of the encoder
PngBitmapEncoder encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bmp));
// Saves the image into a file using the encoder
using (Stream stream = File.OpenWrite(pathTileImage))
    encoder.Save(stream);
Does anyone know what is going on?
It's difficult to say without knowing the original images. PNG supports several colour models and compression options. If, for example, the four original images have (different) palettes, the composition will have to resort to a true-colour format, and the total size will probably be far more than four times the original sizes.
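One way to verify this is to look at the pixel formats involved. A minimal sketch, reusing frame1, bmp, and encoder from the code above; the FormatConvertedBitmap step is only worth trying if the transparency of the merged tiles is not needed, and its Frames.Add line would replace the existing one:
// Inspect what is actually being combined and produced.
Console.WriteLine(frame1.Format);   // e.g. Indexed8 for a palettized source PNG
Console.WriteLine(bmp.Format);      // RenderTargetBitmap is always Pbgra32 (32 bits per pixel)

// Dropping the unused alpha channel before encoding may shrink the output somewhat.
var converted = new FormatConvertedBitmap(bmp, PixelFormats.Bgr24, null, 0);
encoder.Frames.Add(BitmapFrame.Create(converted));   // instead of adding bmp directly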

RenderTargetBitmap is blurry

Hi, I'm creating an image in memory from a Canvas using a PngBitmapEncoder.
public void CaptureGraphic()
{
    Canvas canvas = new Canvas();
    canvas.SnapsToDevicePixels = true;
    canvas.Height = IMAGEHEIGHT;
    canvas.Width = IMAGEWIDTH;
    Draw(canvas);
    canvas.Arrange(new Rect(0, 0, IMAGEWIDTH, IMAGEHEIGHT));
    member.MemberImage = GetPngFromUIElement(canvas);
}

public static System.Drawing.Image GetPngFromUIElement(Canvas source)
{
    int width = (int)source.ActualWidth;
    int height = (int)source.ActualHeight;
    if (width == 0)
        width = (int)source.Width;
    if (height == 0)
        height = (int)source.Height;
    RenderTargetBitmap bitmap = new RenderTargetBitmap(width, height, 96d, 96d, PixelFormats.Pbgra32);
    bitmap.Render(source);
    PngBitmapEncoder enc = new PngBitmapEncoder();
    enc.Interlace = PngInterlaceOption.Off;
    enc.Frames.Add(BitmapFrame.Create(bitmap));
    System.IO.MemoryStream ms = new System.IO.MemoryStream();
    enc.Save(ms);
    System.Drawing.Image image = System.Drawing.Image.FromStream(ms);
    ms.Flush();
    ms.Dispose();
    return image;
}
Then I'm sending the image to the printer using the GDI+ DrawImage() method, but the printed result is blurry.
I've tried matching the original canvas size to the printed size to avoid any scaling, and equally I've tried making the original considerably bigger so the scaled image retains its quality, but the final printed image is always blurred.
Can anyone offer any suggestions or alternatives? I have a considerable GDI+ print routine already set up, and moving to WPF documents is not an option just yet.
Thanks
You're capturing the bitmap at 96 DPI. Instead of using 96 in the constructor of the RenderTargetBitmap, try to match the DPI of your printer output. Alternatively, you could do the math, calculate the difference in width/height, and rescale the image on the report accordingly (the result is that the image on the report will appear smaller).
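A minimal sketch of that suggestion, assuming a 300 DPI printer and reusing the width, height, and source variables from GetPngFromUIElement above (hiResBitmap is a name introduced here):
double printerDpi = 300d;                               // assumed printer resolution
int pixelW = (int)(width * printerDpi / 96d);           // grow the pixel size together with the DPI
int pixelH = (int)(height * printerDpi / 96d);
RenderTargetBitmap hiResBitmap = new RenderTargetBitmap(pixelW, pixelH, printerDpi, printerDpi, PixelFormats.Pbgra32);
hiResBitmap.Render(source);
// The encoded PNG keeps the 300 DPI metadata, so GDI+ DrawImage at the original location
// should print it at the same physical size, just with more detail.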
I had the same blurry result and have come up with the following piece of code, which applies the same idea with the offset, but using the Offset property on the DrawingVisual (since I was using DrawDrawing, which doesn't have an overload with the offset argument):
public static Image ToBitmap(Image source)
{
    var dv = new DrawingVisual();
    // Blur workaround
    dv.Offset = new Vector(0.5, 0.5);
    using (var dc = dv.RenderOpen())
        dc.DrawDrawing(((DrawingImage)source.Source).Drawing);
    var bmp = new RenderTargetBitmap((int)source.Width, (int)source.Height, 96, 96, PixelFormats.Pbgra32);
    bmp.Render(dv);
    var bitMapImage = new Image();
    bitMapImage.Source = bmp;
    bitMapImage.Width = source.Width;
    bitMapImage.Height = source.Height;
    return bitMapImage;
}
I think I've found the answer.
http://www.charlespetzold.com/blog/2007/12/High-Resolution-Printing-of-WPF-3D-Visuals.html
I just needed to scale up the image size along with the DPI, and voila (at the cost of a massively increased file size).
I had the same problem. To avoid blurry text and lines, I had to draw everything with an offset of 0.5 in the X and Y directions. For example, a horizontal line could be
drawingContext.DrawLine(pen, new Point(10.5,10.5), new Point(100.5,10.5));
In my case, I was rendering to a RenderTargetBitmap in a different thread to improve the UI performance. The rendered bitmap is then frozen and drawn onto the UI using
drawingContext.DrawImage(bitmap, new Rect(0.5, 0, bitmap.Width, bitmap.Height));
Here, I needed an additional offset of 0.5, but (strangely) only in the X direction, so that the rendered image no longer looked blurry.
