Rasterize a WPF TextBlock into a bitmap via DrawingContext - C#

My program is more or less a copy of MS Paint and PicPick, and one of its features is rasterizing the selected object, such as a TextBlock or a Shape.
Each selectable object is a ContentControl (so it can be resized and moved with an adorner) that contains one TextBlock plus one Shape:
ContentControl (able to resize, rotate, move)
└─> TextBlock (bold, italic, V-align, H-align, word wrap...)
└─> Shape (can be a triangle, rectangle etc...)
Converting the Shape so it is drawn with a DrawingContext instead of being rendered on the Canvas was not hard:
var SH = CC.GetShape();
var TB = CC.GetTextBlock();

var visual = new DrawingVisual();
Geometry geo = null;
System.Windows.Media.Pen pen = null;
System.Windows.Media.Brush brush = null;

if (SH != null)
{
    geo = SH.RenderedGeometry; // shape to geometry
    if (geo == null)
        return;
    pen = new System.Windows.Media.Pen(SH.Stroke, SH.StrokeThickness);
    brush = SH.Fill;
}

using (var dc = visual.RenderOpen())
{
    // Draw the background first.
    dc.DrawImage(first, new Rect(0, 0, first.Width, first.Height));
    dc.PushTransform(new TranslateTransform(left, top));

    // Draw the shape.
    if (SH != null && geo != null)
        dc.DrawGeometry(brush, pen, geo);
}
But when drawing the TextBlock with the DrawingContext, I referred to the link below to calculate the position of the text:
Vertical alignment with DrawingContext.DrawText
The problem is when the TextBlock has multiple lines or word-wrapped text.
(screenshot of my program)
if (TB.Text.Equals(string.Empty) == false)
{
    var typeface = new Typeface(CC.txtSetting.fontFamily,
                                CC.txtSetting.fontStyle,
                                CC.txtSetting.fontWeight,
                                FontStretches.Normal);

    var formattedText = new FormattedText(TB.Text
        , CultureInfo.CurrentCulture
        , FlowDirection.LeftToRight
        , typeface
        , CC.txtSetting.fontSize
        , new SolidColorBrush(CC.txtSetting.fontColor));

    double centerX = CC.ActualWidth / 2;
    double centerY = CC.ActualHeight / 2;
    double txtPositionX = 0.0;
    double txtPositionY = 0.0;

    if (TB.TextAlignment == TextAlignment.Left)
    {
        txtPositionX = 1.0;
    }
    else if (TB.TextAlignment == TextAlignment.Center)
    {
        txtPositionX = centerX - formattedText.WidthIncludingTrailingWhitespace / 2;
    }
    else if (TB.TextAlignment == TextAlignment.Right)
    {
        txtPositionX = CC.Width - formattedText.WidthIncludingTrailingWhitespace - 1.0;
    }

    if (TB.VerticalAlignment == VerticalAlignment.Top)
    {
        txtPositionY = 1.0;
    }
    else if (TB.VerticalAlignment == VerticalAlignment.Center)
    {
        txtPositionY = centerY - formattedText.Height / 2;
    }
    else if (TB.VerticalAlignment == VerticalAlignment.Bottom)
    {
        txtPositionY = CC.Height - formattedText.Height - 1.0;
    }

    var ptLocation = new System.Windows.Point(txtPositionX, txtPositionY);
    dc.DrawText(formattedText, ptLocation);
}
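As an aside, FormattedText can do the wrapping and horizontal alignment itself if it is constrained to the TextBlock's layout size. A minimal sketch, assuming the same CC/TB/formattedText variables as above and that TB.ActualWidth/ActualHeight reflect the layout being copied:
// Sketch only: let FormattedText wrap and align the way the TextBlock does.
// MaxTextWidth, MaxTextHeight, TextAlignment and Trimming are standard FormattedText properties.
formattedText.MaxTextWidth = TB.ActualWidth;     // wrap lines at the TextBlock's width
formattedText.MaxTextHeight = TB.ActualHeight;   // clip to the TextBlock's height
formattedText.TextAlignment = TB.TextAlignment;  // left/center/right handled internally
formattedText.Trimming = TextTrimming.None;

// With the width constrained, only the vertical offset still has to be computed,
// e.g. (CC.ActualHeight - formattedText.Height) / 2 for centered text.
dc.DrawText(formattedText, new System.Windows.Point(0, txtPositionY));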
Additionally, the TextBlock is wrapped by the ContentControl, so the result varies a lot depending on which TextBlock properties the user changes. I doubt it is possible to handle every combination by hand, so I'm considering alternative ways to draw:
1. Draw with GDI+ instead of the DrawingContext (still uncertain).
2. Use the DrawingContext while the user is editing the text, so it looks the same before and after rasterizing.
3. Directly convert/capture the TextBlock into an image or a Geometry (this would be the best way, if it's possible). For example, to get an image source with a shader effect applied, I did something like this, so there is probably a way: How can I get the object of effect-applied source
You can also refer to this program: http://ngwin.com/picpick
(screenshot of PicPick)
Any better ideas? Thank you in advance.

I made it!
I could capture the particular control with RenderTargetBitmap, since ContentControl is a Visual element. CustomControl is my control derived from ContentControl.
public static BitmapSource ControlToBitmap(CustomControl control)
{
    int W = (int)control.ActualWidth;
    int H = (int)control.ActualHeight;

    RenderTargetBitmap renderBitmap = new RenderTargetBitmap(
        W, H,
        96d, 96d, PixelFormats.Pbgra32);

    // Needed, otherwise the image output is black.
    control.Measure(new System.Windows.Size(W, H));
    control.Arrange(new Rect(new System.Windows.Size(W, H)));

    renderBitmap.Render(control);
    var BS = RenderTargetBitmapToBitmap(renderBitmap);
    return BS;
}
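RenderTargetBitmapToBitmap is not shown here. Since RenderTargetBitmap is already a BitmapSource, one plausible implementation (an assumption, not necessarily the original) round-trips it through a PNG encoder and freezes the result so it is detached from the visual tree:
// Assumed helper: produce a frozen, reusable BitmapSource from the RenderTargetBitmap.
private static BitmapSource RenderTargetBitmapToBitmap(RenderTargetBitmap rtb)
{
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(rtb));

    using (var stream = new System.IO.MemoryStream())
    {
        encoder.Save(stream);
        stream.Position = 0;

        // OnLoad caches the pixels so the stream can be disposed safely.
        var decoded = BitmapFrame.Create(stream, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
        decoded.Freeze();
        return decoded;
    }
}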
Additionally, I had to deal with the rotation angle, because I couldn't capture a rotated control directly. My approach is:
1. Back up the angle value first.
2. Reset the control so it is not rotated (RotateTransform angle = 0.0).
3. Capture the non-rotated control to a bitmap.
4. Rotate the captured bitmap by the backed-up angle.
5. Combine both bitmaps into one.
public static void OverlayControl(ImageSource first, CustomControl CC)
{
    if (CC == null)
        return;

    var visual = new DrawingVisual();
    double left = Canvas.GetLeft(CC);
    double top = Canvas.GetTop(CC);

    // Get the control's angle.
    double rotationInDegrees = 0.0;
    RotateTransform rotation = CC.RenderTransform as RotateTransform;
    if (rotation != null) // Make sure the transform is actually a RotateTransform.
    {
        rotationInDegrees = rotation.Angle; // Back this up into a temporary variable.
        rotation.Angle = 0.0;               // Set it to 0.0 to capture properly.
    }

    var second = ControlToBitmap(CC);

    // Restore the control's original angle after the capture.
    if (rotation != null)
        rotation.Angle = rotationInDegrees;

    using (var dc = visual.RenderOpen())
    {
        // Draw the background image first.
        dc.DrawImage(first, new Rect(0, 0, first.Width, first.Height));

        // Push the rotation if the control was rotated.
        if (rotationInDegrees != 0.0)
            dc.PushTransform(new RotateTransform(rotationInDegrees, left + (CC.Width / 2), top + (CC.Height / 2)));

        // Translate by as much as the control is offset from the origin.
        dc.PushTransform(new TranslateTransform(left, top));

        // Draw the second image (captured from the control).
        dc.DrawImage(second, new Rect(0, 0, second.Width, second.Height));

        // Pop the transforms.
        dc.Pop();
    }

    var rtb = new RenderTargetBitmap((int)first.Width, (int)first.Height,
                                     96, 96, PixelFormats.Default);
    rtb.Render(visual);

    // Set the combined image as the result.
    MainWindow.VM.RenderedImage = rtb;
}
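For completeness, a call site for these helpers might look like the following; GetSelectedControl() is a hypothetical accessor, and only MainWindow.VM.RenderedImage appears in the code above:
// Hypothetical usage: flatten the currently selected control into the working image.
CustomControl selected = GetSelectedControl();          // assumed helper returning the active CustomControl
OverlayControl(MainWindow.VM.RenderedImage, selected);  // composites the control onto the background image
// The ContentControl can then be removed from the Canvas, since it is now part of the bitmap.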
Now, everything seems alright.

Related

InkCanvas not committing strokes before PreviewMouseUp when using touch

I'm using an InkCanvas with a record-as-you-draw feature.
As you can see in this gif, created by drawing with a mouse:
When the user fires a PreviewMouseDown event, I simply start the capture, frame by frame, based on a timer.
The capture is done by a simple render:
public static RenderTargetBitmap GetRender(this UIElement source, double dpi)
{
    var bounds = VisualTreeHelper.GetDescendantBounds(source);
    var scale = Math.Round(dpi / 96d, 2);
    var width = (bounds.Width + bounds.X) * scale;
    var height = (bounds.Height + bounds.Y) * scale;

    #region If no bounds
    if (bounds.IsEmpty)
    {
        var control = source as Control;
        if (control != null)
        {
            width = control.ActualWidth * scale;
            height = control.ActualHeight * scale;
        }
        bounds = new Rect(new System.Windows.Point(0d, 0d), new System.Windows.Point(width, height));
    }
    #endregion

    var rtb = new RenderTargetBitmap((int)Math.Round(width), (int)Math.Round(height), dpi, dpi, PixelFormats.Pbgra32);

    var dv = new DrawingVisual();
    using (var ctx = dv.RenderOpen())
    {
        var vb = new VisualBrush(source);
        var locationRect = new System.Windows.Point(bounds.X, bounds.Y);
        var sizeRect = new System.Windows.Size((int)Math.Round(bounds.Width), (int)Math.Round(bounds.Height));
        ctx.DrawRectangle(vb, null, new Rect(locationRect, sizeRect));
    }

    rtb.Render(dv);
    return (RenderTargetBitmap)rtb.GetAsFrozen();
}
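The timer-driven capture itself isn't shown; a minimal sketch of how it might be wired up (DispatcherTimer, the interval, and the frame list are assumptions, not the asker's actual code):
// Hypothetical wiring for the frame-by-frame capture described above.
private readonly List<RenderTargetBitmap> _frames = new List<RenderTargetBitmap>();
private DispatcherTimer _captureTimer;

private void InkCanvas_PreviewMouseDown(object sender, MouseButtonEventArgs e)
{
    _captureTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(66) }; // ~15 fps
    _captureTimer.Tick += (s, args) => _frames.Add(inkCanvas.GetRender(96d));         // uses the extension above
    _captureTimer.Start();
}

private void InkCanvas_PreviewMouseUp(object sender, MouseButtonEventArgs e)
{
    if (_captureTimer != null)
        _captureTimer.Stop();
}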
Now, the problem is that when using touch, for some reason, the strokes are not available when the render occurs. But they are being displayed normally for me:
As you can see, the recorder still captures all necessary frames, but the strokes are only "there" when the PreviewMouseUp event occurs.
What can I do to fix this issue?

Moving UWP InkStrokes for Offscreen Rendering

I am capturing InkStrokes and need to create a scaled bitmap image of the strokes in the background. The captured images need to be of uniform size regardless of how big the bounding box of the ink is.
For example, if the original ink stroke is drawn with a bounding box whose top/left is 100,100 and whose size is 200,200 on the ink canvas, I want the ink to start at 0,0 of the new rendered bitmap, which is 50,50 in size (ignore the impact of stroke width for now).
I have figured out how to scale the ink strokes (thanks, Stack Overflow) but not how to move them. Right now, it seems I have to create a bitmap the size of the InkCanvas, render the scaled ink, and then crop the bigger image to the correct size.
I've tried using InkStroke.PointTransform via
var scaleMatrix = Matrix3x2.CreateScale(scale);
scaleMatrix.Translation = -offset; // top/left of ink stroke bounding box
stroke.PointTransform = scaleMatrix;
But the coordinates do not come out correctly.
Any help much appreciated.
You can combine transformations by multiplying matrices. This works for me:
var strokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();
var boundingBox = inkCanvas.InkPresenter.StrokeContainer.BoundingRect;

var matrix1 = Matrix3x2.CreateTranslation((float)-boundingBox.X, (float)-boundingBox.Y);
var matrix2 = Matrix3x2.CreateScale(0.5f);

var builder = new InkStrokeBuilder();
var newStrokeList = new List<InkStroke>();
foreach (var stroke in strokes)
{
    newStrokeList.Add(builder.CreateStrokeFromInkPoints(stroke.GetInkPoints(), matrix1 * matrix2));
}

// Add the translated and scaled strokes to the InkCanvas.
inkCanvas.InkPresenter.StrokeContainer.AddStrokes(newStrokeList);
Maybe I was still doing something wrong, but it appears you cannot use InkStrokeBuilder.CreateStrokeFromInkPoints with more than one kind of transform. I tried all kinds of combinations/approaches, and just could not get it to work.
Here is my solution...
private static IList<InkStroke> GetScaledAndTransformedStrokes(IList<InkStroke> strokeList, float scale)
{
    var builder = new InkStrokeBuilder();
    var newStrokeList = new List<InkStroke>();
    var boundingBox = strokeList.GetBoundingBox();

    foreach (var singleStroke in strokeList)
    {
        var translateMatrix = new Matrix(1, 0, 0, 1, -boundingBox.X, -boundingBox.Y);

        var newInkPoints = new List<InkPoint>();
        var originalInkPoints = singleStroke.GetInkPoints();
        foreach (var point in originalInkPoints)
        {
            var newPosition = translateMatrix.Transform(point.Position);
            var newInkPoint = new InkPoint(newPosition, point.Pressure, point.TiltX, point.TiltY, point.Timestamp);
            newInkPoints.Add(newInkPoint);
        }

        var newStroke = builder.CreateStrokeFromInkPoints(newInkPoints, new Matrix3x2(scale, 0, 0, scale, 0, 0));
        newStrokeList.Add(newStroke);
    }

    return newStrokeList;
}
I ended up having to apply my own translate transform and then use builder.CreateStrokeFromInkPoints with a scale matrix applied to get the results I wanted. GetBoundingBox is my own extension:
public static class RectExtensions
{
    public static Rect CombineWith(this Rect r, Rect rect)
    {
        var top = (r.Top < rect.Top) ? r.Top : rect.Top;
        var left = (r.Left < rect.Left) ? r.Left : rect.Left;
        var bottom = (r.Bottom < rect.Bottom) ? rect.Bottom : r.Bottom;
        var right = (r.Right < rect.Right) ? rect.Right : r.Right;
        var newRect = new Rect(new Point(left, top), new Point(right, bottom));
        return newRect;
    }
}
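The GetBoundingBox extension referenced above is not shown. Under the assumption that it simply unions the BoundingRect of every stroke using CombineWith, it could look like this:
// Assumed implementation: union all stroke bounds into one Rect.
public static Rect GetBoundingBox(this IList<InkStroke> strokes)
{
    if (strokes == null || strokes.Count == 0)
        return Rect.Empty;

    var bounds = strokes[0].BoundingRect; // InkStroke.BoundingRect is in ink-canvas coordinates
    for (int i = 1; i < strokes.Count; i++)
        bounds = bounds.CombineWith(strokes[i].BoundingRect);
    return bounds;
}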

RenderTargetBitmap.Render() won't render the opacity mask

I'm working to make a ColorPicker for my WPF app (I know about the Extended Toolkit). I've copied the XAML from this and it's working well, but now I need to get the color the mouse is pointing at. As the hue is inside a Canvas, I've tried this:
Rectangle o = sender as Rectangle;
Canvas picker = ((Canvas)((VisualBrush)o.Fill).Visual);
Point point = e.GetPosition(picker);

RenderTargetBitmap render = new RenderTargetBitmap((int)picker.RenderSize.Width, (int)picker.RenderSize.Height, 96, 96, PixelFormats.Default);
render.Render(picker);
image.Source = render;

if ((point.X <= render.PixelWidth) && (point.Y <= render.PixelHeight))
{
    var crop = new CroppedBitmap(render, new Int32Rect((int)point.X, (int)point.Y, 1, 1));
    var pixels = new byte[4];
    crop.CopyPixels(pixels, 4, 0);

    selectedColor = new SolidColorBrush(Color.FromArgb(255, pixels[2], pixels[1], pixels[0]));
    selectedColorRect.Fill = selectedColor;
}
in my mouse move event, but render.Render(picker) won't render the opacity mask used to make the gradient.
EDIT
The left is the original rectangle and the right is the result of render.Render(picker); image.Source = render;. As you can see, the right side is not applying the opacity mask.

Issues with Rendering a Bitmap

I am currently working on a histogram renderer that renders bitmaps onto the Grasshopper canvas. There are a total of two bitmaps, both of them explained below
private readonly Bitmap _image;
and:
private readonly Bitmap _overlayedImage;
The Bitmap instance with the name _image looks like this:
(image: http://puu.sh/6mUk4/20b879710a.png)
While the Bitmap instance with the name _overlayedImage looks like this:
Basically, _overlayedImage is a bitmap created from the _image bitmap; as the name suggests, it overlays the text (that you can see in the image I posted) and adds a black background. This is how it is assigned:
_overlayedImage = overlayBitmap(_image, width * 3, height * 3, times, dates, colors);
(The * 3 is used to resize the image.)
The issue I currently have is multi-fold.
Using this method, I am able to render _image onto the canvas. The code is like this:
protected override void Render(Grasshopper.GUI.Canvas.GH_Canvas canvas, Graphics graphics, Grasshopper.GUI.Canvas.GH_CanvasChannel channel)
{
    // Render the default component.
    base.Render(canvas, graphics, channel);

    // Now render our bitmap if it exists.
    if (channel == Grasshopper.GUI.Canvas.GH_CanvasChannel.Wires)
    {
        var comp = Owner as KT_HeatmapComponent;
        if (comp == null)
            return;

        List<HeatMap> maps = comp.CachedHeatmaps;
        if (maps == null)
            return;
        if (maps.Count == 0)
            return;

        int x = Convert.ToInt32(Bounds.X + Bounds.Width / 2);
        int y = Convert.ToInt32(Bounds.Bottom + 10);

        for (int i = 0; i < maps.Count; i++)
        {
            Bitmap image = maps[i].overlayedImage;
            if (image == null)
                continue;

            Rectangle mapBounds = new Rectangle(x, y, maps[i].Width, maps[i].Height);
            mapBounds.X -= mapBounds.Width / 2;

            Rectangle edgeBounds = mapBounds;
            GH_Capsule capsule = GH_Capsule.CreateCapsule(edgeBounds, GH_Palette.Normal);
            capsule.Render(graphics, Selected, false, false);
            capsule.Dispose();

            graphics.DrawImage(image, mapBounds);
            graphics.DrawRectangle(Pens.Black, mapBounds);
            // some graphics interpolation and bicubic methods
            y = edgeBounds.Bottom - (mapBounds.Height) - 4;
        }
    }
}
For reference, comp.CachedHeatmaps is:
private readonly List<HeatMap> _maps = new List<HeatMap>();

internal List<HeatMap> CachedHeatmaps
{
    get { return _maps; }
}
However, whenever I try to use Render() on the _overlayedImage, I am unable to do so.
I have isolated the issue to the Render() method, and it seems this line is the main problem:
Rectangle mapBounds = new Rectangle(x, y, maps[i].Width, maps[i].Height);
maps[i].Width and maps[i].Height return 1 and 100 respectively, which happen to be the dimensions of the legend (100 pixels vertically and 1 pixel horizontally).
I apologize for the decently long question, but I don't think I could have explained it any other way.
It turns out there were two issues:
In my main method I used _overlayedImage.Dispose(), which effectively destroyed the image before it was even displayed on the canvas.
Also, my issue isolation was correct. Changing that line as follows made it render correctly:
Rectangle mapBounds = new Rectangle(x, y, maps[i].overlayedImage.Width, maps[i].overlayedImage.Height);
Resulting component: (screenshot)

Canvas background for retrieving color

I have a Canvas with a background set to a LinearGradientBrush. How do I then extract the color from this background at a particular mouse point (x, y)?
I can do this fine with a BitmapImage, as that deals with pixels; I'm not sure about a Canvas though.
Thanks greatly in advance,
U.
The code posted by Ray Burns didn't work for me, but it did lead me down the right path. After some research and experimentation I traced the problems to the bitmap.Render(...) implementation and the Viewbox it uses.
Note: I'm using .NET 3.5 and WPF, so maybe his code works in other versions of .NET.
The comments were left in intentionally to help explain the code.
As you can see, the Viewbox needs to be normalized with respect to the source Visual's Height and Width.
The DrawingVisual needs to be drawn using the DrawingContext before it can be rendered.
In the RenderTargetBitmap call I tried both PixelFormats.Default and PixelFormats.Pbgra32; my testing results were the same with both.
Here is the code.
public static Color GetPixelColor(Visual visual, Point pt)
{
    Point ptDpi = getScreenDPI(visual);
    Size srcSize = VisualTreeHelper.GetDescendantBounds(visual).Size;

    // Viewbox uses values between 0 and 1, so normalize the Rect with respect to the visual's Height and Width.
    Rect percentSrcRec = new Rect(pt.X / srcSize.Width, pt.Y / srcSize.Height,
                                  1 / srcSize.Width, 1 / srcSize.Height);

    //var bmpOut = new RenderTargetBitmap(1, 1, 96d, 96d, PixelFormats.Pbgra32); // assumes 96 dpi
    var bmpOut = new RenderTargetBitmap((int)(ptDpi.X / 96d),
                                        (int)(ptDpi.Y / 96d),
                                        ptDpi.X, ptDpi.Y, PixelFormats.Default); // generalized for monitors with different dpi

    DrawingVisual dv = new DrawingVisual();
    using (DrawingContext dc = dv.RenderOpen())
    {
        dc.DrawRectangle(new VisualBrush { Visual = visual, Viewbox = percentSrcRec },
                         null, // no Pen
                         new Rect(0, 0, 1d, 1d));
    }
    bmpOut.Render(dv);

    var bytes = new byte[4];
    int iStride = 4; // = 4 * bmpOut.Width (for 32-bit graphics with 4 bytes per pixel -- 4 * 8 bits per byte = 32)
    bmpOut.CopyPixels(bytes, iStride, 0);

    return Color.FromArgb(bytes[0], bytes[1], bytes[2], bytes[3]);
}
If you are interested in the getScreenDPI() function, the code is:
public static Point getScreenDPI(Visual v)
{
    //System.Windows.SystemParameters
    PresentationSource source = PresentationSource.FromVisual(v);
    Point ptDpi;
    if (source != null)
    {
        ptDpi = new Point(96.0 * source.CompositionTarget.TransformToDevice.M11,
                          96.0 * source.CompositionTarget.TransformToDevice.M22);
    }
    else
    {
        ptDpi = new Point(96d, 96d); // default value
    }
    return ptDpi;
}
The usage is similar to Ray's. I show it here for a MouseDown on a Canvas:
private void cvsTest_MouseDown(object sender, MouseButtonEventArgs e)
{
    Point ptClicked = e.GetPosition(cvsTest);
    if (e.LeftButton.Equals(MouseButtonState.Pressed))
    {
        Color pxlColor = ImagingTools.GetPixelColor(cvsTest, ptClicked);
        MessageBox.Show("Color String = " + pxlColor.ToString());
    }
}
FYI, ImagingTools is the class where I keep static methods related to imaging.
WPF is vector based so it doesn't really have any concept of a "pixel" except within a bitmap data structure. However you can determine the average color of a rectangular area, including a 1x1 rectangular area (which generally comes out as a single pixel on the physical screen).
Here's how to do this:
public Color GetPixelColor(Visual visual, int x, int y)
{
    return GetAverageColor(visual, new Rect(x, y, 1, 1));
}

public Color GetAverageColor(Visual visual, Rect area)
{
    var bitmap = new RenderTargetBitmap(1, 1, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(
        new Rectangle
        {
            Width = 1, Height = 1,
            Fill = new VisualBrush { Visual = visual, Viewbox = area }
        });
    var bytes = new byte[4];
    bitmap.CopyPixels(bytes, 1, 0);
    return Color.FromArgb(bytes[0], bytes[1], bytes[2], bytes[3]);
}
Here is how you would use it:
Color pixelColor = GetPixelColor(canvas, x, y);
The way this code works is:
1. It fills a 1x1 Rectangle using a VisualBrush that shows the selected area of the canvas.
2. It renders this Rectangle onto a 1-pixel bitmap.
3. It gets the pixel color from the rendered bitmap.
On Microsoft Support, there is this article about finding the color of the pixel at the mouse cursor:
http://support.microsoft.com/kb/892462
