WPF Image Collision Detection - C#

I have some code which detects collisions:
public bool DetectCollision(ContentControl ctrl1, ContentControl ctrl2)
{
    Rect ctrl1Rect = new Rect(
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)) + ctrl1.ActualWidth,
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty)) + ctrl1.ActualHeight));
    Rect ctrl2Rect = new Rect(
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)) + ctrl2.ActualWidth,
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty)) + ctrl2.ActualHeight));

    ctrl1Rect.Intersect(ctrl2Rect);
    return !(ctrl1Rect == Rect.Empty);
}
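As an aside, Rect.IntersectsWith performs the same test without mutating either rectangle, so the last two lines can be reduced to one:
return ctrl1Rect.IntersectsWith(ctrl2Rect);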
It detects when two rectangles overlap. The ContentControls passed as parameters contain images. I want to detect when those images intersect, not the rectangles. The following image shows what I want:

Then you are not looking for rectangular collision detection but for pixel-level collision detection, and that is going to be much more processing intensive.
On top of the rectangular collision detection that you have already implemented, you will have to examine each pixel of both images in the overlapping rectangular region.
In the simplest case, if two overlapping pixels both have a non-transparent color, then you have a collision.
If you want to refine things, you can add thresholds, such as requiring a percentage of overlapping pixels in order to trigger a collision, or testing the combined alpha level of the pixels against a threshold instead of using any non-zero value.
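For illustration, here is a minimal sketch of that per-pixel test in WPF. It assumes both images are available as unscaled BitmapSources and that rect1 and rect2 are their bounds in canvas coordinates; the name PixelCollision and its parameters are hypothetical.
public static bool PixelCollision(BitmapSource img1, Rect rect1,
                                  BitmapSource img2, Rect rect2)
{
    Rect overlap = Rect.Intersect(rect1, rect2);
    if (overlap.IsEmpty) return false;

    int w = (int)overlap.Width, h = (int)overlap.Height;
    if (w == 0 || h == 0) return false;

    // Normalize both images to BGRA so the alpha byte sits at a known offset.
    var bgra1 = new FormatConvertedBitmap(img1, PixelFormats.Bgra32, null, 0);
    var bgra2 = new FormatConvertedBitmap(img2, PixelFormats.Bgra32, null, 0);

    // Copy only the overlapping region of each image, translated into
    // that image's local pixel coordinates.
    var px1 = new byte[w * h * 4];
    var px2 = new byte[w * h * 4];
    bgra1.CopyPixels(new Int32Rect((int)(overlap.X - rect1.X), (int)(overlap.Y - rect1.Y), w, h), px1, w * 4, 0);
    bgra2.CopyPixels(new Int32Rect((int)(overlap.X - rect2.X), (int)(overlap.Y - rect2.Y), w, h), px2, w * 4, 0);

    // Collision: any pixel pair where both alpha bytes are non-zero.
    for (int i = 3; i < px1.Length; i += 4)
        if (px1[i] > 0 && px2[i] > 0) return true;
    return false;
}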

You can try converting your images to Geometry objects and then checking whether those geometries collide. For that, the images need to be vector images; to convert an image to a vector image, you can check this open source project.
public static Point[] GetIntersectionPoints(Geometry g1, Geometry g2)
{
    Geometry og1 = g1.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));
    Geometry og2 = g2.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));

    CombinedGeometry cg = new CombinedGeometry(GeometryCombineMode.Intersect, og1, og2);
    PathGeometry pg = cg.GetFlattenedPathGeometry();

    Point[] result = new Point[pg.Figures.Count];
    for (int i = 0; i < pg.Figures.Count; i++)
    {
        // Take the center of each intersection figure's bounding box.
        Rect fig = new PathGeometry(new PathFigure[] { pg.Figures[i] }).Bounds;
        result[i] = new Point(fig.Left + fig.Width / 2.0, fig.Top + fig.Height / 2.0);
    }
    return result;
}
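If you only need a yes/no collision test rather than the intersection points, Geometry.FillContainsWithDetail answers it directly:
bool colliding = g1.FillContainsWithDetail(g2) != IntersectionDetail.Empty;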

Related

SkiaSharp Touch Bitmap Image

In the app I'm trying to develop, a key part is getting the position where the user has touched. First I thought of using a tap gesture recognizer, but after a quick Google search I learned that was useless (see here for an example).
Then I discovered SkiaSharp, and after learning how to use it, at least somewhat, I'm still not sure how to get the proper coordinates of a touch. Here are the sections of code in my project that are relevant to the problem.
Canvas Touch Function
private void canvasView_Touch(object sender, SKTouchEventArgs e)
{
    // Only carry on with this function if the image is already on screen.
    if (m_isImageDisplayed)
    {
        // Use switch to get what type of action occurred.
        switch (e.ActionType)
        {
            case SKTouchAction.Pressed:
                TouchImage(e.Location);
                // Update simply tries to draw a small square using double for loops.
                m_editedBm = Update(sender);
                // Refresh screen.
                (sender as SKCanvasView).InvalidateSurface();
                break;
            default:
                break;
        }
    }
}
Touch Image
private void TouchImage(SKPoint point)
{
    // Is the point in range of the canvas?
    if (point.X >= m_x && point.X <= (m_editedCanvasSize.Width + m_x) &&
        point.Y >= m_y && point.Y <= (m_editedCanvasSize.Height + m_y))
    {
        // Save the point for later and set the boolean to true so the algorithm can begin.
        m_clickPoint = point;
        m_updateAlgorithm = true;
    }
}
Here I'm just seeing, or TRYING to see, if the clicked point was in range of the image; I made a separate SKSize variable to help. Ignore the boolean, it's not that important.
Update function (the function that attempts to draw ON the point pressed, so it's the most important)
public SKBitmap Update(object sender)
{
    // Create the default test color to replace current pixel colors in the bitmap.
    SKColor color = new SKColor(255, 255, 255);

    // Create a new surface with the current bitmap.
    using (var surface = new SKCanvas(m_editedBm))
    {
        /* According to https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/finger-paint ,
           the points I start with are in Xamarin.Forms coordinates, but I need to translate them
           to SkiaSharp coordinates, which are in pixels. */
        Point pt = new Point((double)m_clickPoint.X, (double)m_clickPoint.Y);
        SKPoint newPoint = ConvertToPixel(pt);

        // Loop from the touch point up to a certain value (like x + 200) to get a "block" of altered pixels.
        for (int x = (int)newPoint.X; x < (int)newPoint.X + 200; ++x)
        {
            for (int y = (int)newPoint.Y; y < (int)newPoint.Y + 200; ++y)
            {
                // Change the color at (x, y).
                m_editedBm.SetPixel(x, y, color);
            }
        }
        return m_editedBm;
    }
}
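For reference, the ConvertToPixel helper is not shown in the question; following the linked finger-paint article, a typical implementation would look roughly like this (canvasView is assumed to be the SKCanvasView field):
SKPoint ConvertToPixel(Point pt)
{
    // Scale Xamarin.Forms device-independent coordinates to SkiaSharp pixel coordinates.
    return new SKPoint((float)(canvasView.CanvasSize.Width * pt.X / canvasView.Width),
                       (float)(canvasView.CanvasSize.Height * pt.Y / canvasView.Height));
}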
Here I'm THINKING that it'll start at the coordinate I pressed (and these coordinates have been confirmed to be within the range of the image thanks to the TouchImage function). When it gets the correct coordinates (or at least it SHOULD have done that), the square will be drawn one "line" at a time. I have a game programming background, so this kind of thing sounds simple, but I can't believe I didn't get it right the first time.
I also have another function that MIGHT prove worthwhile, because the original image is rotated before being put on screen. Why? By default the image, after taking the picture and displaying it, is rotated to the left. I have no idea why, but I corrected it with the following function:
// Just rotate the image, because for some reason it's tilted 90 degrees to the left.
public static SKBitmap Rotate()
{
    using (var bitmap = m_bm)
    {
        // The new bitmap's width IS the old one's height.
        var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
        using (var surface = new SKCanvas(rotated))
        {
            surface.Translate(rotated.Width, 0.0f);
            surface.RotateDegrees(90);
            surface.DrawBitmap(bitmap, 0, 0);
        }
        return rotated;
    }
}
I'll keep reading and looking up stuff on what I'm doing wrong, but if any help is given I'm grateful.

Creating a box in UI image using line renderer not working

I'm planning to create a square inside a UI image using a line renderer, but the square is so small that you need to zoom in to see it. If it's outside the UI image, it works. Please see the attached image below.
The LineRenderer component is attached to the redkey1spawn object.
I tried derHugo's code and it works, but somehow it overshoots on the screen.
Your problem is that the LineRenderer works with coordinates in Unity units.
A Screen Space Overlay canvas has pixel-size scaling, so the width and height (in Unity units) match up with the width and height (in pixels) of the window.
→ Since you add 4 points
0, 0, 0
2, 0, 0
2, -2, 0
0, -2, 0
in worldspace, on the canvas they will actually use e.g. 2px, -2px, 0px → very small.
You could e.g. multiply the sizes by the height or width of the image/canvas.
private void Start()
{
    var lineRenderer = GetComponent<LineRenderer>();
    var image = GetComponentInParent<RectTransform>();

    // Get the Unity worldspace coordinates of the image's corners.
    // Note: getting the scales like this of course only works
    // if the image is never rotated!
    var worldCorners = new Vector3[4];
    image.GetWorldCorners(worldCorners);
    var imageWorldSize = new Vector2(
        Mathf.Abs(worldCorners[0].x - worldCorners[2].x),
        Mathf.Abs(worldCorners[1].y - worldCorners[3].y));

    var positions = new Vector3[lineRenderer.positionCount];
    var pointnum = lineRenderer.GetPositions(positions);
    for (var i = 0; i < pointnum; i++)
    {
        positions[i] = positions[i] * imageWorldSize.x;
    }
    lineRenderer.SetPositions(positions);
}
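If the square overshoots with that snippet, one hedged variation is to scale each axis independently and normalize by the 2-unit extent of the original points (this assumes the points really span 0..2 on x and 0..-2 on y, as in the question):
for (var i = 0; i < pointnum; i++)
{
    // Scale each axis by the matching image dimension, divided by the
    // original square's 2-unit span.
    positions[i] = new Vector3(
        positions[i].x * imageWorldSize.x / 2f,
        positions[i].y * imageWorldSize.y / 2f,
        positions[i].z);
}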
Note, however, that I'm actually not even sure you will see this LineRenderer: since it is not a UI component, I'm pretty sure the Screen Space Overlay canvas will make every Image etc. always render on top of it.

ColorBlend issue - black stripe at the end

I want to create a linear gradient with 7 color steps and a custom size, from black through blue, cyan, green, yellow and red to white. My problem is that the final bitmap has a black stripe on the right side. Does anyone have an idea what's the matter?
public static List<Color> interpolateColorScheme(int size)
{
    // Create the result list for the interpolated colors.
    List<Color> colorList = new List<Color>();

    // Use a Bitmap and a Graphics from that bitmap.
    using (Bitmap bmp = new Bitmap(size, 200))
    using (Graphics G = Graphics.FromImage(bmp))
    {
        // Create an empty rectangle canvas.
        Rectangle rect = new Rectangle(Point.Empty, bmp.Size);

        // Use the LinearGradientBrush class for the gradient computation.
        LinearGradientBrush brush = new LinearGradientBrush(
            rect, Color.Empty, Color.Empty, 0, false);

        // Set up the ColorBlend object.
        ColorBlend colorBlend = new ColorBlend();
        colorBlend.Positions = new float[7];
        colorBlend.Positions[0] = 0;
        colorBlend.Positions[1] = 1 / 6f;
        colorBlend.Positions[2] = 2 / 6f;
        colorBlend.Positions[3] = 3 / 6f;
        colorBlend.Positions[4] = 4 / 6f;
        colorBlend.Positions[5] = 5 / 6f;
        colorBlend.Positions[6] = 1;

        // Blend the colors and copy them to the result list.
        colorBlend.Colors = new Color[7];
        colorBlend.Colors[0] = Color.Black;
        colorBlend.Colors[1] = Color.Blue;
        colorBlend.Colors[2] = Color.Cyan;
        colorBlend.Colors[3] = Color.Green;
        colorBlend.Colors[4] = Color.Yellow;
        colorBlend.Colors[5] = Color.Red;
        colorBlend.Colors[6] = Color.White;
        brush.InterpolationColors = colorBlend;

        G.FillRectangle(brush, rect);
        bmp.Save("gradient_debug_image_sarcus.png", ImageFormat.Png);

        for (int i = 0; i < size; i++) colorList.Add(bmp.GetPixel(i, 0));
        brush.Dispose();
    }

    // Return the interpolated colors.
    return colorList;
}
Here is my gradient:
I took your code and tried every size from 2 to ushort.MaxValue, generating the gradient and scanning from the right edge to determine how many black pixels there were.
For many sizes, there are no black pixels. However, for certain consecutive runs of sizes, as the size increases, the number of black pixels also increases. There are approximately 2140 such runs in the tested range. This implies that there is a rounding error in the gradient drawing.
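For reference, a minimal sketch of that scan, assuming the gradient bitmap produced by the question's code (the helper name is hypothetical):
static int CountTrailingBlackPixels(Bitmap bmp)
{
    int count = 0;
    // Walk row 0 from the right edge until the first non-black pixel.
    for (int x = bmp.Width - 1; x >= 0; x--)
    {
        Color c = bmp.GetPixel(x, 0);
        if (c.R != 0 || c.G != 0 || c.B != 0) break;
        count++;
    }
    return count;
}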
This bug has been encountered before (http://www.pcreview.co.uk/threads/error-on-lineargradientbrush.2165794/). The two solutions that link recommends are to
draw the gradient larger than you need it or
use WrapMode.TileFlipX.
What that link gets wrong is that the rounding error is not just 1 pixel at all times; at large image sizes it can be as large as 127 pixels (in the range I tested). Drawing the gradient larger than you need it requires you to know (or estimate) how much bigger you need to make the gradient. You could try scaling by (size + Math.Ceiling(size / 512.0)) / size, which is an upper bound on the error for the range of image sizes I have tested.
If you're looking for a simpler solution, specifying brush.WrapMode = WrapMode.TileFlipX will cause the brush to draw normally up to the (incorrect) edge of the gradient, then repeat the gradient in reverse until the actual edge of the specified rectangle. Since the rounding error is small compared to the size of the rectangle, this will look like the final color of the gradient has been extended to the edge of the rectangle. Visually, it looks good, but it may be unsuitable if you require very precise results.
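For illustration, a minimal sketch of the TileFlipX workaround, reusing the rect and colorBlend from the question's code; only the WrapMode line is new:
using (LinearGradientBrush brush = new LinearGradientBrush(
    rect, Color.Empty, Color.Empty, 0, false))
{
    brush.InterpolationColors = colorBlend;
    // Mirror the gradient past its (rounded) right edge so the stripe is
    // filled with the gradient's final color instead of black.
    brush.WrapMode = WrapMode.TileFlipX;
    G.FillRectangle(brush, rect);
}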

WPF InkCanvas access all pixels under the strokes

It seems that WPF's InkCanvas only provides the points of a stroke (independent of the stroke's width and height). For my application, I need to know all the points that are drawn by the InkCanvas.
For instance, assume that the width and height of the stroke are 16. Using this stroke size, I paint a dot on the InkCanvas. Is there a straightforward way to obtain all 256 pixels in this dot (and not just the center point of this giant dot)?
Why I care:
In my application, the user uses an InkCanvas to draw on top of a Viewport3D which is displaying a few 3D objects. I want to use all the points of the strokes to perform ray casting and determine which objects in the Viewport3D have been overlaid by the user's strokes.
I found a very dirty way of handling this. If anyone knows of a better method, I'll be more than happy to upvote and accept their response as an answer.
Basically my method involves getting the Geometry of each stroke, traversing all the points inside the boundaries of that geometry, and determining whether each point is inside the geometry.
Here's the code that I am using now:
foreach (var stroke in inkCanvas.Strokes)
{
    List<Point> pointsInside = new List<Point>();
    Geometry sketchGeo = stroke.GetGeometry();
    Rect strokeBounds = sketchGeo.Bounds;

    // Walk every integer point in the stroke's bounding box and keep
    // the ones that fall inside the stroke's geometry.
    for (int x = (int)strokeBounds.TopLeft.X; x < (int)strokeBounds.TopRight.X + 1; x++)
    {
        for (int y = (int)strokeBounds.TopLeft.Y; y < (int)strokeBounds.BottomLeft.Y + 1; y++)
        {
            Point p = new Point(x, y);
            if (sketchGeo.FillContains(p))
                pointsInside.Add(p);
        }
    }
}
You can use the StrokeCollection's HitTest method. I've compared the performance of your solution with this implementation and found that the HitTest method performs better. Your mileage may vary.
// get our position on our parent.
var ul = TranslatePoint(new Point(0, 0), this.Parent as UIElement);
// get our area rect
var controlArea = new Rect(ul, new Point(ul.X + ActualWidth, ul.Y + ActualHeight));
// hit test for any strokes that have at least 5% of their length in our area
var strokes = _formInkCanvas.Strokes.HitTest(controlArea, 5);
if (strokes.Any())
{
    // do something with this new knowledge
}
You can find the documentation here:
https://learn.microsoft.com/en-us/dotnet/api/system.windows.ink.strokecollection.hittest?view=netframework-4.7.2
Further, if you only care whether any point is in your rect, you can use the code below. It's an order of magnitude faster than StrokeCollection.HitTest because it doesn't care about percentages of strokes, so it does a lot less work.
private bool StrokeHitTest(Rect bounds, StrokeCollection strokes)
{
    for (int ix = 0; ix < strokes.Count; ix++)
    {
        var stroke = strokes[ix];
        // Use the Bezier-smoothed points if the stroke is fit to a curve,
        // otherwise the raw stylus points.
        var stylusPoints = stroke.DrawingAttributes.FitToCurve
            ? stroke.GetBezierStylusPoints()
            : stroke.StylusPoints;
        for (int i = 0; i < stylusPoints.Count; i++)
        {
            if (bounds.Contains((Point)stylusPoints[i]))
            {
                return true;
            }
        }
    }
    return false;
}
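Hypothetical usage, checking whether any stroke touches a 100x100 region at the canvas origin:
bool hit = StrokeHitTest(new Rect(0, 0, 100, 100), inkCanvas.Strokes);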

Monogame Shader Porting Issues

OK, so I ported a game I have been working on over to MonoGame; however, I'm having a shader issue now that it's ported. It's an odd bug: it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple one that looks at a greyscale image and, based on the grey, picks a color from the lookup texture. Basically I'm using this to randomize the sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but after that it just gives a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; // calculated in the program, passed to the shader (between 0 and 1)

sampler colorTableSampler =
sampler_state
{
    Texture = <colorTable>;
};

float4 PixelShaderFunction(float2 c : TEXCOORD0) : COLOR0
{
    // Get the current pixel of the (greyscale) texture.
    float4 color = tex2D(input, c);

    // Set the values to compare against. Note the float literals:
    // an integer division such as 139/255 would truncate to 0.
    float hair = 139.0/255.0; float hairless = 140.0/255.0;
    float shirt = 181.0/255.0; float shirtless = 182.0/255.0;

    // Variable to hold the new color, initialized so that no code path
    // returns an uninitialized value.
    float4 swap = color;

    // Pixel coordinate for the lookup.
    float2 i;
    i.y = 1;

    // Compare and swap.
    if (color.r >= hair && color.r <= hairless)
    {
        i.x = ((0.5 + seed + 96)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r >= shirt && color.r <= shirtless)
    {
        i.x = ((0.5 + seed + 64)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 1)
    {
        i.x = ((0.5 + seed + 32)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 0)
    {
        i.x = ((0.5 + seed)/128);
        swap = tex2D(colorTableSampler, i);
    }
    return swap;
}

technique ColorSwap
{
    pass Pass1
    {
        // TODO: set renderstates here.
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader; I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
    // Get a random number to pass into the shader.
    Random r = new Random();
    float seed = (float)r.Next(0, 32);

    // Create the texture to copy the color data into.
    Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);

    // Create a render target to draw a character to.
    RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
        false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
    gd.SetRenderTarget(rendTarget);

    // Set the background of the new render target to transparent.
    //gd.Clear(Microsoft.Xna.Framework.Color.Black);

    // Start drawing to the new render target.
    sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);

    // Send the random value to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
    // Send the palette texture to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
    // Apply the effect.
    Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();

    // Draw the texture (now with color!).
    sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);
    // End drawing.
    sb.End();

    // Reset the render target.
    gd.SetRenderTarget(null);

    // Copy the drawn and colored enemy into a non-volatile texture (instead of the render target).
    // Create a color array the size of the texture.
    Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
    // Get all the color data from the render target.
    rendTarget.GetData<Color>(cs);
    // Move the color data into the texture.
    enemyTex.SetData<Color>(cs);

    // Return the finished texture.
    return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have the "at" (@) sign in front of the string when you have also escaped the backslashes - unless you want to have \\ in your string, but that looks strange in a file path.
You wrote in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just reading through it, but this could really be tricky for someone to figure out just by looking at your code.
I'd recommend doing a graphics profile (via Visual Studio), capturing a frame which renders correctly and a frame which renders incorrectly, and comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built on HLSL shader model 2.
Also, the built-in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3; you will get issues if you try to replace the pixel portion of a shader with a different-level shader.
This is the only issue I can see with your code.
