A key part of the app I'm trying to develop is getting the position where the user has touched the screen. At first I thought of using a tap gesture recognizer, but after a quick Google search I learned that it wasn't useful for this (see here for an example).
Then I discovered SkiaSharp, and after learning how to use it, at least somewhat, I'm still not sure how to get the proper coordinates of a touch. Here are the sections of my project's code that are relevant to the problem.
Canvas Touch Function
private void canvasView_Touch(object sender, SKTouchEventArgs e)
{
    // Only carry on with this function if the image is already on screen.
    if (m_isImageDisplayed)
    {
        // Use switch to get what type of action occurred.
        switch (e.ActionType)
        {
            case SKTouchAction.Pressed:
                TouchImage(e.Location);
                // Update simply tries to draw a small square using double for loops.
                m_editedBm = Update(sender);
                // Refresh screen.
                (sender as SKCanvasView).InvalidateSurface();
                break;
            default:
                break;
        }
    }
}
Touch Image
private void TouchImage(SKPoint point)
{
    // Is the point in range of the canvas?
    if (point.X >= m_x && point.X <= (m_editedCanvasSize.Width + m_x) &&
        point.Y >= m_y && point.Y <= (m_editedCanvasSize.Height + m_y))
    {
        // Save the point for later and set the boolean to true so the algorithm can begin.
        m_clickPoint = point;
        m_updateAlgorithm = true;
    }
}
Here I'm just trying to check whether the touched point is within the bounds of the image, and I made a separate SKSize variable (m_editedCanvasSize) to help with that. Ignore the boolean; it's not important here.
Update function (the function that attempts to draw AT the pressed point, so it's the most important one)
public SKBitmap Update(object sender)
{
    // Create the default test color to replace current pixel colors in the bitmap.
    SKColor color = new SKColor(255, 255, 255);
    // Create a new surface backed by the current bitmap.
    using (var surface = new SKCanvas(m_editedBm))
    {
        /* According to this: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/finger-paint ,
           the points I have to start with are in Xamarin.Forms coordinates, but I need to translate them to SkiaSharp coordinates,
           which are in pixels. */
        Point pt = new Point((double)m_clickPoint.X, (double)m_clickPoint.Y);
        SKPoint newPoint = ConvertToPixel(pt);
        // Loop from the touch point up to a fixed offset (like x + 200) just to get a "block" of altered pixels.
        for (int x = (int)newPoint.X; x < (int)newPoint.X + 200; ++x)
        {
            for (int y = (int)newPoint.Y; y < (int)newPoint.Y + 200; ++y)
            {
                // According to the x and y, change the color.
                m_editedBm.SetPixel(x, y, color);
            }
        }
        return m_editedBm;
    }
}
Here I'm expecting the loop to start at the coordinate I pressed (and those coordinates have been confirmed to be within the range of the image by the TouchImage function). Once it has the correct coordinates (or at least it should have them), the square gets drawn one "line" of pixels at a time. I have a game programming background, so this sounds like it should be simple, but I can't believe I didn't get it right the first time.
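For reference, ConvertToPixel isn't shown above; following the linked finger-paint article, it scales the Xamarin.Forms coordinates by the ratio of the canvas pixel size to the view's device-independent size. A rough sketch, assuming canvasView is the SKCanvasView the touch came from:
// Sketch of the conversion from the Microsoft finger-paint sample:
// Xamarin.Forms device-independent units -> SkiaSharp pixel coordinates.
SKPoint ConvertToPixel(Point pt)
{
    return new SKPoint(
        (float)(canvasView.CanvasSize.Width * pt.X / canvasView.Width),
        (float)(canvasView.CanvasSize.Height * pt.Y / canvasView.Height));
}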
I also have another function that might be relevant, because the original image is rotated before it's put on screen. Why? By default, after the picture is taken and displayed, it appears rotated 90 degrees to the left. I had no idea why, but I corrected it with the following function:
// Just rotate the image because for some reason it's tilted 90 degrees to the left.
public static SKBitmap Rotate()
{
    using (var bitmap = m_bm)
    {
        // The new bitmap's width IS the old one's height.
        var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
        using (var surface = new SKCanvas(rotated))
        {
            surface.Translate(rotated.Width, 0.0f);
            surface.RotateDegrees(90);
            surface.DrawBitmap(bitmap, 0, 0);
        }
        return rotated;
    }
}
I'll keep reading and looking up stuff on what I'm doing wrong, but if any help is given I'm grateful.
I've started using LightningChart in my real-time monitoring application. In my app there are many Y axes, which use a segmented layout (one Y axis per segment):
mainChart.ViewXY.AxisLayout.YAxesLayout = YAxesLayout.Segmented;
My goal is that when you click a segment with the mouse, it gets larger relative to the other segments (kind of like a zoom effect) and the other segments get smaller. When you click it again, it goes back to normal.
I know I can change the size of the segments with:
mainChart.ViewXY.AxisLayout.Segments[segmentNumber].Height = someValue;
That takes care of the zooming effect.
Now the problem is: how can I work out which segment was actually clicked? I figured out that you can get the mouse position via the MouseClick event (e.MousePos), but that seems to give only screen coordinates, so I'm not sure it helps.
I'm using the LightningChart version 8.4.2
You are correct that getting the mouse position via the MouseClick event is the key here. The screen coordinates you get via e.GetPosition (not e.MousePos) can be converted to chart axis values with the CoordToValue() method. Then you just compare the Y coordinate against each Y axis's minimum/maximum values to find out which segment was clicked. Here is an example:
_chart.MouseClick += _chart_MouseClick;
private void _chart_MouseClick(object sender, MouseButtonEventArgs e)
{
    var mousePos = e.GetPosition(_chart).Y;
    double axisPos = 0;
    bool isWithinYRange = false;
    foreach (AxisY ay in _chart.ViewXY.YAxes)
    {
        ay.CoordToValue((float)mousePos, out axisPos, true);
        if (axisPos >= ay.Minimum && axisPos <= ay.Maximum)
        {
            // Segment clicked, get the index via ay.SegmentIndex
            isWithinYRange = true;
        }
    }
    if (!isWithinYRange)
    {
        // Not in any segment
    }
}
After finding out the segment index, you can modify its height as you described:
_chart.ViewXY.AxisLayout.Segments[0].Height = 1.5;
Note that Height is the segment's height relative to the other segments.
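Once you have the clicked segment's index, the zoom toggle could then look something like this sketch (_zoomedSegment is a hypothetical field, and the relative heights are arbitrary):
// Toggle the zoom: enlarge the clicked segment, or restore equal sizes on a second click.
private int _zoomedSegment = -1;

private void ToggleSegmentZoom(int clickedSegment)
{
    var segments = _chart.ViewXY.AxisLayout.Segments;
    bool zoomIn = _zoomedSegment != clickedSegment;
    for (int i = 0; i < segments.Count; i++)
    {
        // Heights are relative to each other: 3.0 for the zoomed segment, 1.0 for the rest.
        segments[i].Height = (zoomIn && i == clickedSegment) ? 3.0 : 1.0;
    }
    _zoomedSegment = zoomIn ? clickedSegment : -1;
}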
Hope this is helpful.
It seems that WPF's InkCanvas is only able to provide the points of the stroke (independent of the width and height of the stroke). For an application, I need to know all the points that are drawn by the InkCanvas.
For instance, assume that the width and height of the stroke are 16. Using this stroke size I paint a dot on the InkCanvas. Is there a straightforward way to obtain all 256 pixels in this dot (and not the center point of this giant dot alone)?
Why I care:
In my application, the user uses an InkCanvas to draw on top of a Viewport3D which is displaying a few 3D objects. I want to use all the points of the strokes to perform ray casting and determine which objects in the Viewport3D have been overlaid by the user's strokes.
I found a very dirty way of handling this. If anyone knows of a better method, I'll be more than happy to upvote and accept their response as an answer.
Basically my method involves getting the Geometry of each stroke, traversing all the points inside the boundaries of that geometry and determining whether the point is inside the geometry or not.
Here's the code that I am using now:
foreach (var stroke in inkCanvas.Strokes)
{
    List<Point> pointsInside = new List<Point>();
    Geometry sketchGeo = stroke.GetGeometry();
    Rect strokeBounds = sketchGeo.Bounds;
    for (int x = (int)strokeBounds.TopLeft.X; x < (int)strokeBounds.TopRight.X + 1; x++)
    {
        for (int y = (int)strokeBounds.TopLeft.Y; y < (int)strokeBounds.BottomLeft.Y + 1; y++)
        {
            Point p = new Point(x, y);
            if (sketchGeo.FillContains(p))
                pointsInside.Add(p);
        }
    }
}
You can use the StrokeCollection's HitTest method. I've compared the performance of your solution with this implementation and found that the HitTest method performs better. Your mileage may vary, etc.
// get our position on our parent
var ul = TranslatePoint(new Point(0, 0), this.Parent as UIElement);
// get our area rect
var controlArea = new Rect(ul, new Point(ul.X + ActualWidth, ul.Y + ActualHeight));
// hit test for any strokes that have at least 5% of their length in our area
var strokes = _formInkCanvas.Strokes.HitTest(controlArea, 5);
if (strokes.Any())
{
    // do something with this new knowledge
}
You can find the documentation here:
https://learn.microsoft.com/en-us/dotnet/api/system.windows.ink.strokecollection.hittest?view=netframework-4.7.2
Further, if you only care whether any point is in your rect, you can use the code below. It's an order of magnitude faster than StrokeCollection.HitTest because it doesn't care about percentages of strokes, so it does a lot less work.
private bool StrokeHitTest(Rect bounds, StrokeCollection strokes)
{
    for (int ix = 0; ix < strokes.Count; ix++)
    {
        var stroke = strokes[ix];
        var stylusPoints = stroke.DrawingAttributes.FitToCurve
            ? stroke.GetBezierStylusPoints()
            : stroke.StylusPoints;
        for (int i = 0; i < stylusPoints.Count; i++)
        {
            if (bounds.Contains((Point)stylusPoints[i]))
            {
                return true;
            }
        }
    }
    return false;
}
I'm experiencing a discrepancy between a GraphicsPath drawn in world coordinates on a UserControl and the results of using GraphicsPath.IsVisible() to hit-test the shape with the mouse.
I performed a little test that made a map of where IsVisible() returned true, relative to the GraphicsPath shape that was drawn. The results show a very "low resolution" version of the shape I'm drawing.
Link to shared Google Drive image showing the results:
http://goo.gl/zd6xiM
Is there something I'm doing or not doing correctly that's causing this?
Thanks!
Here's the majority of my OnMouseMove() event handler:
protected override void OnMouseMove(MouseEventArgs e)
{
    //base.OnMouseMove(e);
    debugPixel = Point.Empty;
    PointF worldPosition = ScreenToWorld(PointToClient(Cursor.Position));
    if (_mouseStart == Point.Empty) // Just moving mouse around, no buttons pressed
    {
        _objectUnderMouse = null;
        // Hit test mouse position against each canvas object to see if we're over top of anything
        for (int index = 0; index < _canvasObjects.Count; index++) // Uses front-to-back order
        {
            NPCanvasObject canvasObject = _canvasObjects[index];
            if (canvasObject is NPCanvasPart)
            {
                NPCanvasPart canvasPart = (canvasObject as NPCanvasPart);
                NPPart part = canvasPart.Part;
                GraphicsPath gp = canvasPart.GraphicsPath;
                // Set the object under the mouse cursor, and move it to the "front" so it draws on top of everything else
                if (gp.IsVisible(worldPosition))
                {
                    // DEBUG
                    debugPixel.X = e.X;
                    debugPixel.Y = e.Y;
                    _objectUnderMouse = canvasObject;
                    _canvasObjects.MoveItemAtIndexToFront(_canvasObjects.IndexOf(canvasObject));
                    break; // Since we're modifying the collection we're iterating through, we can't reliably continue past this point
                }
            }
        }
    }
    else
    {
        ...
    }
}
Later, in my drawing code, I draw a pixel whenever debugPixel != Point.Empty. I temporarily suppressed clearing before drawing so I could see them all.
Some other info that may be asked for, or that could help with troubleshooting:
I've tried different Graphics.InterpolationMode settings, but that doesn't seem to have any effect.
I've applied a TranslateTransform and ScaleTransform to the main drawing Graphics, but the underlying hit-test map seems to scale and translate in step with the GraphicsPath.
For my main drawing canvas, Graphics.PageUnit = GraphicsUnit.Inch, except when I'm doing pixel-based overlay work.
I thought I had researched this thoroughly enough, but apparently not. Shortly after posting this question I did another search with slightly different terms and found this:
http://vbcity.com/forums/t/72877.aspx
...which was enough to clue me in that the GraphicsPath's hit testing and my main drawing Graphics were not using the same settings. Using the overloaded GraphicsPath.IsVisible(PointF, Graphics) solved this problem very nicely.
Essentially it was trying to check against a very aliased (pixelated) version of my shape that had been scaled to the same size but not smoothed.
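For illustration, the hit test in OnMouseMove would then change roughly like this, passing in a Graphics configured the same way as the drawing code (a sketch; the page unit and transforms shown are placeholders for whatever the drawing code actually uses):
// Hit test against a Graphics carrying the same page unit and transforms as the drawing code,
// so IsVisible evaluates the path at the drawing resolution instead of a default one.
using (Graphics g = CreateGraphics())
{
    g.PageUnit = GraphicsUnit.Inch;  // match the main drawing canvas
    // Apply the same TranslateTransform/ScaleTransform used when drawing here.
    if (gp.IsVisible(worldPosition, g))
    {
        _objectUnderMouse = canvasObject;
    }
}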
This is the first time I've ever asked a question here, so correct me if I'm doing it wrong.
Picture of my chess set:
Every time I move a piece, it lags for about 1 second. Every piece and tile has an Image, and there are exactly 96 Images. Every time I move a piece, the code clears everything with black and then updates the graphics.
In the early stages of the chess program I didn't have any Images; I used different colors instead and had only a few pieces. There was no noticeable lag, and a piece moved in an instant.
public void updateGraphics(PaintEventArgs e, Graphics g, Bitmap frame)
{
    g = Graphics.FromImage(frame);
    g.Clear(Color.Black);
    colorMap(g);
    g.Dispose();
    e.Graphics.DrawImageUnscaled(frame, 0, 0);
}
The function colorMap(g) looks like this:
private void colorMap(Graphics g)
{
    for (int y = 0; y < SomeInts.amount; y++)
    {
        for (int x = 0; x < SomeInts.amount; x++)
        {
            // Tiles
            Bundle.tile[x, y].colorBody(g, x, y);
            // Pieces
            player1.colorAll(g);
            player2.colorAll(g);
        }
    }
}
The colorAll function calls every piece's colorBody(g) function, which looks like this:
public void colorBody(Graphics g)
{
    // base.colorBody() does the following:
    // body = new Rectangle(x * SomeInts.size + SomeInts.size / 4, y * SomeInts.size + SomeInts.size / 4, size, size);
    base.colorBody();
    if (team == 1)
    {
        // If it's a white queen
        image = Image.FromFile("textures/piece/white/queen.png");
    }
    if (team == 2)
    {
        // If it's a black queen
        image = Image.FromFile("textures/piece/black/queen.png");
    }
    g.DrawImage(image, body);
}
And finally, the function that moves the piece:
public void movePiece(MouseEventArgs e)
{
    for (int y = 0; y < SomeInts.amount; y++)
    {
        for (int x = 0; x < SomeInts.amount; x++)
        {
            if (Bundle.tile[x, y].body.Contains(e.Location))
            {
                // Ignore this
                for (int i = 0; i < queens.Count; i++)
                {
                    Queen temp = queens.ElementAt<Queen>(i);
                    temp.move(x, y);
                }
                // Relevant
                player1.move(x, y);
                player2.move(x, y);
            }
        }
    }
}
Thank you for reading all this! I could post a link to the whole program if my code examples are not enough.
You're calling Image.FromFile on every refresh, for every image, effectively reloading every image file from disk each time.
Have you considered loading the images once and storing the resulting Images somewhere useful? (Say, an array: Image[2,6] would be adequate.)
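Something along these lines, as a sketch (the piece names and the LoadPieceImages call site are assumptions; the paths follow the question's textures/piece/... layout):
// Hypothetical cache: load each piece image once at startup, indexed by [team, piece].
private static readonly Image[,] PieceImages = new Image[2, 6];

private static void LoadPieceImages()
{
    string[] teams = { "white", "black" };
    string[] pieces = { "pawn", "rook", "knight", "bishop", "queen", "king" };
    for (int t = 0; t < teams.Length; t++)
    {
        for (int p = 0; p < pieces.Length; p++)
        {
            PieceImages[t, p] = Image.FromFile("textures/piece/" + teams[t] + "/" + pieces[p] + ".png");
        }
    }
}
colorBody would then just pick the cached image (for example PieceImages[team - 1, pieceIndex]) instead of calling Image.FromFile.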
Why do you redraw the board each time? Can't you just leave the board where it is and display an image with a transparent background over it? That way you have one image as the background (the board), plus 64 smaller images placed over the board in a grid, and you just change the image being displayed on each move.
That way, you can let Windows handle the drawing...
Also, load the images of the pieces at the start of the application.
In addition to not calling Image.FromFile() inside updateGraphics() (which is definitely your biggest issue), you shouldn't attempt to redraw the entire board on every call to updateGraphics(); most of the time, only a small portion of the board will be invalidated.
The PaintEventArgs contains a property, ClipRectangle, which specifies which portion of the board needs redrawing. See if you can figure out which tiles intersect that rectangle, and only redraw those tiles :)
Hint: Write a function Point ScreenToTileCoords(Point) which takes a screen coordinate and returns which board tile is at that coordinate. Then the only tiles you need to redraw are:
Point upperLeftTileToBeDrawn = ScreenToTileCoords(e.ClipRectangle.Left, e.ClipRectangle.Top);
Point lowerRightTileToBeDrawn = ScreenToTileCoords(e.ClipRectangle.Right - 1, e.ClipRectangle.Bottom - 1);
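A minimal sketch of that helper, written with two int parameters to match the calls above, and assuming square tiles of SomeInts.size pixels with the board starting at the control's origin:
// Hypothetical helper: map a pixel coordinate to the board tile containing it.
// Assumes SomeInts.size is the tile size in pixels and the board starts at (0, 0).
private Point ScreenToTileCoords(int x, int y)
{
    return new Point(x / SomeInts.size, y / SomeInts.size);
}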
Also, make sure your control is double-buffered to avoid tearing. This is much simpler than @Steve B's link in the comments above states; assuming this is a UserControl, simply set
this.DoubleBuffered = true;
Well, what about this:
Do not clear the whole board but only those parts that need to be cleared.
Alternative:
Update to WPF, which moves drawing to the graphics card, and just move the pieces around in a smart way (i.e. have a control / object for every piece).
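As a rough sketch of that idea (boardCanvas, tileSize, and the file path are hypothetical here), each piece becomes an Image control on a Canvas, and a move just repositions it instead of repainting the whole board:
// Create the piece once, when the board is set up.
var whiteQueen = new Image
{
    Source = new BitmapImage(new Uri("textures/piece/white/queen.png", UriKind.Relative)),
    Width = tileSize,
    Height = tileSize
};
boardCanvas.Children.Add(whiteQueen);

// Moving the piece is then just a matter of changing its position on the Canvas.
Canvas.SetLeft(whiteQueen, x * tileSize);
Canvas.SetTop(whiteQueen, y * tileSize);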
using Cairo;
I have drawn a rectangle inside a bigger rectangle, which is inside a drawing area.
I have managed to attach an event to the drawing area (an object I have extended from DrawingArea):
this.AddEvents((int)EventMask.ButtonPressMask);
this.ButtonPressEvent += delegate(object o, ButtonPressEventArgs args) {
    hasInterface(args.Event.X, args.Event.Y);
    Console.WriteLine("Button Pressed " + args.Event.X + ", " + args.Event.Y);
};
I'm dynamically drawing the squares using:
cr.Translate(width/2, height/2);
cr.Rectangle((pX + (i * tmp)) , pY, boxsize, boxsize);
private void recordPosition(double x, double y)
{
    x = x * 2;
    y = y * 2;
    boxCoordinates.Add(new double[,]
    {
        { x, y }
    }); // store coords
}
List<double[,]> boxCoordinates;
So, from inside the drawing area, the square is drawn at x=0, y=0; from the "outside" point of view it's at x=90, y=45 (the drawing area's width = 180 and height = 100).
I was using Translate (since half of this code is copied) by size/2, so I assumed the drawing area was resizing the square; to solve this I saved the positions multiplied by 2, but this is not working, as I'm getting "hits" outside of the drawn rectangle.
What is the best way to do this? I mean, to translate the X/Y positions from the window to the drawing area. I saw this was possible in other languages, but I'm not sure how to do it in C# with the drawing area from Mono.
Thanks for any help.
I've done this a few times in C with SDL and in C# with Cairo. Basically, you want to be able to convert the bounding box of each of your rectangles to and from the coordinates you are using for rendering on the Cairo canvas.
For each of your rectangles, you'll have its location in its own world. I like to call these the "world coordinates", as opposed to the "screen coordinates" (which map to where your mouse will be).
You can store the world coordinates of each box and then translate them to screen ones for each frame you render.
public class Shape {
    // World-space location of the shape (Cairo.PointD holds double X/Y).
    public PointD WorldLoc { get; set; }
}
You would do all your physics (if you have any) on the WorldLoc values. When you come to render, you want to be able to convert a WorldLoc to a screen location.
public class Scene {
    public double Zoom;
    public PointD Offset;

    // Map a world-space point to screen/canvas coordinates.
    public PointD WorldToScreen(PointD world) {
        var p = new PointD();
        p.X = (world.X - Offset.X) * Zoom;
        p.Y = (world.Y - Offset.Y) * Zoom;
        return p;
    }
}
Each time you render something in this Scene, you'll use WorldToScreen() to get the screen coordinates. You can then use the same mapping in reverse to work out whether your mouse is inside the screen-space box of a world-space box.
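For the mouse side, the inverse mapping is just as short; a sketch, added to the same hypothetical Scene class:
// Inverse of WorldToScreen: convert a screen/mouse coordinate back into world space,
// so button-press positions can be compared against the stored box coordinates.
public PointD ScreenToWorld(PointD screen) {
    var p = new PointD();
    p.X = screen.X / Zoom + Offset.X;
    p.Y = screen.Y / Zoom + Offset.Y;
    return p;
}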