Leaflet set rectangle coordinates from mouse events - C#

I'm trying to allow the user to set one corner of a rectangle to draw on the mouse down event; when the mouse up event fires I'd like to set the opposing corner coordinates and draw the rectangle. I have tried the following JavaScript in my *.aspx page:
var oneCorner;
var TwoCroner;
map.on('mousedown', setOneCorner);
map.on('mouseup', setTwoCorner);

function setOneCorner(e)
{
    oneCorner = e.latlng;
}

function setTwoCorner(e)
{
    twoCorner = e.latlng;
    var bounds = [oneCorner.latlng, twoCorner.latlng];
    L.rectangle(bounds, {color: "#ff7800", weight: 1}).addTo(map);
}
My tiles still pan on the mouse down event, but I'd like to be able to draw a rectangle wherever I want. How should I go about doing this?

If you don't want your map to pan, you can add
map.dragging.disable()
to your code. Also, your bounds array should be var bounds = [oneCorner, twoCorner];
because the corner variables are already LatLng objects.
The full code would be:
var oneCorner;
var twoCorner;
map.on('mousedown', setOneCorner);
map.on('mouseup', setTwoCorner);
map.dragging.disable();

function setOneCorner(e)
{
    oneCorner = e.latlng;
}

function setTwoCorner(e)
{
    twoCorner = e.latlng;
    var bounds = [oneCorner, twoCorner];
    L.rectangle(bounds, {color: "#ff7800", weight: 1}).addTo(map);
}
But I don't think it's a good idea to prevent map panning entirely. What about only drawing while Ctrl is pressed? Or you can use this plugin for drawing: https://github.com/Leaflet/Leaflet.draw
EDIT: Version that only draws while Ctrl is pressed:
var oneCorner;
var twoCorner;
map.on('mousedown', setOneCorner);
map.on('mouseup', setTwoCorner);

function setOneCorner(e)
{
    if (e.originalEvent.ctrlKey) {
        map.dragging.disable();
        oneCorner = e.latlng;
    }
}

function setTwoCorner(e)
{
    if (e.originalEvent.ctrlKey) {
        twoCorner = e.latlng;
        var bounds = [oneCorner, twoCorner];
        L.rectangle(bounds, {color: "#ff7800", weight: 1}).addTo(map);
    }
    map.dragging.enable();
}

Related

Firing MouseDown events for overlapping Rectangles on a Canvas

I have a WPF window containing a Canvas which is populated with rotated Rectangles in code. The rectangles each have a MouseDown event and their positions will be distributed according to coordinates provided by the user. Often two or more will overlap, partially obstructing the rectangle beneath it.
I need the MouseDown event to fire for each rectangle that is under the mouse when it is pressed, even if that rectangle is obstructed by another rectangle, but I am only getting the MouseDown event for the topmost rectangle.
I have tried setting e.Handled for the clicked rectangle and routing the events through the Canvas with no luck, and even gone as far as trying to locate the objects beneath the mouse based on their coordinates, but the rotation of the rectangles makes that difficult to calculate.
public MainWindow()
{
    InitializeComponent();

    Rectangle r1 = new Rectangle() { Width = 80, Height = 120, Fill = Brushes.Blue };
    r1.MouseDown += r_MouseDown;
    RotateTransform rt1 = new RotateTransform(60);
    r1.RenderTransform = rt1;
    Canvas.SetLeft(r1, 150);
    Canvas.SetTop(r1, 50);
    canvas1.Children.Add(r1);

    Rectangle r2 = new Rectangle() { Width = 150, Height = 50, Fill = Brushes.Green };
    r2.MouseDown += r_MouseDown;
    RotateTransform rt2 = new RotateTransform(15);
    r2.RenderTransform = rt2;
    Canvas.SetLeft(r2, 100);
    Canvas.SetTop(r2, 100);
    canvas1.Children.Add(r2);
}

private void r_MouseDown(object sender, MouseButtonEventArgs e)
{
    Console.WriteLine("Rectangle Clicked");
}
There is another question similar to this one, but it has no accepted answer and it is quite unclear what the final solution should be. Let's see if we can be a little clearer.
First off, the solution outlined below uses the VisualTreeHelper.HitTest method to identify whether the mouse click hit your rectangles. VisualTreeHelper lets us find the rectangles even after they have been moved around by things like Canvas.SetTop and the various RenderTransform operations.
Secondly, we are going to capture the click event on your canvas element rather than on the individual rectangles. This allows us to handle things at the canvas level and check all the rectangles at once.
public MainWindow()
{
    InitializeComponent();

    // Additional rectangle for testing.
    Rectangle r3 = new Rectangle() { Width = 175, Height = 80, Fill = Brushes.Goldenrod };
    Canvas.SetLeft(r3, 80);
    Canvas.SetTop(r3, 80);
    canvas1.Children.Add(r3);

    Rectangle r1 = new Rectangle() { Width = 80, Height = 120, Fill = Brushes.Blue };
    RotateTransform rt1 = new RotateTransform(60);
    r1.RenderTransform = rt1;
    Canvas.SetLeft(r1, 100);
    Canvas.SetTop(r1, 100);
    canvas1.Children.Add(r1);

    Rectangle r2 = new Rectangle() { Width = 150, Height = 50, Fill = Brushes.Green };
    RotateTransform rt2 = new RotateTransform(15);
    r2.LayoutTransform = rt2;
    Canvas.SetLeft(r2, 100);
    Canvas.SetTop(r2, 100);
    canvas1.Children.Add(r2);

    // Mouse 'click' event.
    canvas1.PreviewMouseDown += canvasMouseDown;
}

// List to store the hit test results.
private List<HitTestResult> hitResultsList = new List<HitTestResult>();
The HitTest overload being used here is the more involved one, because the simplest version of that method returns only a single "topmost" item rather than every visual under the point. In order to get all of the rectangles, we need to use the callback-based version of HitTest shown below.
private void canvasMouseDown(object sender, MouseButtonEventArgs e)
{
    if (canvas1.Children.Count > 0)
    {
        // Retrieve the coordinates of the mouse position.
        Point pt = e.GetPosition((UIElement)sender);

        // Clear the contents of the list used for hit test results.
        hitResultsList.Clear();

        // Set up a callback to receive the hit test result enumeration.
        VisualTreeHelper.HitTest(canvas1,
            new HitTestFilterCallback(MyHitTestFilter),
            new HitTestResultCallback(MyHitTestResult),
            new PointHitTestParameters(pt));

        // Perform actions on the hit test results list.
        if (hitResultsList.Count > 0)
        {
            string msg = null;
            foreach (HitTestResult htr in hitResultsList)
            {
                Rectangle r = (Rectangle)htr.VisualHit;
                msg += r.Fill.ToString() + "\n";
            }

            // Message displaying the fill colors of all the rectangles
            // under the mouse when it was clicked.
            MessageBox.Show(msg);
        }
    }
}
// Filter the hit test values for each object in the enumeration.
private HitTestFilterBehavior MyHitTestFilter(DependencyObject o)
{
    // Test for the object value you want to filter.
    if (o.GetType() == typeof(Label))
    {
        // Visual object and descendants are NOT part of hit test results enumeration.
        return HitTestFilterBehavior.ContinueSkipSelfAndChildren;
    }
    else
    {
        // Visual object is part of hit test results enumeration.
        return HitTestFilterBehavior.Continue;
    }
}

// Add the hit test result to the list of results.
private HitTestResultBehavior MyHitTestResult(HitTestResult result)
{
    // Filter out the canvas object.
    if (!result.VisualHit.ToString().Contains("Canvas"))
    {
        hitResultsList.Add(result);
    }

    // Set the behavior to return visuals at all z-order levels.
    return HitTestResultBehavior.Continue;
}
The test example above just displays a message box showing the fill colors of all rectangles under the mouse pointer when it was clicked, verifying that VisualTreeHelper did in fact retrieve all the rectangles in the stack.

StreamGeometry PolyLineTo performance

I have created a custom shape that draws an ObservableCollection of points using a StreamGeometry. When the signal surpasses the width of the canvas it is drawn on, it wraps around to the beginning. Two polylines are used: one from the beginning of the canvas to the last added point, and one from the first added point in the current collection to the end of the canvas:
protected override Geometry DefiningGeometry
{
    get
    {
        if (this.Points == null || !this.Points.Any())
        {
            return Geometry.Empty;
        }

        var firstGeometryPoints = this.Points.Where(p => p.Point.X >= this.Points.First().Point.X);
        var secondGeometryPoints = this.Points.Except(firstGeometryPoints);

        var geometry = new StreamGeometry();
        using (var context = geometry.Open())
        {
            context.BeginFigure(firstGeometryPoints.First().Point, true, false);
            context.PolyLineTo(firstGeometryPoints.Skip(1).Select(x => x.Point).ToArray(), true, true);
            if (secondGeometryPoints.Any())
            {
                context.BeginFigure(secondGeometryPoints.First().Point, true, false);
                context.PolyLineTo(secondGeometryPoints.Skip(1).Select(x => x.Point).ToArray(), true, true);
            }
            geometry.Freeze();
        }
        return geometry;
    }
}
Whenever the Points collection changes I use InvalidateVisual() to force a redraw of the shape. But since data points are added at 500 samples per second, the application starts to lag. Is there another way of creating this type of graph without redrawing the entire shape on every sample?
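One common way to reduce that load (a sketch of mine, not from the original thread; the class and member names are hypothetical) is to decouple sample ingestion from drawing: mark the shape as dirty when points arrive and call InvalidateVisual() from a timer, so the geometry is rebuilt at most around 30 times per second instead of 500:
using System;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Threading;

// Hypothetical throttled shape; the real DefiningGeometry from the question is omitted here.
public class ThrottledSignalShape : Shape
{
    private readonly DispatcherTimer redrawTimer;
    private bool pointsDirty;

    public ThrottledSignalShape()
    {
        redrawTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(33) }; // ~30 redraws/s
        redrawTimer.Tick += (s, e) =>
        {
            if (pointsDirty)
            {
                pointsDirty = false;
                InvalidateVisual(); // one redraw per tick instead of one per sample
            }
        };
        redrawTimer.Start();
    }

    // Call this from the Points CollectionChanged handler instead of InvalidateVisual().
    public void MarkDirty() => pointsDirty = true;

    // Placeholder; build the wrapped StreamGeometry here as in the question.
    protected override Geometry DefiningGeometry => Geometry.Empty;
}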

Xamarin - iOS Multiple polygons on map

I am currently following this tutorial for adding a polygon to a map. I need to be able to add multiple polygons to my map, so I have slightly altered the code to use AddOverlays, which takes an array of IMKOverlay objects, instead of AddOverlay, which takes a single IMKOverlay object.
This doesn't work however... It only draws the first polygon on the map!
void addPolygonsToMap()
{
    overlayList = new List<IMKOverlay>();
    for (int i = 0; i < polygons.Count; i++)
    {
        CLLocationCoordinate2D[] coords = new CLLocationCoordinate2D[polygons[i].Count];
        int index = 0;
        foreach (var position in polygons[i])
        {
            coords[index] = new CLLocationCoordinate2D(position.Latitude, position.Longitude);
            index++;
        }
        var blockOverlay = MKPolygon.FromCoordinates(coords);
        overlayList.Add(blockOverlay);
    }
    IMKOverlay[] imko = overlayList.ToArray();
    nativeMap.AddOverlays(imko);
}
In this discussion, it would appear that I have to create a new instance of MKPolygonRenderer each time I need to add another polygon to my map, but I'm unsure how that example translates to my code. Here is my MKPolygonRenderer function:
MKOverlayRenderer GetOverlayRenderer(MKMapView mapView, IMKOverlay overlayWrapper)
{
    if (polygonRenderer == null && !Equals(overlayWrapper, null))
    {
        var overlay = Runtime.GetNSObject(overlayWrapper.Handle) as IMKOverlay;
        polygonRenderer = new MKPolygonRenderer(overlay as MKPolygon)
        {
            FillColor = UIColor.Red,
            StrokeColor = UIColor.Blue,
            Alpha = 0.4f,
            LineWidth = 9
        };
    }
    return polygonRenderer;
}
Create a new renderer instance each time OverlayRenderer is called; there is no need to cache the renderer in a class-level variable, as the MKMapView will cache the renderers as needed.
Subclass MKMapViewDelegate:
class MyMapDelegate : MKMapViewDelegate
{
    public override MKOverlayRenderer OverlayRenderer(MKMapView mapView, IMKOverlay overlay)
    {
        switch (overlay)
        {
            case MKPolygon polygon:
                var prenderer = new MKPolygonRenderer(polygon)
                {
                    FillColor = UIColor.Red,
                    StrokeColor = UIColor.Blue,
                    Alpha = 0.4f,
                    LineWidth = 9
                };
                return prenderer;
            default:
                throw new Exception($"Not supported: {overlay.GetType()}");
        }
    }
}
Instantiate the delegate and assign it to your map:
mapDelegate = new MyMapDelegate();
map.Delegate = mapDelegate;
Note: Store the instance of your MyMapDelegate in a class-level variable, as you do not want it to get GC'd.
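For illustration, a minimal sketch of what that looks like in a view controller (the controller name and map setup here are assumptions, not from the original answer):
using MapKit;
using UIKit;

public class MapViewController : UIViewController
{
    // Class-level reference so the delegate is not garbage collected
    // while the map is still using it.
    MyMapDelegate mapDelegate;
    MKMapView map;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        map = new MKMapView(View.Bounds);
        View.AddSubview(map);

        mapDelegate = new MyMapDelegate();
        map.Delegate = mapDelegate;
    }
}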
Update:
Displaying an overlay on an MKMapView involves two steps:
1. Calling `AddOverlay` and `AddOverlays`
First you add overlays to the map that conform to IMKOverlay. There are basic built-in types such as MKCircle, MKPolygon, etc... but you can also design your own overlays; i.e. overlays that define the location of severe weather (lightning, storm clouds, tornados, etc..). These MKOverlays describe the geo-location of the item but not how to draw it.
2. Responding to `OverlayRenderer` requests
When the display area of the map intersects with one of the overlays, the map needs to draw it on the screen. The map's delegate (your MKMapViewDelegate subclass) is called to supply an MKOverlayRenderer that defines the drawing routines used to paint the overlay on the map.
This drawing involves converting the geo-coordinates of the overlay to local display coordinates (helper methods are available) using Core Graphics routines (UIKit can be used with some limitations). There are basic built-in renderers, MKCircleRenderer, MKPolygonRenderer, etc., that can be used, or you can write your own MKOverlayRenderer subclass.
You could supply a custom way to render an MKCircle overlay, maybe a target-style red/white multi-ringed bullseye, instead of the way the default circle renderer draws it, or custom renderers that draw severe storm symbols within the bounds of an MKPolygon to match your custom severe storm overlays.
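As a rough sketch of that last idea (not part of the original answer; the renderer name and drawing code are made up), a custom MKOverlayRenderer subclass converts the overlay's map geometry into the renderer's drawing space and paints it with Core Graphics:
using CoreGraphics;
using MapKit;
using UIKit;

// Hypothetical custom renderer: fills the overlay's bounding rect with a
// semi-transparent red ellipse instead of the default circle rendering.
public class BullseyeRenderer : MKOverlayRenderer
{
    public BullseyeRenderer(IMKOverlay overlay) : base(overlay) { }

    public override void DrawMapRect(MKMapRect mapRect, nfloat zoomScale, CGContext context)
    {
        // Convert map coordinates to this renderer's drawing coordinates.
        CGRect rect = RectForMapRect(Overlay.BoundingMapRect);
        context.SetFillColor(UIColor.Red.ColorWithAlpha(0.4f).CGColor);
        context.FillEllipseInRect(rect);
    }
}
A renderer like this would then be returned from OverlayRenderer for the matching overlay type in place of the built-in renderer.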
My Example code:
Since you are using MKPolygon to build your overlays, you can use the MKPolygonRenderer to display them. In my example, I provide a pattern-matching switch (C# 7) that returns a semi-transparent red/blue MKPolygonRenderer for every MKPolygon that you added to the map (if you added an overlay that is not MKPolygon-based, it will throw an exception).
I was also stuck on this issue, and I have found a way to create a subclass of MKPolygon.
I have checked it with my example and it works like a charm, but I am not sure whether Apple may reject my app or not.
public class CvPolyon : MKPolygon
{
    public CustomObject BoundaryOption { get; }

    public CvPolyon(MKPolygon polygon, CustomObject boundaryOption)
        : base(polygon.Handle)
    {
        BoundaryOption = boundaryOption;
    }
}
We can add a polygon to the map like this:
var polygon = MKPolygon.FromCoordinates(coordinates);
var overlay = new CvPolyon(polygon, new CustomObject());
mapView.AddOverlay(overlay);
We can recognize our polygon in the class that extends MKMapViewDelegate like this:
public override MKOverlayRenderer OverlayRenderer(MKMapView mapView, IMKOverlay overlay)
{
    if (overlay is CvPolyon polygon)
    {
        var polygonRenderer = new MKPolygonRenderer(polygon)
        {
            FillColor = polygon.BoundaryOption.AreaColor,
            StrokeColor = polygon.BoundaryOption.LineColor,
            Alpha = polygon.BoundaryOption.Alpha,
            LineWidth = polygon.BoundaryOption.LineWidth
        };
        if (polygon.BoundaryOption.IsDashedLine)
            polygonRenderer.LineDashPattern = new[] { new NSNumber(2), new NSNumber(5) };
        return polygonRenderer;
    }
    return mapView.RendererForOverlay(overlay);
}

Get image coordinates from mouse cursor position on screen (WPF Image control)

I was looking for a solution to transparently add panning & zooming capability to a WPF Image control and I have found the solution https://stackoverflow.com/a/6782715/584180, developed by Wiesław Šoltés and Konrad Viltersten, which is outstanding.
Now I would like to add a 'mouse click' event to the control so that I can get the coordinates of the clicked point in the original image's coordinate system and use them to retrieve the pixel color.
I understand there will be some rounding, and if the image is zoomed out the color will not correspond to the actual one displayed on screen. I also understand that the user may click outside the image borders; in that case I expect a null Point or negative coords to be returned.
I am not an expert on the C# way of doing transforms, and at the moment I am stuck with this (to be added inside the ZoomBorder.cs class):
public Point? GetImageCoordsAt(MouseButtonEventArgs e)
{
    if (child != null)
    {
        var tt = GetTranslateTransform(child);
        var mousePos = e.GetPosition(this);
        var transformOrigin = new Point(tt.X, tt.Y);
        return mousePos; // Wrong: how do I transform this?
    }
    return null;
}
For future reference, what I was trying to achieve is this:
public Point GetImageCoordsAt(MouseButtonEventArgs e)
{
    if (child != null && child.IsMouseOver)
    {
        var controlSpacePosition = e.GetPosition(child);
        var imageControl = this.Child as Image;
        var mainViewModel = ((MainViewModel)base.DataContext);
        if (imageControl != null && imageControl.Source != null)
        {
            // Convert from control space to image space
            var x = Math.Floor(controlSpacePosition.X * imageControl.Source.Width / imageControl.ActualWidth);
            var y = Math.Floor(controlSpacePosition.Y * imageControl.Source.Height / imageControl.ActualHeight);
            return new Point(x, y);
        }
    }
    return new Point(-1, -1);
}
This returns the coordinates of the mouse pointer in the original image's coordinate system.
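As a follow-up (a sketch of mine, not part of the original post), those image-space coordinates can then be used to read the pixel color from the underlying BitmapSource, for example with CopyPixels:
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class PixelSampler
{
    // Reads the color of a single pixel at image coordinates (x, y).
    // Assumes the Image.Source is a BitmapSource and (x, y) lies inside the image.
    public static Color GetPixelColor(BitmapSource source, int x, int y)
    {
        // Convert to a known format so the buffer layout is predictable (B, G, R, A).
        var converted = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
        byte[] pixel = new byte[4];
        converted.CopyPixels(new Int32Rect(x, y, 1, 1), pixel, 4, 0);
        return Color.FromArgb(pixel[3], pixel[2], pixel[1], pixel[0]);
    }
}
Note that Source.Width and Source.Height are measured in device-independent units; if the bitmap's DPI differs from 96, scaling by PixelWidth and PixelHeight instead may give more accurate pixel indices.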
As mm8 suggests, you can get the location you want using e.GetPosition(child); there's no need to perform any transformations. For testing purposes I've overridden the reset behaviour. Using the code from the link you provided, change
void child_PreviewMouseRightButtonDown(object sender, MouseButtonEventArgs e)
{
    this.Reset();
}
to
public Point GetImageCoordsAt(MouseButtonEventArgs e)
{
    if (child != null && child.IsMouseOver)
    {
        return e.GetPosition(child);
    }
    return new Point(-1, -1);
}

void child_PreviewMouseRightButtonDown(object sender, MouseButtonEventArgs e)
{
    MessageBox.Show(GetImageCoordsAt(e).ToString());
}
If you right-click at the same location in the image, you'll get (approximately) the same coordinates, regardless of the pan and zoom.

Get Click event on elements inside canvas

I am working on a Unity project which has a 3D sphere (rotatable and zoomable). Inside that sphere there are 121 quads (tiles). The sphere is visible like Google Earth and sits inside a Canvas. I want to get click events on the quads so that I can know which quad the user has clicked, and the app can perform an action accordingly.
Is there any way to do this? I have heard that a Canvas is just like a bitmap image.
If you are able to get the particular quad details (given their (x, y) coordinates), you can achieve it by attaching a click event to the canvas.
var elem = document.getElementById("myCanvas");
var elemLeft = elem.offsetLeft + elem.clientLeft;
var elemTop = elem.offsetTop + elem.clientTop;
// quadElements is assumed to be defined elsewhere: an array of objects with
// top, left, width and height properties describing each quad's bounds.

// Add event listener for click events.
elem.addEventListener('click', function (event) {
    var x = event.pageX - elemLeft;
    var y = event.pageY - elemTop;

    // Collision detection between clicked offset and element.
    quadElements.forEach(function (element) {
        if (y > element.top && y < element.top + element.height
            && x > element.left && x < element.left + element.width) {
            // do your stuff here
        }
    });
}, false);
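The snippet above targets an HTML canvas rather than Unity. For the Unity scenario in the question, a common approach (a sketch, not from the original answers; it assumes the quads have colliders) is to raycast from the camera through the clicked screen point and see which quad was hit:
using UnityEngine;

// Attach to any active GameObject, e.g. the main camera.
public class QuadClickDetector : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the clicked screen point.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // hit.collider belongs to the quad (tile) that was clicked.
                Debug.Log("Clicked quad: " + hit.collider.gameObject.name);
            }
        }
    }
}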
