I am struggling to make both touch events and manipulation work properly in a WPF project. I have a ScrollViewer which contains a picture, and I would like to scroll both horizontally and vertically using swipe gestures. Additionally, I would like to zoom in/out centered on the pinch gesture. The code below achieves what I want, but it has the following problems:
Sometimes the scrolling is laggy;
The scrolling does not work on the first try, only when attempting the same gesture a second time;
The zoom in/out does not work on the first try, only when attempting the same gesture a second time.
I enabled IsManipulationEnabled and implemented the zoom in/out functionality. However, I was not able to combine it with the scrolling functionality (by only setting the PanningMode on the ScrollViewer). Therefore, I created a custom control which inherits from the Image control and overrode the OnTouchDown and OnTouchUp handlers. Basically, what I am doing in these overridden handlers is counting the number of touch points on the screen and enabling/disabling manipulation. I also tried setting the PanningMode on the ScrollViewer, but it did not do the trick.
Below is the XAML:
<Grid>
<ScrollViewer
x:Name="ScrollViewerParent"
HorizontalScrollBarVisibility="Auto"
VerticalScrollBarVisibility="Auto"
PanningMode="Both">
<local:CustomImage
x:Name="MainImage"
Source="{Binding Source={x:Static local:Constants.ImagePath}}"
IsManipulationEnabled="True"
ManipulationStarting="MainImage_ManipulationStarting"
ManipulationDelta="MainImage_ManipulationDelta">
</local:CustomImage>
</ScrollViewer>
</Grid>
Here is the code-behind:
public partial class MainWindow : Window
{
private void MainImage_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
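// Use the ScrollViewer as the manipulation container so manipulation deltas and the
// pinch origin are reported in its coordinate space.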
e.ManipulationContainer = ScrollViewerParent;
e.Handled = true;
}
private void MainImage_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
var matrix = MainImage.LayoutTransform.Value;
Point? centerOfPinch = (e.ManipulationContainer as FrameworkElement)?.TranslatePoint(e.ManipulationOrigin, ScrollViewerParent);
if (centerOfPinch == null)
{
return;
}
var deltaManipulation = e.DeltaManipulation;
matrix.ScaleAt(deltaManipulation.Scale.X, deltaManipulation.Scale.Y, centerOfPinch.Value.X, centerOfPinch.Value.Y);
MainImage.LayoutTransform = new MatrixTransform(matrix);
Point? originOfManipulation = (e.ManipulationContainer as FrameworkElement)?.TranslatePoint(e.ManipulationOrigin, MainImage);
double scrollViewerOffsetX = ScrollViewerParent.HorizontalOffset;
double scrollViewerOffsetY = ScrollViewerParent.VerticalOffset;
double pointMovedOnXOffset = originOfManipulation.Value.X - originOfManipulation.Value.X * deltaManipulation.Scale.X;
double pointMovedOnYOffset = originOfManipulation.Value.Y - originOfManipulation.Value.Y * deltaManipulation.Scale.Y;
double multiplicatorX = ScrollViewerParent.ExtentWidth / MainImage.ActualWidth;
double multiplicatorY = ScrollViewerParent.ExtentHeight / MainImage.ActualHeight;
ScrollViewerParent.ScrollToHorizontalOffset(scrollViewerOffsetX - pointMovedOnXOffset * multiplicatorX);
ScrollViewerParent.ScrollToVerticalOffset(scrollViewerOffsetY - pointMovedOnYOffset * multiplicatorY);
e.Handled = true;
}
}
The XAML for the custom control:
<Style TargetType="{x:Type local:CustomImage}" />
Here is where I override the OnTouchDown and OnTouchUp event handlers:
public class CustomImage : Image
{
private volatile int nrOfTouchPoints;
private volatile bool isManipulationReset;
private object mutex = new object();
static CustomImage()
{
DefaultStyleKeyProperty.OverrideMetadata(typeof(CustomImage), new FrameworkPropertyMetadata(typeof(CustomImage)));
}
protected override void OnTouchDown(TouchEventArgs e)
{
lock (mutex)
{
nrOfTouchPoints++;
if (nrOfTouchPoints >= 2)
{
IsManipulationEnabled = true;
isManipulationReset = false;
}
}
base.OnTouchDown(e);
}
protected override void OnTouchUp(TouchEventArgs e)
{
lock (mutex)
{
if (!isManipulationReset)
{
IsManipulationEnabled = false;
isManipulationReset = true;
nrOfTouchPoints = 0;
}
}
base.OnTouchUp(e);
}
}
What I expect from this code is the following:
When using one finger to swipe horizontally or vertically across the touchscreen, the image should be scrolled accordingly;
When I use a pinch gesture on the touch screen, the image should be zoomed in/out in the center of the pinch.
Fortunately, I managed to find a solution that works well, so I am posting the answer here in case someone is working on a similar problem and needs some help.
What I did:
Got rid of the custom control as it was not necessary;
Created a field which counts the number of touch points;
Implemented the TouchDown event handler, which increases the number of touch points by 1 (this method is called each time there is a touch down gesture on the device);
Implemented the TouchUp event handler, which decreases the number of touch points by 1 (this method is called each time there is a touch up gesture on the device);
In the Image_ManipulationDelta event handler, I check the number of touch points:
if the number of touch points is < 2, the translation delta is applied to the current scrollbar offsets, thus achieving scrolling;
otherwise, the center of the pinch is calculated and a scale gesture is applied.
Here is the full XAML:
<Grid
x:Name="GridParent">
<ScrollViewer
x:Name="ScrollViewerParent"
HorizontalScrollBarVisibility="Auto"
VerticalScrollBarVisibility="Auto"
PanningMode="Both">
<Image
x:Name="MainImage"
Source="{Binding Source={x:Static local:Constants.ImagePath}}"
IsManipulationEnabled="True"
TouchDown="MainImage_TouchDown"
TouchUp="MainImage_TouchUp"
ManipulationDelta="Image_ManipulationDelta"
ManipulationStarting="Image_ManipulationStarting"/>
</ScrollViewer>
</Grid>
Here is the entire code discussed above:
public partial class MainWindow : Window
{
private volatile int nrOfTouchPoints;
private object mutex = new object();
public MainWindow()
{
InitializeComponent();
DataContext = this;
}
private void Image_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
e.ManipulationContainer = ScrollViewerParent;
e.Handled = true;
}
private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
int nrOfPoints = 0;
lock (mutex)
{
nrOfPoints = nrOfTouchPoints;
}
if (nrOfPoints >= 2)
{
DataLogger.LogActionDescription($"Executed {nameof(Image_ManipulationDelta)}");
var matrix = MainImage.LayoutTransform.Value;
Point? centerOfPinch = (e.ManipulationContainer as FrameworkElement)?.TranslatePoint(e.ManipulationOrigin, ScrollViewerParent);
if (centerOfPinch == null)
{
return;
}
var deltaManipulation = e.DeltaManipulation;
matrix.ScaleAt(deltaManipulation.Scale.X, deltaManipulation.Scale.Y, centerOfPinch.Value.X, centerOfPinch.Value.Y);
MainImage.LayoutTransform = new MatrixTransform(matrix);
Point? originOfManipulation = (e.ManipulationContainer as FrameworkElement)?.TranslatePoint(e.ManipulationOrigin, MainImage);
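// Keep the pinched point stationary: the origin shifts by (origin - origin * scale)
// in image coordinates; convert that to extent units and correct the scroll offsets.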
double scrollViewerOffsetX = ScrollViewerParent.HorizontalOffset;
double scrollViewerOffsetY = ScrollViewerParent.VerticalOffset;
double pointMovedOnXOffset = originOfManipulation.Value.X - originOfManipulation.Value.X * deltaManipulation.Scale.X;
double pointMovedOnYOffset = originOfManipulation.Value.Y - originOfManipulation.Value.Y * deltaManipulation.Scale.Y;
double multiplicatorX = ScrollViewerParent.ExtentWidth / MainImage.ActualWidth;
double multiplicatorY = ScrollViewerParent.ExtentHeight / MainImage.ActualHeight;
ScrollViewerParent.ScrollToHorizontalOffset(scrollViewerOffsetX - pointMovedOnXOffset * multiplicatorX);
ScrollViewerParent.ScrollToVerticalOffset(scrollViewerOffsetY - pointMovedOnYOffset * multiplicatorY);
e.Handled = true;
}
else
{
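// Single touch point: treat the manipulation as a pan and scroll by the translation delta.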
ScrollViewerParent.ScrollToHorizontalOffset(ScrollViewerParent.HorizontalOffset - e.DeltaManipulation.Translation.X);
ScrollViewerParent.ScrollToVerticalOffset(ScrollViewerParent.VerticalOffset - e.DeltaManipulation.Translation.Y);
}
}
private void MainImage_TouchDown(object sender, TouchEventArgs e)
{
lock (mutex)
{
nrOfTouchPoints++;
}
}
private void MainImage_TouchUp(object sender, TouchEventArgs e)
{
lock (mutex)
{
nrOfTouchPoints--;
}
}
}
Related
I have the following code:
<Window x:Class="Demo.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
Title="MainWindow" Height="450" Width="800">
<Canvas Name="Canvas_Main" />
</Window>
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
Loaded += MainWindow_Loaded;
}
private void MainWindow_Loaded(Object sender, RoutedEventArgs e)
{
Rectangle lastRectangle = null;
Random random = new Random(0);
for (Int32 counter = 0; counter < 5; counter++)
{
Rectangle rectangle = new Rectangle();
rectangle.Fill = Brushes.Blue;
rectangle.Width = random.Next(100, 200);
rectangle.Height = counter * 100;
Canvas_Main.Children.Add(rectangle);
if (lastRectangle == null) {
Canvas.SetLeft(rectangle, 0);
Canvas.SetTop(rectangle, 0);
}
else
{
Canvas.SetLeft(rectangle, lastRectangle.ActualWidth);
Canvas.SetTop(rectangle, 0);
}
lastRectangle = rectangle;
}
}
}
This isn't working as expected (laying each rectangle diagonally next to each other), as lastRectangle.ActualWidth is 0. As I understand things from this answer, it is because lastRectangle has not been measured and arranged.
I am curious, at what point would the measuring and arranging be done, if not when added to a container that is already visible and loaded?
The FrameworkElement.Loaded event of an element is raised after the measure and arrange layout passes, but before the element is rendered.
A complete layout pass is initiated when UIElement.InvalidateMeasure (asynchronous layout pass) or UIElement.UpdateLayout (synchronous layout pass) is invoked on the element.
In your scenario the Window.Loaded event handler is invoked, which means that the Window and the Canvas are both loaded (but not yet rendered).
Now you start to add new UIElement elements to the Canvas.
Canvas.Children.Add should invoke the InvalidateMeasure method. Because InvalidateMeasure triggers an asynchronous layout pass, the Canvas and therefore the current child element will be enqueued into the layout queue, and the current context continues execution (adding more rectangles).
Because there is already a pending layout pass due to the freshly added element, you should avoid calling UIElement.Measure manually (these are recursive calls and quite expensive in terms of performance).
Once the current context has completed, the elements in the queue that are waiting for their layout pass and final rendering will be handled, and MeasureOverride and ArrangeOverride are invoked recursively on those elements (the Canvas and its children).
As a result, UIElement.RenderSize can be calculated by the layout system.
At this moment, the new FrameworkElement.ActualWidth will be available.
This is the moment the FrameworkElement.Loaded event of the added elements (the rectangles) is finally raised.
To solve your problem you can either use Rectangle.Width instead (a sketch of that variant follows after the code below),
or wait for each Rectangle.Loaded event before adding the next:
private int ShapeCount { get; set; }
private const int MaxShapes = 5;
private Point ShapePosition { get; set; }
private void MainWindow_Loaded(Object sender, RoutedEventArgs e)
{
this.ShapePosition = new Point();
AddRectangle(this.ShapePosition);
}
private void AddRectangle(Point position)
{
Random random = new Random();
Rectangle rectangle = new Rectangle();
rectangle.Fill = Brushes.Blue;
rectangle.Width = random.Next(100, 200);
rectangle.Height = ++this.ShapeCount * 100;
Canvas_Main.Children.Add(rectangle);
Canvas.SetLeft(rectangle, position.X);
Canvas.SetTop(rectangle, position.Y);
rectangle.Loaded += OnRectangleLoaded;
}
private void OnRectangleLoaded(object sender, RoutedEventArgs e)
{
var rectangle = sender as Rectangle;
rectangle.Loaded -= OnRectangleLoaded;
if (this.ShapeCount == MainWindow.MaxShapes)
{
return;
}
// this.ShapePosition is struct => modify copy
Point position = this.ShapePosition;
position.Offset(rectangle.ActualWidth, 0);
this.ShapePosition = position;
AddRectangle(this.ShapePosition);
}
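For the first option (using Rectangle.Width instead of ActualWidth), a minimal sketch of the original loop could look like this; since Width is set explicitly, it is readable immediately, without waiting for a layout pass:
private void MainWindow_Loaded(Object sender, RoutedEventArgs e)
{
    double nextLeft = 0;
    Random random = new Random(0);
    for (Int32 counter = 0; counter < 5; counter++)
    {
        Rectangle rectangle = new Rectangle();
        rectangle.Fill = Brushes.Blue;
        rectangle.Width = random.Next(100, 200);
        rectangle.Height = counter * 100;
        Canvas_Main.Children.Add(rectangle);

        // The explicit Width is available right away, unlike ActualWidth.
        Canvas.SetLeft(rectangle, nextLeft);
        Canvas.SetTop(rectangle, 0);
        nextLeft += rectangle.Width;
    }
}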
Measuring and arranging have already been completed for the window. But you are now creating new child controls, which will ordinarily not be measured and arranged until the window's next layout pass.
You can force this to happen immediately by calling the Measure method of each rectangle at the end of the loop.
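A hedged sketch of that idea against the original else branch; note that Measure fills in DesiredSize immediately, while ActualWidth itself only becomes valid after an arrange pass (for example after Canvas_Main.UpdateLayout()):
else
{
    // Force an immediate measure of the previously added rectangle instead of
    // waiting for the next asynchronous layout pass.
    lastRectangle.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));

    // DesiredSize is valid right after Measure; reading ActualWidth would additionally
    // require an arrange pass (e.g. Canvas_Main.UpdateLayout()).
    Canvas.SetLeft(rectangle, lastRectangle.DesiredSize.Width);
    Canvas.SetTop(rectangle, 0);
}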
I'm attempting to emulate the behavior of Windows 10's Virtual Touchpad, i.e. when a user touches a control inside the app and moves their finger around, the system cursor mirrors their movement. However, I notice that there seems to be a conflict between the system processing touch and mouse input simultaneously, where the touch input wants to hide the cursor and the mouse input wants to show it.
I started by building a boilerplate UWP App in VS-2019, here's my MainPage.xaml, where the only thing I'm changing is giving my Grid the Name="Touchpad" property so I can track pointer events within it:
<Page
x:Class="Playground.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Playground"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid Background="Black" Name="Touchpad">
</Grid>
</Page>
I'm using Windows.UI.Input.Preview.Injection to move the mouse cursor. Because this is a Restricted Capability I made changes to my project as defined here: https://learn.microsoft.com/en-us/uwp/api/windows.ui.input.preview.injection#remarks
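For reference, the remarks page asks you to declare the input injection restricted capability in Package.appxmanifest; as far as I remember, the declaration looks roughly like this (the capability name and namespace prefix are taken from that page, so double-check them there):
<Package
    xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
    IgnorableNamespaces="rescap">
  <Capabilities>
    <rescap:Capability Name="inputInjectionBrokered" />
  </Capabilities>
</Package>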
public sealed partial class MainPage : Page
{
private Point lastPosition;
private InputInjector inputInjector = InputInjector.TryCreate();
public MainPage()
{
this.InitializeComponent();
ApplicationView.PreferredLaunchViewSize = new Size(480, 480);
ApplicationView.PreferredLaunchWindowingMode = ApplicationViewWindowingMode.PreferredLaunchViewSize;
Touchpad.PointerPressed += new PointerEventHandler(Touchpad_PointerPressed);
Touchpad.PointerReleased += new PointerEventHandler(Touchpad_PointerReleased);
Touchpad.PointerMoved += new PointerEventHandler(Touchpad_PointerMoved);
}
private void Touchpad_PointerMoved(object sender, PointerRoutedEventArgs e)
{
e.Handled = true;
PointerPoint pointer = e.GetCurrentPoint(Touchpad);
Point currentPosition = pointer.Position;
if (pointer.PointerDevice.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch &&
lastPosition != currentPosition &&
pointer.Properties.IsPrimary == true)
{
Point delta = new Point(currentPosition.X - lastPosition.X, currentPosition.Y - lastPosition.Y);
InjectedInputMouseInfo mouseInfo = new InjectedInputMouseInfo();
mouseInfo.MouseOptions = InjectedInputMouseOptions.Move;
mouseInfo.DeltaX = (int)delta.X;
mouseInfo.DeltaY = (int)delta.Y;
if (inputInjector != null)
{
inputInjector.InjectMouseInput(new[] { mouseInfo });
}
lastPosition = currentPosition;
}
}
private void Touchpad_PointerReleased(object sender, PointerRoutedEventArgs e)
{
e.Handled = true;
lastPosition = new Point(0,0);
}
private void Touchpad_PointerPressed(object sender, PointerRoutedEventArgs e)
{
e.Handled = true;
PointerPoint pointer = e.GetCurrentPoint(Touchpad);
if (pointer.PointerDevice.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch &&
pointer.Properties.IsPrimary == true)
{
lastPosition = pointer.Position;
Window.Current.CoreWindow.PointerCursor = null;
}
}
}
I noticed that while moving my finger around, there seem to be two cursors (or one that's bouncing back and forth really fast): one in the app itself, and one on the system/desktop. I added Window.Current.CoreWindow.PointerCursor = null;, which successfully hides the cursor inside the app, but the one that's being driven by the injection is blinking/flickering.
I feel like I'm missing something, as the feedback on the Virtual Touchpad in Win10 is buttery smooth and is also a UWP app, as far as I know...
I have a canvas with a zooming function. There are a lot of elements inside it, so for selection I use a selection box which I create dynamically when the selection button is clicked: on clicking the button, I add the rectangle to the canvas, and on clicking it again, I remove it.
I have the following xaml code:
<Viewbox x:Name="vbCanvas">
<Grid x:Name="theGrid"
MouseDown="Grid_MouseDown"
MouseUp="Grid_MouseUp"
MouseMove="Grid_MouseMove"
Background="Transparent">
<Canvas Name="canvasWaSNA" Margin="0,10,10,10" Height="720" Width="1280">
</Canvas>
</Grid>
</Viewbox>
The mouse events of theGrid draw the rectangle at runtime on the canvas. The code for those events is:
bool mouseDown = false;
Point mouseDownPos;
Point mouseUpPos;
private void Grid_MouseDown(object sender, MouseButtonEventArgs e)
{
mouseDown = true;
mouseDownPos = e.GetPosition(theGrid);
theGrid.CaptureMouse();
// Initial placement of the drag selection box.
Canvas.SetLeft(sBox, mouseDownPos.X);
Canvas.SetTop(sBox, mouseDownPos.Y);
sBox.Width = 0;
sBox.Height = 0;
// Make the drag selection box visible.
sBox.Visibility = Visibility.Visible;
}
private void Grid_MouseUp(object sender, MouseButtonEventArgs e)
{
// Release the mouse capture and stop tracking it.
mouseDown = false;
mouseUpPos = e.GetPosition(theGrid);
theGrid.ReleaseMouseCapture();
// Show the drag selection box.
sBox.Visibility = Visibility.Visible;
MessageBox.Show(mouseDownPos.ToString() + " " + mouseUpPos.ToString());
}
private void Grid_MouseMove(object sender, MouseEventArgs e)
{
if (mouseDown)
{
// When the mouse is held down, reposition the drag selection box.
Point mousePos = e.GetPosition(theGrid);
if (mouseDownPos.X < mousePos.X)
{
Canvas.SetLeft(sBox, mouseDownPos.X);
sBox.Width = mousePos.X - mouseDownPos.X;
}
else
{
Canvas.SetLeft(sBox, mousePos.X);
sBox.Width = mouseDownPos.X - mousePos.X;
}
if (mouseDownPos.Y < mousePos.Y)
{
Canvas.SetTop(sBox, mouseDownPos.Y);
sBox.Height = mousePos.Y - mouseDownPos.Y;
}
else
{
Canvas.SetTop(sBox, mousePos.Y);
sBox.Height = mouseDownPos.Y - mousePos.Y;
}
}
}
To create a Rectangle at runtime, I have to click a button. The event of that button is as follows:
private void select_Click_1(object sender, RoutedEventArgs e)
{
if (!canvasWaSNA.Children.Contains(sBox))
{
sBox.Name = "selectionBox";
sBox.StrokeThickness = 1.5 / zoomfactor;
sBox.StrokeDashArray = new DoubleCollection { 1, 2 };
sBox.Visibility = System.Windows.Visibility.Collapsed;
sBox.Stroke = Brushes.Gray;
canvasWaSNA.Children.Add(sBox);
}
else
{
sBox.Visibility = System.Windows.Visibility.Collapsed;
canvasWaSNA.Children.Remove(sBox);
}
}
I am using the following code to zoom into the canvas:
double zoomfactor = 1.0;
void window_MouseWheel(object sender, MouseWheelEventArgs e)
{
Point p = e.MouseDevice.GetPosition(canvasWaSNA); //gets the location of the canvas at which the mouse is pointed
Matrix m = canvasWaSNA.RenderTransform.Value;
if (e.Delta > 0)
{ //the amount of wheel of mouse changed. e.Delta holds int value.. +ve for uproll and -ve for downroll
m.ScaleAtPrepend(1.1, 1.1, p.X, p.Y);
zoomfactor *= 1.1;
}
else
{
m.ScaleAtPrepend(1 / 1.1, 1 / 1.1, p.X, p.Y);
zoomfactor /= 1.1;
}
canvasWaSNA.RenderTransform = new MatrixTransform(m);
}
When my canvas is at its original size, the rectangle is drawn perfectly, but as I zoom in or out, the rectangle is drawn abnormally: it starts to draw from other points. What might be the problem? Please help.
Well, I was not supposed to capture the mouse position with respect to theGrid, as I had to create the rectangle with respect to the canvas. So I have to get the position with e.GetPosition(canvasWaSNA), and the intended result was achieved: it captures the mouse position on the canvas. Now the rectangle is drawn perfectly even when zoomed in or out.
Also, I improved the StrokeThickness of the drawn rectangle by scaling it with the zoomfactor of the canvas.
private void Grid_MouseDown(object sender, MouseButtonEventArgs e)
{
mouseDown = true;
mouseDownPos = e.GetPosition(canvasWaSNA);
theGrid.CaptureMouse();
sBox.StrokeThickness = 1.5 / zoomfactor;
// Initial placement of the drag selection box.
Canvas.SetLeft(sBox, mouseDownPos.X);
Canvas.SetTop(sBox, mouseDownPos.Y);
sBox.Width = 0;
sBox.Height = 0;
// Make the drag selection box visible.
sBox.Visibility = Visibility.Visible;
}
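The same coordinate change applies to the move handler; here is a sketch of Grid_MouseMove adjusted to canvas coordinates (assuming the same sBox and mouseDownPos fields as above, with the original if/else collapsed via Math.Min/Math.Abs):
private void Grid_MouseMove(object sender, MouseEventArgs e)
{
    if (mouseDown)
    {
        // Track the pointer in canvas coordinates so the box is unaffected by the zoom transform.
        Point mousePos = e.GetPosition(canvasWaSNA);

        Canvas.SetLeft(sBox, Math.Min(mousePos.X, mouseDownPos.X));
        Canvas.SetTop(sBox, Math.Min(mousePos.Y, mouseDownPos.Y));
        sBox.Width = Math.Abs(mousePos.X - mouseDownPos.X);
        sBox.Height = Math.Abs(mousePos.Y - mouseDownPos.Y);
    }
}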
Being new to WPF and MVVM, I searched everywhere to find a good answer to my problem. I'm creating a cropping application, and I'm trying to migrate the code-behind into a view model. I was able to bind my mouse button events using Blend's interaction triggers; the code is below:
<Grid x:Name="GridLoadedImage" HorizontalAlignment="Left" VerticalAlignment="Top">
<i:Interaction.Triggers>
<i:EventTrigger EventName="MouseLeftButtonDown">
<i:InvokeCommandAction Command="{Binding MouseLeftButtonDownCommand}"/>
</i:EventTrigger>
<i:EventTrigger EventName="MouseLeftButtonUp">
<i:InvokeCommandAction Command="{Binding MouseLeftButtonUpCommand}"/>
</i:EventTrigger>
<i:EventTrigger EventName="MouseMove">
<i:InvokeCommandAction Command="{Binding MouseMoveCommand}"/>
</i:EventTrigger>
</i:Interaction.Triggers>
<Grid.LayoutTransform>
<ScaleTransform ScaleX="{Binding ElementName=slider1, Path=Value}" ScaleY="{Binding ElementName=slider1, Path=Value}"/>
</Grid.LayoutTransform>
<Image x:Name="LoadedImage" Margin="10" Source="{Binding ImagePath}"/>
<Canvas x:Name="BackPanel" Margin="10">
<Rectangle x:Name="selectionRectangle" Stroke="LightBlue" Fill="#220000FF" Visibility="Collapsed"/>
</Canvas>
</Grid>
Now my dilemma is how to migrate the actual code I used in my code-behind, which is shown below:
private void LoadedImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
if (isDragging == false)
{
anchorPoint.X = e.GetPosition(BackPanel).X;
anchorPoint.Y = e.GetPosition(BackPanel).Y;
Canvas.SetZIndex(selectionRectangle, BackPanel.Children.Count);
isDragging = true;
BackPanel.Cursor = Cursors.Cross;
}
}
private void LoadedImage_MouseMove(object sender, MouseEventArgs e)
{
if (isDragging)
{
double x = e.GetPosition(BackPanel).X;
double y = e.GetPosition(BackPanel).Y;
selectionRectangle.SetValue(Canvas.LeftProperty, Math.Min(x, anchorPoint.X));
selectionRectangle.SetValue(Canvas.TopProperty, Math.Min(y, anchorPoint.Y));
selectionRectangle.Width = Math.Abs(x - anchorPoint.X);
selectionRectangle.Height = Math.Abs(y - anchorPoint.Y);
if (selectionRectangle.Visibility != Visibility.Visible)
selectionRectangle.Visibility = Visibility.Visible;
}
}
private void LoadedImage_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
if (isDragging)
{
isDragging = false;
if (selectionRectangle.Width > 0)
{
Crop.IsEnabled = true;
Cut.IsEnabled = true;
BackPanel.Cursor = Cursors.Arrow;
}
}
}
As you can see, I need to be able to access the x and y coordinates as well as the width and height of the rectangle (named selectionRectangle). I was thinking of creating the canvas and rectangle inside my view model, but that would go against the MVVM structure. I read that I could use attached properties, but I am not familiar with them. What is the best way of handling this with respect to the MVVM pattern? Currently I'm reading Adam Nathan's WPF 4 Unleashed, which is a great book for beginners like me, but I can't seem to find anything that relates to my problem. Thanks for any help.
I do have view model code for my mouse events:
#region MouseLeftButtonDown
private bool isDragging = false;
private Point anchorPoint = new Point();
private ICommand _mouseLeftButtonDownCommand;
public ICommand MouseLeftButtonDownCommand
{
get
{
if (_mouseLeftButtonDownCommand == null)
{
_mouseLeftButtonDownCommand = new RelayCommand(param => MouseLeftButtonDown());
}
return _mouseLeftButtonDownCommand;
}
}
public void MouseLeftButtonDown()
{
if (isDragging == false)
{
MessageBox.Show("THis is Mouse Down");
//anchorPoint.X = e.GetPosition(BackPanel).X;
//anchorPoint.Y = e.GetPosition(BackPanel).Y;
isDragging = true;
}
}
#endregion
#region MouseLeftButtonUp
private ICommand _mouseLeftButtonUpCommand;
public ICommand MouseLeftButtonUpCommand
{
get
{
if (_mouseLeftButtonUpCommand == null)
{
_mouseLeftButtonUpCommand = new RelayCommand(param => MouseLeftButtonUp((MouseButtonEventArgs)param));
}
return _mouseLeftButtonUpCommand;
}
}
public void MouseLeftButtonUp(MouseButtonEventArgs e)
{
if (isDragging)
{
MessageBox.Show(e.Source.ToString());
isDragging = false;
//if (selectionRectangle.Width > 0)
//{
// Crop.IsEnabled = true;
// Cut.IsEnabled = true;
// BackPanel.Cursor = Cursors.Arrow;
//}
}
}
#endregion
#region MouseMove
private ICommand _mouseMoveCommand;
public ICommand MouseMoveCommand
{
get
{
if (_mouseMoveCommand == null)
{
_mouseMoveCommand = new RelayCommand(param => MouseMove());
}
return _mouseMoveCommand;
}
}
public void MouseMove()
{
if (isDragging)
{
//MessageBox.Show("THis is Mouse Move");
//double x = e.GetPosition(BackPanel).X;
//double y = e.GetPosition(BackPanel).Y;
//selectionRectangle.SetValue(Canvas.LeftProperty, Math.Min(x, anchorPoint.X));
//selectionRectangle.SetValue(Canvas.TopProperty, Math.Min(y, anchorPoint.Y));
//selectionRectangle.Width = Math.Abs(x - anchorPoint.X);
//selectionRectangle.Height = Math.Abs(y - anchorPoint.Y);
//if (selectionRectangle.Visibility != Visibility.Visible)
// selectionRectangle.Visibility = Visibility.Visible;
}
}
#endregion
I just commented out the actual code and replaced it with message boxes to test whether my triggers work, which they do. These three functions, once I figure out how to make them work, will draw the cropping rectangle on top of the image being cropped. I also have a crop button that will be enabled once the rectangle is completed, and this button will be bound to another function that will do the actual cropping.
That's simpler than you may have thought.
What you are building is a UserControl with user-defined behaviour. So rather than putting that XAML into your Page/View, you implement your own control which derives from UserControl and put the code there, just as you had it in your code-behind.
Since you are making a custom control, you don't have to follow MVVM inside it; in fact, the MVVM pattern is discouraged for user controls. In your custom control you may define a dependency property which holds the selection, for example an object of a type like "SelectionRect" (the example below simply uses Rect; keep in mind that Rect is a struct, which doesn't always play well with data binding, since a new copy is created each time it changes).
public class CropControl : UserControl
{
public Rect Selection
{
get { return (Rect)GetValue(SelectionProperty); }
set { SetValue(SelectionProperty, value); }
}
public static readonly DependencyProperty SelectionProperty =
DependencyProperty.Register("Selection", typeof(Rect), typeof(CropControl), new PropertyMetadata(default(Rect), OnSelectionChanged));
// this is used, to react on changes from ViewModel. If you assign a
// new Rect in your ViewModel you will have to redraw your Rect here
private static void OnSelectionChanged(System.Windows.DependencyObject d, System.Windows.DependencyPropertyChangedEventArgs e)
{
Rect newRect = (Rect)e.NewValue;
CropControl control = d as CropControl;
if (control == null)
return;
Rectangle selectionRectangle = control.selectionRectangle;
selectionRectangle.SetValue(Canvas.LeftProperty, newRect.X);
selectionRectangle.SetValue(Canvas.TopProperty, newRect.Y);
selectionRectangle.Width = newRect.Width;
selectionRectangle.Height = newRect.Height;
}
private void LoadedImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
if (isDragging == false)
{
anchorPoint.X = e.GetPosition(BackPanel).X;
anchorPoint.Y = e.GetPosition(BackPanel).Y;
Canvas.SetZIndex(selectionRectangle, BackPanel.Children.Count);
isDragging = true;
BackPanel.Cursor = Cursors.Cross;
}
}
private void LoadedImage_MouseMove(object sender, MouseEventArgs e)
{
if (isDragging)
{
double x = e.GetPosition(BackPanel).X;
double y = e.GetPosition(BackPanel).Y;
selectionRectangle.SetValue(Canvas.LeftProperty, Math.Min(x, anchorPoint.X));
selectionRectangle.SetValue(Canvas.TopProperty, Math.Min(y, anchorPoint.Y));
selectionRectangle.Width = Math.Abs(x - anchorPoint.X);
selectionRectangle.Height = Math.Abs(y - anchorPoint.Y);
if (selectionRectangle.Visibility != Visibility.Visible)
selectionRectangle.Visibility = Visibility.Visible;
}
}
private void LoadedImage_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
if (isDragging)
{
isDragging = false;
if (selectionRectangle.Width > 0)
{
Crop.IsEnabled = true;
Cut.IsEnabled = true;
BackPanel.Cursor = Cursors.Arrow;
}
// Set the Selection to the new rect, when the mouse button has been released
Selection = new Rect(
Canvas.GetLeft(selectionRectangle),
Canvas.GetTop(selectionRectangle),
selectionRectangle.Width,
selectionRectangle.Height);
}
}
}
Notice that the only changes compared to the original code-behind are the Selection dependency property (with its change callback) and setting Selection = new Rect(...) when the mouse button is released.
Then you can bind it in XAML.
<my:CropControl Selection="{Binding Selection,Mode=TwoWay}"/>
Update:
Your ViewModel would look something like
public class MyViewModel : ViewModel
{
private Rect selection;
public Rect Selection
{
get
{
return selection;
}
set
{
selection = value;
// Or whatever the name of your framework/implementation the method is called
OnPropertyChanged("Selection");
// Cause ICommands to reevaluate their CanExecute methods
CommandManager.InvalidateRequerySuggested();
}
}
private ICommand cropCommand;
public ICommand CropCommand {
get
{
if(cropCommand==null)
cropCommand = new RelayCommand(Crop, () => Selection.Width > 0); // only allow execution when Selection width > 0
return cropCommand;
}
}
public void Crop()
{
// Get a copy of the selection in case it changes during execution
Rect cropSelection = Selection;
// use it to crop your image
...
}
}
Drawing Selection = View Logic (So View)
Cropping with a Rect given by CropControl => Presentation/Business Logic (so ViewModel)
Doing so allows you to reuse your CropControl in other applications. If you put your "selectionRect" drawing code into your ViewModel (which may be possible, but leads to hard-to-read, hard-to-maintain code), then you can't reuse it in other applications, since your ViewModels are specific to your application.
Hope that helps.
MVVM means separating the View from the ViewModel. The example you give is typically view-only code.
Your example seems to be a kind of selection tool, and I deduce you want to get the selected content back, or at least the cropping coordinates. So the best approach is to turn your code into a custom control exposing a Rect DependencyProperty for the crop coordinates; in your view model, expose a Rect property holding the cropping rectangle coordinates, and bind it to the cropping control's DependencyProperty.
The view is about interacting with visual aspects. The ViewModel is about holding and working with the data used by the view.
How do I get the pinch-to-zoom x and y scaling values independent of each other for a Windows Store App? I'm currently using ManipulationDeltaRoutedEventArgs's ManipulationDelta structure, but as you can see it only offers a single scale.
// Global Transform used to change the position of the Rectangle.
private TranslateTransform dragTranslation;
private ScaleTransform scaleTransform;
// Constructor
public MainPage()
{
InitializeComponent();
// Add handler for the ManipulationDelta event
TestRectangle.ManipulationDelta += Drag_ManipulationDelta;
dragTranslation = new TranslateTransform();
scaleTransform = new ScaleTransform();
TestRectangle.RenderTransform = this.dragTranslation;
}
void Drag_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
// Move the rectangle.
dragTranslation.X += e.Delta.Translation.X;
dragTranslation.Y += e.Delta.Translation.Y;
// Scaling, but I want X and Y independent!
scaleTransform.ScaleX = e.Delta.Scale;
scaleTransform.ScaleY = e.Delta.Scale;
}
XAML:
<Rectangle Name="TestRectangle"
Width="200" Height="200" Fill="Blue"
ManipulationMode="All"/>
Code mostly taken from here.
I ended up using the approach from Handling Two, Three, Four Fingers Swipe Gestures in WinRT App to grab the coordinates of the two fingers, calculated the initial distance between them, and then scaled accordingly as the distance changed (the per-axis scale calculation is sketched after the code below).
int numActiveContacts;
Dictionary<uint, int> contacts;
List<PointF> locationsOfSortedTouches;
void myCanvas_PointerPressed(object sender, PointerRoutedEventArgs e) {
PointerPoint pt = e.GetCurrentPoint(myCanvas);
locationsOfSortedTouches.Add(new PointF((float) pt.Position.X, (float) pt.Position.Y));
touchHandler.TouchesBegan(locationsOfSortedTouches);
contacts[pt.PointerId] = numActiveContacts;
++numActiveContacts;
e.Handled = true;
}
void myCanvas_PointerMoved(object sender, PointerRoutedEventArgs e) {
var pt = e.GetCurrentPoint(myCanvas);
var ptrId = pt.PointerId;
if (contacts.ContainsKey(ptrId)) {
var ptrOrdinal = contacts[ptrId];
Windows.Foundation.Point currentContact = pt.Position;
locationsOfSortedTouches[ptrOrdinal] = new PointF((float) pt.Position.X, (float) pt.Position.Y);
//distance calculation and zoom redraw here
}
e.Handled = true;
}
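For completeness, here is a hedged sketch of the per-axis scale calculation that the comment above alludes to, using the first two tracked touches. The initialDeltaX/initialDeltaY fields and the scaleTransform are assumptions for illustration; the initial deltas would be captured in PointerPressed once the second finger is down.
// Per-axis pinch scale: compare the current finger separation with the separation
// recorded when the second finger went down.
PointF first = locationsOfSortedTouches[0];
PointF second = locationsOfSortedTouches[1];

float currentDeltaX = Math.Abs(second.X - first.X);
float currentDeltaY = Math.Abs(second.Y - first.Y);

// initialDeltaX / initialDeltaY are hypothetical fields set in PointerPressed.
if (initialDeltaX > 0 && initialDeltaY > 0)
{
    scaleTransform.ScaleX = currentDeltaX / initialDeltaX;
    scaleTransform.ScaleY = currentDeltaY / initialDeltaY;
}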