What does TouchDevice.Capture() actually do in WPF? - c#

I can't seem to find any real documentation on what TouchDevice.Capture() really does. When and / or how should I use it? Or where can I read more about it?

It captures touch input for the specified IInputElement, in the same way as mouse input is captured (see the Remarks in MouseDevice.Capture).
When touch input is captured, the element continues to receive touch events even if the touch position lies outside the hit test area of the element.
You can try the example code in the TouchDevice documentation with and without capture and observe the difference in behaviour.
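For illustration, a typical pattern looks like this (a minimal sketch; the handler names and wiring are assumed, not from the documentation):
// Capture the touch on TouchDown so the element keeps receiving TouchMove and
// TouchUp even when the contact moves outside its bounds; release on TouchUp.
private void Element_TouchDown(object sender, TouchEventArgs e)
{
    var element = (UIElement)sender;
    e.TouchDevice.Capture(element); // same effect as element.CaptureTouch(e.TouchDevice)
    e.Handled = true;
}

private void Element_TouchUp(object sender, TouchEventArgs e)
{
    ((UIElement)sender).ReleaseTouchCapture(e.TouchDevice);
    e.Handled = true;
}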

Related

Unexpected output from checking if mouse within control

I am implementing a custom drag-and-drop interface with WinForms Buttons, and after reviewing several solutions on how to obtain the mouse position and check it against a control's bounds, I have not been able to get it to work.
I have tried:
button.ClientRectangle.Contains(PointToClient(Cursor.Position))
and
button.ClientRectangle.Contains(PointToClient(Control.MousePosition))
Both of these have failed to work. Checking whether the mouse is within a control's bounds seems like a simple operation, but I really am stumped.
My guesses as to the cause of the unexpected values are:
- the cursor position may be measured from the wrong corner of the cursor image
- the method/function does not work on Buttons for some reason
You are using the wrong object reference, calculating the mouse position relative to the form instead of the button. And you are writing it in a way that makes this very hard to debug. Fix:
var pos = button.PointToClient(Cursor.Position);
System.Diagnostics.Debug.WriteLine(pos); // Now it is easy
if (button.ClientRectangle.Contains(pos)) {
    // etc...
}
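In a custom drag-and-drop flow, this check would typically run in a MouseMove handler. A sketch with assumed names (Form1, button):
// Hypothetical MouseMove handler on the form: highlight the button while the
// cursor is over it during the custom drag.
private void Form1_MouseMove(object sender, MouseEventArgs e)
{
    var pos = button.PointToClient(Cursor.Position);
    bool over = button.ClientRectangle.Contains(pos);
    button.BackColor = over ? Color.LightBlue : SystemColors.Control;
}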

MonoGame - Working With Specific Gestures for Different Purposes

Part of my particular dilemma is that I would like to be able to get the initial position of a drag gesture. Depending on the initial position of that drag gesture, the application would either 1) pan the view or 2) display a menu, but it should not perform both at the same time (which is where part of the struggle lies).
For example, if the user initiated a drag from the very left side of their screen and dragged inwards, a menu would pop in instead of the view panning.
I would also like to be able to execute a double tap gesture without also activating a tap gesture, if that's at all possible. I've tried working with boolean flags - for example,
// ...
if (gesture.GestureType == GestureType.DoubleTap)
{
    isDoubleTap = true; // set the flag
}
// ...
public static Vector2 Tap
{
    get
    {
        if (!isDoubleTap)
            return gesture.Position;
        // ...
But that doesn't work.
I've tried using TouchCollection - if anyone would like me to elaborate on my issues with that I can, but for now I'll say what I tried hasn't worked. It's entirely possible I may have just goofed as I am a novice when it comes to working with touch input.
I've been working on this for a few days and have done as much searching as I can, and nothing I've found has alleviated my issue - if I happened to have missed something, I apologize.
Thank you for your time!
Concerning start position of a drag:
There is a gesture for a drag ending (GestureType.DragComplete), so if you receive a drag and it's the first one since the last drag ended, that's the initial position.
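As a rough sketch (illustrative names, not from the original answer), that bookkeeping looks something like this, called from Update():
// Remember the first FreeDrag since the last DragComplete as the drag's
// starting position. Assumes TouchPanel.EnabledGestures includes
// GestureType.FreeDrag and GestureType.DragComplete.
bool dragInProgress = false;
Vector2 dragStart = Vector2.Zero;

void HandleDrag()
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.ReadGesture();
        switch (gesture.GestureType)
        {
            case GestureType.FreeDrag:
                if (!dragInProgress)
                {
                    dragInProgress = true;
                    dragStart = gesture.Position; // the initial position you want
                }
                // e.g. if dragStart.X is near the left edge, slide the menu in;
                // otherwise pan the view
                break;
            case GestureType.DragComplete:
                dragInProgress = false;
                break;
        }
    }
}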
Concerning tap/doubletap:
MonoGame works the same way as XNA as documented here:
The user tapped the screen twice in quick succession. This
always is preceded by a Tap gesture.
This sounds like an input-bindings design problem more than a technical question, IMO. Also consider what you can move to happen on press or release instead of relying only on gestures.
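Since a DoubleTap is always preceded by a Tap, one common workaround (a sketch with assumed names and timing, not from this answer) is to buffer the Tap briefly and cancel it when a DoubleTap follows:
// Defer acting on a Tap for a short window; if a DoubleTap arrives within it,
// swallow the pending Tap. Assumes TouchPanel.EnabledGestures includes
// GestureType.Tap and GestureType.DoubleTap.
const double TapDelaySeconds = 0.3; // assumed window, tune as needed
double tapTimer = -1;               // -1 means no tap is pending
Vector2 pendingTap;

void HandleTaps(GameTime gameTime)
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample g = TouchPanel.ReadGesture();
        if (g.GestureType == GestureType.Tap)
        {
            pendingTap = g.Position;
            tapTimer = 0; // start waiting for a possible DoubleTap
        }
        else if (g.GestureType == GestureType.DoubleTap)
        {
            tapTimer = -1;           // cancel the pending single tap
            OnDoubleTap(g.Position); // hypothetical handler
        }
    }

    if (tapTimer >= 0)
    {
        tapTimer += gameTime.ElapsedGameTime.TotalSeconds;
        if (tapTimer >= TapDelaySeconds)
        {
            tapTimer = -1;
            OnTap(pendingTap); // hypothetical handler; no DoubleTap arrived
        }
    }
}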

Pattern tracing app for Windows store

I'm currently working on an educational Windows store app in which the user will be able to trace over letters (e.g., A, B, C, but not limited to English) to learn the basics of writing.
How can I detect input and then compare it to an image mask of a letter using C# and XAML?
To do this you will need some way to rasterize the text that you want the user to trace over. Then, in order to provide feedback on whether they traced it correctly, you'll need to continuously listen for the draw event and compare the input to what they should be drawing.
Basically, if a user draws a certain path or set of paths on a canvas, you'll want to be able to provide instant feedback on whether they've gotten it right yet. To give you some direction for this, I recommend you read this answer on SO, which roughly describes how to capture input and draw it on a canvas.
From there you should be thinking in terms of matching the user's input to an image of the letter they're supposed to be drawing. This requires some amount of image matching. To get you started, I recommend reading through all the answers to this post on SO.
Since you seem to be lacking direction in general, here's an idea of how your program could be structured:
1. Load the current letter to be drawn, and perform the appropriate calculations to pre-determine as much as possible for comparing to the input. Based on the second link above, this means you should call GetPixel for the letter to be drawn before the user is allowed to start tracing it (also note that you may want to downscale the image for better performance). You will also need to decide what your match threshold will be. Try starting with something like 70%.
2. Capture the user's input on a canvas, as explained in the first link. You'll probably want to adjust the brush width, but that post is a great start.
3. In the MouseMove event, you will also want to occasionally check how well the input matches the letter they're supposed to be drawing (a sketch of this comparison follows the list).
4. Once the user's input is within your match threshold, move on to the next letter. You may want to consider showing them how well they did, based on the match percentage.
5. Experiment with values such as the brush width, image resolution, and how often you compare the input to the letter. Also try to do as much of the image processing as you can while displaying a 'loading' prompt when moving to the next letter to be drawn.
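To make step 3's comparison concrete, here is a rough sketch (assumptions: both images have been rasterized to BGRA pixel buffers of the same downscaled size; the names are illustrative, not from the linked posts):
// Fraction of the letter's pixels that the user's strokes have covered so far.
double ComputeCoverage(byte[] letterPixels, byte[] userPixels)
{
    int letterInk = 0, overlap = 0;
    for (int i = 0; i < letterPixels.Length; i += 4) // 4 bytes per BGRA pixel
    {
        bool letterOn = letterPixels[i + 3] > 0; // alpha marks the letter mask
        bool userOn = userPixels[i + 3] > 0;     // alpha marks the user's strokes
        if (letterOn) letterInk++;
        if (letterOn && userOn) overlap++;
    }
    return letterInk == 0 ? 0.0 : (double)overlap / letterInk;
}
With a 70% threshold, you would move to the next letter once ComputeCoverage returns at least 0.7.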

PolygonGeometry with offset?Or is there something off with the Mouse events?

I came across the weirdest situation while code-generating polygons and attaching them to a Virtual Earth 3D Globe Control. I have enabled mouse controls, as discussed on this thread in codeplex: http://bingmapswpf.codeplex.com/discussions/279548
Context: A map with several PolygonGeometries, some of them intentionally overlaid (using z-index).
Actions: Click on a given PolygonGeometry, or trigger a MouseEnter/Leave event over it.
Result: The object isn't detected by the click or by the MouseEnter/Leave event; however, if I apply some "offset" to my clicks/hovers over the PolygonGeometry, the events pick up the object on an "empty space" in the map, a couple of pixels away from the actual object.
Additional info: This behaviour goes away completely if I zoom in on the object (almost to full-screen size), and gets worse as I zoom out. In high-level views of the map/objects it's impossible to click or hover over any objects, or at least they don't get picked up by the events.
So, right now my theory is that for some reason, at lower zoom levels, the map "misplaces" the PolygonGeometries (although they seem to be drawn properly). With the object misplaced, no shapeId/layerId is detected and thus no action is triggered for the event.
So, I would like to know if someone has already come across this situation and how it was fixed, and/or whether I'm doing something wrong in my development (see the mouse-events adventure in the post mentioned at the beginning of this discussion), because this is an annoying problem that just doesn't go away... Any suggestion, tip or theory is most welcome!
Thanks in advance for reading and helping. Sorry for any bad English,
-RG

Control.PointToScreen gives different results - why?

I have some Label controls sitting on Panel controls on a Form. I want to get the labels' positions relative to the form's origin so that at run time I can hide the panel and the labels and draw some other text in their place directly onto the form.
Basically, I'm doing the following calculation: Get the absolute screen position of a label with Control.PointToScreen() and convert it back to a relative position with Control.PointToClient(), so either:
Dim newloc As Point = Me.PointToClient(ctl.PointToScreen(Point.Empty))
or
Dim newloc As Point = Me.PointToClient(ctl.Parent.PointToScreen(ctl.Location))
I have found that the two methods sometimes give me different results - putting my new point out of the visible area with negative values! - but haven't been able to determine why. I would have thought they should be identical (and they are, most of the time).
Reading the docs didn't help the first time around, but perhaps I skipped over something... Anyway, I'd be thankful for any help before I start tearing my hair out.
Or, on the other hand, is there a better way to do this?
Edit: Sample results
So, here's a real example.
Label1 at {X=4,Y=6} on Panel1; Label2 at {X=163,Y=6} on the same parent, Panel1. Obviously I'm expecting different X-positions, but Y should be identical for both.
When I run the project both ctl.PointToScreen(Point.Empty) and ctl.Parent.PointToScreen(ctl.Location) give me the same screen location for Label1 at {X=959,Y=119} (the absolute values here can vary, of course, depending on the position of the form itself) and therefore the correct location relative to the form when Me.PointToClient is applied (at {X=5,Y=32}).
The very next lines do the same calculations for Label2 (remember, same Parent, same Y-value within the parent (6)), but the results are totally off: ctl.Parent.PointToScreen() gives me {X=1114,Y=63}. X is almost correct (959-4+163=1118), but Y is nowhere near the 119 I got for Label1. And then ctl.PointToScreen() gives me {X=166,Y=29} - translated back to Form-Coordinates (Me.PointToClient) {X=-784,Y=-2}.
These numbers are calculated and printed to the debug window directly after each other, without moving anything around... Madness.
OK, this is getting rather messy, but I still hope someone has a simple explanation. Thanks!
OK, I found the solution.
I happened to be calling Control.PointToScreen before the control was created: Control.IsHandleCreated = False.
If I ensure that the control is created first (Control.CreateControl) both methods work equally well.
The reason I had differing results on subsequent calls is that the first call to Control.PointToScreen also causes the control to be created (and therefore its parent and any other controls sited on the parent), meaning the second succeeds.
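In code, the fix looks roughly like this (a sketch in C# for consistency with the rest of the page; the original snippets are VB):
// Make sure the handle exists before converting coordinates.
if (!ctl.IsHandleCreated)
{
    ctl.CreateControl(); // creates the handle for the control and its children
}
Point newloc = this.PointToClient(ctl.PointToScreen(Point.Empty));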
Well, I'm sure glad to be done with this :-)
I think it's the sleep. :) Instead of:
ctl.PointToScreen(Point.Empty)
ctl.Parent.PointToScreen(ctl.Location)
try:
ctl.PointToScreen(Point.Empty)
ctl.PointToScreen(ctl.Location) ' note: no .Parent!
and you'll see the difference in the x/y coordinates.
Also, try using Control.TopLevelControl or Control.FindForm() to get the outermost Form when doing your PointToScreen math.