In GLFW, glfwSetCursorPos lets you change the mouse position, but I don't know how to do the same in OpenTK.
When I searched, I found an old article telling me to use OpenTK.Input.Mouse.SetPosition, but that method no longer exists.
Is there any way to change the mouse cursor position in OpenTK?
I also happened to see GLFW.SetCursorPos in OpenTK. Can this function be used from OpenTK? It takes pointers, and I don't know how to use them, because I haven't worked with pointers in OpenTK before.
public unsafe static void SetCursorPos(Window* window, double xPos, double yPos)
{
    GLFWNative.glfwSetCursorPos(window, xPos, yPos);
}
That is what I found when I went to where the function is defined.
I'd appreciate a little help.
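For reference, here is a minimal sketch of how I think the call could be made in OpenTK 4.x. It assumes that NativeWindow exposes the underlying GLFW window as a WindowPtr property (it appears to in OpenTK 4.x, but verify against your version) and that AllowUnsafeBlocks is enabled in the project file:

using OpenTK.Windowing.Desktop;
using OpenTK.Windowing.GraphicsLibraryFramework;

public class MyWindow : GameWindow
{
    public MyWindow(GameWindowSettings gws, NativeWindowSettings nws)
        : base(gws, nws) { }

    // Moves the OS cursor to the center of the client area.
    public void CenterCursor()
    {
        unsafe
        {
            // WindowPtr is the Window* that GLFW.SetCursorPos expects;
            // coordinates are in client-area pixels, as with glfwSetCursorPos.
            GLFW.SetCursorPos(WindowPtr, ClientSize.X / 2.0, ClientSize.Y / 2.0);
        }
    }
}

Recent OpenTK 4.x versions also appear to expose a settable MousePosition property on NativeWindow that wraps this same GLFW call, which avoids the unsafe block entirely.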
[please see update at the end]
I'm an old Stack Overflow user (as are most developers on, you know, Earth), but this is my first question here.
I'm trying to use an "air mouse" for gaming (pointing it at the screen), but since the mouse sensor is a gyroscope, there are some problems with off-screen movement that I'd like to try to fix in software.
With this gyro mouse, when the user points their arm outside the screen, the cursor stops at the screen edge, which is no problem. However, when they move the arm back, no matter how far it went past the screen, the cursor immediately moves on-screen. This causes a giant difference between the air mouse's real position and the cursor.
This could be fixed with a simple control over the number of pixels, and the direction, travelled off-screen, together with some event handling. If I could sum the number of "off-screen" pixels travelled in -X, +X, -Y and +Y, it would be possible to prevent or cancel the mouse-move event (or reset the cursor to its previous position at the edge of the screen) until the counter tells me that the physical mouse is pointing back at the screen. Only then would I allow the cursor to move freely.
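A one-axis sketch of that counter might look like this (all names hypothetical; the same gate would run independently for X and Y, fed by raw per-event deltas):

// Tracks how far the physical pointer has travelled past a screen edge
// on one axis, and withholds cursor movement until it has travelled back.
class AxisGate
{
    double offscreen; // < 0: beyond the low edge, > 0: beyond the high edge

    // delta: raw physical movement on this axis (e.g. from WM_INPUT).
    // atLowEdge/atHighEdge: is the on-screen cursor pinned at that border?
    // Returns the movement that should actually be applied to the cursor.
    public double Filter(double delta, bool atLowEdge, bool atHighEdge)
    {
        bool pushingOut = (atLowEdge && delta < 0) || (atHighEdge && delta > 0);

        if (offscreen != 0 || pushingOut)
        {
            offscreen += delta;

            // Still outside: swallow the movement completely.
            if ((offscreen < 0 && atLowEdge) || (offscreen > 0 && atHighEdge))
                return 0;

            // Crossed back over the edge: apply only the on-screen surplus.
            double surplus = offscreen;
            offscreen = 0;
            return surplus;
        }

        return delta; // normal on-screen movement
    }
}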
Maybe this isn't that useful, but it's an interesting problem, and seeing this work would be fun as hell!
Currently, based on this great project, I can only tell when the mouse is off-screen; I cannot tell how far or in which direction it is moving, which is what I need to implement this properly. That kind of information seems too low-level for my current Windows knowledge.
So, to be clear: how can I, in C# (other languages accepted, but I'd have to learn a lot ;), get any kind of "delta position" or direction-of-movement information when the cursor is at the edge of the screen?
With all due respect, I'm not interested in using different kinds of controllers, nor in "you shouldn't do this" answers. I have this problem to solve, with these elements, and it would be great to make this work!
UPDATE:
Ok, here's my progress up to now.
In order to get raw mouse data and learn about mouse movement off-screen, I had to register my application, using some Windows API functions, to receive WM_INPUT messages.
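For reference, the registration itself can look roughly like this (a sketch, assuming a WinForms Form.Handle as the target window; note the RIDEV_INPUTSINK flag, which asks Windows to deliver WM_INPUT even when the window is not in the foreground):

using System;
using System.Runtime.InteropServices;

static class RawInputRegistration
{
    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICE
    {
        public ushort UsagePage;
        public ushort Usage;
        public uint Flags;
        public IntPtr Target;
    }

    const ushort HID_USAGE_PAGE_GENERIC = 0x01;
    const ushort HID_USAGE_GENERIC_MOUSE = 0x02;
    const uint RIDEV_INPUTSINK = 0x00000100;

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(
        RAWINPUTDEVICE[] pRawInputDevices, uint uiNumDevices, uint cbSize);

    // Registers 'windowHandle' to receive WM_INPUT for all mice.
    public static void Register(IntPtr windowHandle)
    {
        var devices = new[]
        {
            new RAWINPUTDEVICE
            {
                UsagePage = HID_USAGE_PAGE_GENERIC,
                Usage = HID_USAGE_GENERIC_MOUSE,
                Flags = RIDEV_INPUTSINK, // deliver input even without focus
                Target = windowHandle,
            }
        };

        if (!RegisterRawInputDevices(devices, (uint)devices.Length,
                (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE))))
            throw new InvalidOperationException("RegisterRawInputDevices failed.");
    }
}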
Based on some very old code I found, I was able to get raw mouse data and implement exactly what I wanted. Except for the fact that this code is based on a WndProc callback, so it only works when my application has the focus. And I need it to also work when the focus is elsewhere; after all, I'm trying to improve the pointing provided by a gyro mouse for third-party games.
It seems that I should use a hook (a good example here), but I have no idea how to hook input messages. I tried to merge the code from the two links above, but the first one needs the message.LParam that is passed to WndProc, which seems to be unavailable when merely hooking mouse events.
Now I'm way out of my league to make some real progress. Any ideas?
One of the simplest solutions for getting the cursor position, and thus detecting its movement regardless of where the cursor is, is to use the Win32 API from user32.dll.
Here is a simple example that gets the cursor position every 10 ms using a timer in a C# Windows Forms application and displays it as the window title.
using System;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Drawing;

public partial class Form1 : Form
{
    Timer timer = new Timer();

    public Form1()
    {
        InitializeComponent();
        timer.Interval = 10; // poll every 10 ms
        timer.Tick += Timer_Tick;
        timer.Start();
    }

    private void Timer_Tick(object sender, EventArgs e)
    {
        // do here whatever you want to do
        // just for testing...
        GetCursorPos(out Point lpPoint);
        this.Text = lpPoint.X + ", " + lpPoint.Y;
    }

    // Win32: returns the cursor position in screen coordinates.
    [DllImport("user32.dll")]
    public static extern bool GetCursorPos(out Point p);
}
I am trying to make a simple app that lets you paint on a canvas with your right hand.
Fortunately I know how to implement the painting function, but I have a little problem with something else.
As you know, the Kinect SDK provides a control named KinectRegion, which has a KinectCursor representing the user's hand.
The problem is that when I try to paint something, my painting path starts at a different position than my KinectCursor, and I don't know why.
I don't have this problem when I use my own right-hand mapping function, but in that case I can't use other things like KinectCircleButton, because I don't have a KinectRegion.
Does anyone know how to get, or map, the KinectCursor position (x, y) from the KinectRegion?
Visualisation of my problem: http://i58.tinypic.com/iqgemt.png
I'm working on a similar project on painting with Kinect. The position you need is actually on the HandPointer: you can get the position of your hand relative to a UIElement with the method GetPosition(UIElement element), which takes that element as a parameter.
An example of using the method looks like this:
public partial class MainWindow
{
    public Point position;

    public MainWindow()
    {
        InitializeComponent();
        // Listen for hand-pointer movement anywhere inside this window.
        KinectRegion.AddHandPointerMoveHandler(this, OnHandPointerMove);
    }

    private void OnHandPointerMove(object sender, HandPointerEventArgs e)
    {
        // Hand position relative to the canvas you paint on.
        position = e.HandPointer.GetPosition(myCanvas);
    }
}
Now, the thing is that your hand position and the Kinect hand position have a gap in both X and Y, and you need to reduce those gaps to zero so your hand exactly maps to the painting point. For example, output the x, y coordinates of both the hand point and the painting-brush point at the same moment, and take the difference in X and the difference in Y. Those differences should then be subtracted from x and y respectively, so your hand and the paint-brush point will map correctly.
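As a sketch of that calibration (names hypothetical): sample both points once at the same moment, store the gap, and subtract it from every later hand position before painting:

using System.Windows;

public class GapCalibration
{
    Vector gap;

    // Call once, with the hand point and brush point sampled simultaneously.
    public void Calibrate(Point handPoint, Point brushPoint)
    {
        gap = handPoint - brushPoint;
    }

    // Shifts a later hand position so it lands exactly on the brush point.
    public Point ToBrushPoint(Point handPoint)
    {
        return handPoint - gap;
    }
}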
I'm working on a miniature golf game in XNA. I originally had everything in Game.cs (main), but I now want it to be more object-oriented, so I made separate classes for most of my stuff.
When I had everything in Game.cs it worked fine; now it doesn't.
What is happening is this:
When my cursor is at the top-left corner of the game window, it reads something like X=200, Y=50.
It's supposed to be X=0, Y=0.
Even when I look for the 0, 0 position, it's way outside the game window.
Does anyone know what could be causing this?
How is that possible? Your cursor position is the mouse position; simply draw the cursor there.
Unless you are talking about the Windows cursor. In that case, yes, the input data in XNA will not match the Windows cursor movement: Windows probably applies some modifiers for acceleration, etc. You have to interpret the input data yourself. In other words, draw your own cursor.
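A minimal sketch of that last suggestion, assuming a SpriteBatch and a cursorTexture already loaded in LoadContent:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // Read the raw mouse state and draw our own cursor sprite at it.
    // Mouse.GetState() reports coordinates relative to the game window.
    MouseState mouse = Mouse.GetState();

    spriteBatch.Begin();
    spriteBatch.Draw(cursorTexture, new Vector2(mouse.X, mouse.Y), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}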
I have a Drag() method on the form's MouseDown event. I also have a Click event on the form. The problem is that if I click on the form, the MouseDown event gets triggered and the Click event never gets a chance to fire.
What is the best way to solve this? I was thinking of counting pixels to tell whether the form is actually being dragged, but there has to be a better way. Any suggestions?
I was thinking of counting pixels to tell whether the form is actually being dragged, but there has to be a better way.
Nope, that's exactly how you have to do it.
This isn't just a software limitation; it's very much a practical one as well. If you think through the problem from a user's perspective, you'll immediately see the problem as well as the solution. Ask yourself, what is the difference between a click and a drag?
Both of them start with the mouse button going down over the object, but one of them ends with the mouse button going back up over the object in the same position and the other one ends with the mouse button going back up in a completely different position.
Since time machines haven't been perfected yet, you have no way of knowing this in advance.
So yes, you need to maintain some kind of a distance threshold, and if the pointer moves outside of that distance threshold while it is down over the object, then you consider it a drag. Otherwise, you consider it a click.
That distance threshold should not be 0. The user should not be required to hold the mouse completely still in order to initiate a click. A lot of users are sub-par mousers. They are very likely to twitch slightly when trying to click. If the threshold is 0, they'll end up doing a lot of inadvertent dragging when they try to click.
Of course, you don't actually have to worry about any of this or compute the drag threshold yourself. Instead, use the Windows default values, obtainable by calling the GetSystemMetrics function and specifying either SM_CXDRAG or SM_CYDRAG. (WinForms exposes the same values as SystemInformation.DragSize, but it's just as easy to P/Invoke them yourself.)
const int SM_CXDRAG = 68;
const int SM_CYDRAG = 69;

[DllImport("user32.dll")]
static extern int GetSystemMetrics(int index);

// Width and height of the rectangle, centered on the mouse-down point,
// that the pointer must leave before the gesture counts as a drag.
Point GetDragThreshold()
{
    return new Point(GetSystemMetrics(SM_CXDRAG), GetSystemMetrics(SM_CYDRAG));
}
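A sketch of how the threshold is typically used (handler names hypothetical; this version reads the same values through SystemInformation.DragSize): record the drag rectangle on mouse-down, and only treat the gesture as a drag once the pointer leaves it.

Rectangle dragBox = Rectangle.Empty;

void Form_MouseDown(object sender, MouseEventArgs e)
{
    // Build the hysteresis rectangle around the press point.
    Size threshold = SystemInformation.DragSize;
    dragBox = new Rectangle(
        new Point(e.X - threshold.Width / 2, e.Y - threshold.Height / 2),
        threshold);
}

void Form_MouseMove(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left
        && dragBox != Rectangle.Empty
        && !dragBox.Contains(e.Location))
    {
        dragBox = Rectangle.Empty; // it's a drag, not a click
        BeginDrag();               // hypothetical: start the drag here
    }
}

void Form_MouseUp(object sender, MouseEventArgs e)
{
    dragBox = Rectangle.Empty; // pointer never left the box: it was a click
}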
In the field of UX/UI, this sort of thing is called hysteresis or debouncing, by analogy to the use of these terms in physics and electronics.
I found this solution, although it is for double-click and mouse-down events:
void pictureBox_MouseDown(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left && e.Clicks == 1)
    {
        PictureBox pb = (PictureBox)sender;
        DoDragDrop((ImageData)pb.Tag, DragDropEffects.Copy);
    }
}
source: http://code.rawlinson.us/2007/04/c-dragdrop-and-doubleclick.html
Unfortunately, at the point in time when the button is pressed, you don't yet know whether the desired action is just a click or a drag-and-drop. You find that out later.
For a click, the determinant is "no movement" and "button up".
For a drag, the determinant is "movement" and "button up".
Hence, to disambiguate these interactions, you have to track not only the buttons, but also the movement. You do not need to track the overall movement; only the movement between button-down and button-up is interesting.
Those events are therefore a good place to start and stop the Mouse.Capture mechanism (to dynamically present drag adorners and drop-location hints) or, in simpler form, to store the origin and target of the movement vector and check whether the distance is greater than some D. Even if movement occurred, there should be some safe minimal distance within which the drag is cancelled: the mouse is "jaggy" sometimes, and people really wouldn't like your app to start dragging when they double-click at the end of a fast pointer movement. :)
I have a Rectangle that can be dragged within some boundaries. This works perfectly with the mouse.
As soon as I set IsManipulationEnabled to true, the mouse events don't work anymore.
However, I need it set to true in order to get touch events on the rectangle.
I'm trying to apply all changes in the ManipulationDelta event, as in the following function.
Scaling already works pretty well, but moving the object by dragging it with a finger is very choppy, and sometimes the Rectangle jumps back and forth.
private void UserControl_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    // Scaling works already pretty good
    RangeBar.Width *= e.DeltaManipulation.Scale.X;

    // Moving the element is very choppy, what am I doing wrong here?
    this.startX = this.startX + e.DeltaManipulation.Translation.X;
    RangeBar.SetValue(Canvas.LeftProperty, startX);
}
I'd try using the CumulativeManipulation instead.
Whenever I need to move a UI element by dragging, I don't reuse one variable, modify it by the delta, and then use that same variable to set the position; that almost always gives me stability issues, regardless of platform. Instead, store a variable when dragging starts and add the cumulative delta to it whenever you need to update the position. So, something more like this:
Point origin;

void MouseDown(Point location)
{
    // Remember where the drag started.
    origin = location;
}

void MouseDrag(Vector cumulativeOffset)
{
    // Position relative to the original point, never to the previous frame.
    SetControlLocation(origin + cumulativeOffset);
}
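Applied to the RangeBar code from the question, that pattern might look roughly like this (a sketch, assuming a ManipulationStarted handler is also wired up):

double originX;

void UserControl_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    // Capture the element's position once, when the gesture begins.
    originX = Canvas.GetLeft(RangeBar);
}

void UserControl_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    // Offset from the captured origin by the total translation so far.
    Canvas.SetLeft(RangeBar, originX + e.CumulativeManipulation.Translation.X);
}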
Also, what's the source of the ManipulationEvent? You'll definitely want to make sure it's not the rectangle itself, or that will certainly cause the issues you're seeing.