I need to make a small system tray app which monitors the cursor position system-wide and displays or hides the onscreen keyboard depending on the cursor handle ID. If the cursor is in a textbox (the cursor is the IBeam) in IE, for example, the keyboard pops up.
I have code for the system tray app (formless app) but cannot find a way of making it monitor the system. Any help with a function to monitor the system for the cursor position would be welcome. Thanks.
To monitor the system cursor position, you can poll it from a background thread:
private void Pos()
{
    for (; ; )
    {
        Thread.Sleep(10);
        Point position = Cursor.Position;
        // You can use these to pass to your system tray or wherever you need them.
        somePublicXVar = position.X;
        somePublicYVar = position.Y;
    }
}

public void PointPosition()
{
    Thread pointThread = new Thread(new ThreadStart(Pos));
    pointThread.IsBackground = true; // so the polling thread does not keep the app alive on exit
    pointThread.Start();
}
To be event-driven you'll need to use SetWindowsHookEx. You cannot do it directly through .NET, but must inject a DLL. Here is an MSDN article on making a mouse hook. This is done using System.Runtime.InteropServices to import user32.dll. The MSDN article gives step-by-step instructions on calling SetWindowsHookEx, CallNextHookEx, and UnhookWindowsHookEx from C#. CodeProject also has an article on making system-wide hooks in .NET.
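For reference, below is a minimal sketch of the P/Invoke side, assuming the low-level WH_MOUSE_LL hook, which is the one hook type that can be installed from managed code without a separate DLL (its callback runs in the installing process). The class and variable names are just illustrative, and a message loop (e.g. Application.Run in the tray app) must be running for the hook to fire.
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class MouseHook
{
    private const int WH_MOUSE_LL = 14;
    private const int WM_MOUSEMOVE = 0x0200;

    private delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    // Keep a reference to the delegate so the garbage collector does not release it.
    private static readonly HookProc _proc = Callback;
    private static IntPtr _hook = IntPtr.Zero;

    public static void Start()
    {
        // GetModuleHandle(null) returns the handle of the current executable.
        _hook = SetWindowsHookEx(WH_MOUSE_LL, _proc, GetModuleHandle(null), 0);
    }

    public static void Stop()
    {
        if (_hook != IntPtr.Zero)
            UnhookWindowsHookEx(_hook);
    }

    private static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_MOUSEMOVE)
        {
            Point position = Cursor.Position;
            // React to the new position here, e.g. check what is under the cursor.
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }
}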
I am building a game for Windows PCs and I need to change the cursor icon when the user is over clickable UI elements. I have found the command Cursor.SetCursor(texture2d, vector2), but it has some lag.
When the mouse is over the UI elements the cursor changes, but with a delay that is really annoying (this is what other users said about it, anyway).
After some reading, I learned that Unity basically just changes the cursor icon at the software level, i.e. it hides the cursor, displays my image, and makes it follow the cursor position.
My question is: how can I change the icon at the hardware level, again, in Windows builds only?
When I searched for "Changing mouse cursor in C#", I found the Windows.Forms option (which doesn't work in Unity) and some C++ code, but it wasn't complete (only method names), and I don't know how to run it from C#…
SetCursor does not work very well in every Windows app, but it is the right way to do it.
Cursor.SetCursor(cursorTexture, hotSpot, cursorMode);
Another way is to fake it a little by hiding the mouse cursor and drawing a GUI cursor for your desired case. You can add more conditions for each event you want to customize.
var OnMouseEnterCursor : Texture2D;
var cursorSizeX : int = 32; // your cursor width
var cursorSizeY : int = 32; // your cursor height
var MouseEnterCond = false;

function OnMouseEnter()
{
    MouseEnterCond = true;
    Screen.showCursor = false; // replaced by Cursor.visible in later Unity versions
}

function OnMouseExit()
{
    MouseEnterCond = false;
    Screen.showCursor = true;
}

function OnGUI()
{
    if (MouseEnterCond)
    {
        // Draw the texture with its top-left corner at the mouse position;
        // subtract cursorSizeX/2 and cursorSizeY/2 if you want it centered instead.
        GUI.DrawTexture(Rect(Input.mousePosition.x, Screen.height - Input.mousePosition.y, cursorSizeX, cursorSizeY), OnMouseEnterCursor);
    }
}
If there is a way to enforce hardware cursors in Unity the world has not found it yet.
Unity's own documentation explains it pretty well:
A hardware cursor is used where the platform supports it, via the function you have already found (remember to set the cursor mode to Auto), but it will automatically, and uncontrollably, fall back to a software cursor on unsupported platforms and resolutions.
An important thing to note is that there is a special field (which the reference only peripherally mentioned) in the project settings/player settings menu called the "Default Cursor".
This is the only supported hardware cursor on e.g. windows store apps.
And remember to set your textures to the "Cursor" type in their import settings.
Finally, keep in mind that Windows only supports the 32x32 px size. This may force Unity's hand when selecting the render type.
The default cursor setting in the player settings worked in the editor, but the cursor did not show up in a build, and it also limited the texture size to 32x32. I resolved this issue by using the Cursor.SetCursor method in a script to change the cursor.
[SerializeField] Texture2D cursor;

void Start()
{
    Cursor.SetCursor(cursor, Vector2.zero, CursorMode.ForceSoftware);
}
I need the application idle time in my software. For that reason, I made a helper class ApplicationIdleHelper which implements the IMessageFilter interface.
This works fine, and if my application is idle for some time, I show a DevExpress WaitForm using this line of code:
SplashScreenManager.ShowForm(typeof(WaitForm));
In this WaitForm I show the user some information about what's being done in the background. If the user moves the mouse or presses some keys I close the WaitForm like this:
SplashScreenManager.CloseForm();
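For context, the filter is roughly shaped like the simplified sketch below (not my exact code; the message constants are the standard Win32 values). Any input message resets a timestamp, and the idle check compares against it:
using System;
using System.Windows.Forms;

class ApplicationIdleHelper : IMessageFilter
{
    private const int WM_MOUSEMOVE   = 0x0200;
    private const int WM_LBUTTONDOWN = 0x0201;
    private const int WM_KEYDOWN     = 0x0100;

    public DateTime LastInput = DateTime.Now;

    public bool PreFilterMessage(ref Message m)
    {
        if (m.Msg == WM_MOUSEMOVE || m.Msg == WM_LBUTTONDOWN || m.Msg == WM_KEYDOWN)
        {
            LastInput = DateTime.Now; // any user input resets the idle timer
        }
        return false; // never swallow the message
    }
}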
Here's the problem explained in steps:
Mouse cursor is on the form.
User doesn't do anything for some time -> idle time -> so I show the WaitForm.
Now I get a MouseMove message in my PreFilterMessage method. But why? The mouse didn't move and no keys were pressed. Because of that MouseMove message, my application thinks the user gave some input and automatically closes the WaitForm.
The same behavior occurs when I close the WaitForm.
Here's a sample application, so you should be able to reproduce the behavior:
https://drive.google.com/file/d/0BxabrokJG-OWV3FLV2hNNVk5NjQ/view?usp=sharing
The DevExpress documentation says:
Wait Forms and Splash Screens are displayed by a Splash Screen Manager
in a separate thread.
Maybe this has something to do with that behavior?
I hope somebody can explain to me why I get a MouseMove message in my PreFilterMessage function after showing or closing the WaitForm.
Thank you in advance.
The most likely cause of this is that the mouse is sensitive to environmental noise. It's entirely possible for a mouse to experience a little bit of jitter that causes it to report very small movements, which ultimately work out to zero change in position. Alternatively, and this isn't verified, Windows or some other software on the system could be generating extra mouse move messages to make sure that everyone stays in sync with the current mouse position.
Either way, the most stable solution is to decide on an amount of motion you consider "real" (see threshold below), and then:
Capture the mouse position when you're going to sleep.
Every time you get a WM_MOUSEMOVE message (or a MouseMove event) calculate the amount of that motion, as in:
Point cached;  // from when you went to sleep
Point current; // determined from the window message/event

double move = Math.Sqrt(Math.Pow(cached.X - current.X, 2) +
                        Math.Pow(cached.Y - current.Y, 2));

if (move > threshold)
{
    // Wake up
}
else
{
    // Ignore and optionally update the cached position
    // in case the mouse is slowly drifting
}
(Note that you don't necessarily need to calculate the real distance that way; you could just use |ΔX| + |ΔY|.)
Whenever you're dealing with hardware, you need to be ready for it to send you updates that you aren't expecting. Pressing a button for example, can cause the physical contact to bounce, which causes multiple press/break signals at the electrical level. Most of the time, the hardware is designed to filter the noise, but sometimes this seeps through.
I am stumped as to how I would go about storing the screen/cursor position and restoring it to the exact pixel after the rest of the function has finished.
I am able to run my code when I click a button, but I want to be able to return the cursor to the position it was in when the button was clicked, so I can loop it from that exact location.
Any help would be greatly appreciated
I am writing in C# and using VS2010
It's a bad user experience to modify the user's cursor; that is why the mouse cursor position is generally exposed as read-only information.
In the case of Windows Forms (if you are able to use the System.Windows.Forms assembly) you can use the Position property of the Cursor class. It allows you to get and set the cursor position.
But it's bad practice to move the cursor from your code.
using System.Windows.Forms;

namespace MyApplication
{
    class MyClass
    {
        void Go()
        {
            var previousPosition = Cursor.Position;
            // Do something
            Cursor.Position = previousPosition;
        }
    }
}
I have the following situation: I have a special 3D program which I need to make react to multi-touch events without changing the program itself. Therefore I need a mapper program which receives the Windows 7 multi-touch events, converts them into corresponding mouse and keyboard events, and sends these emulated events to the 3D program so that it can process them.
I have already read and tried a lot, and my current approach is to place an almost transparent overlay window over the 3D program to catch multi-touch events. But this is also the problem: I cannot manage to forward the generated mouse events to the underlying 3D program in a usable way. So far I have used P/Invoke functions like mouse_event, SendMessage and so on, but none of them worked for me, since I always had to bring the 3D program to the front, send the event, and afterwards bring my mapper program to the front again. This works quite poorly.
So my question is, more or less: is there a nice working approach to do the things I mentioned above? Or at least a nice way to send mouse and keyboard events to processes in the background?
I hope someone can give me a hint or suggestion...
Here is the way I simulate mouse clicks right now:
private void OnMouseDown(object sender, MouseEventArgs e)
{
    Point position = this.PointToScreen(new Point(e.Location.X, e.Location.Y));
    // Simulate the mouse event at the given position
    this.Visible = false;
    Cursor.Position = position;
    mouse_event(Convert.ToUInt16(MouseEventFlags.LEFTDOWN), 0, 0, 0, 0);
}

private void OnMouseUp(object sender, MouseEventArgs e)
{
    Point position = this.PointToScreen(new Point(e.Location.X, e.Location.Y));
    // Simulate the mouse event at the given position (screen coordinates)
    Cursor.Position = position;
    mouse_event(Convert.ToUInt16(MouseEventFlags.LEFTUP), 0, 0, 0, 0);
    this.Visible = true;
}

// P/Invoke declaration for the legacy mouse_event function in user32.dll
[DllImport("user32.dll")]
private static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, int dwExtraInfo);
Maybe have a look at InfoStrat.VE - Bing Maps 3D for WPF and Microsoft Surface.
They made the Bing Maps control usable for touch.
Maybe that helps.
There isn't a way to direct mouse/keyboard events to a particular process in Win32. That being said, however, you might be able to get this approach to work:
Register your mapper window as a touch window to get touch events.
Add a handler for WM_NCHITTEST in your message loop, call DefWindowProc() to get the default handler, and then convert any HTCLIENT returns to HTTRANSPARENT. This will cause Windows to pass any mouse events through your window, onto the underlying window.
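A rough sketch of what the second step can look like in a Windows Forms overlay (the form and constant names are just illustrative; the values are the standard Win32 ones):
using System;
using System.Windows.Forms;

class OverlayForm : Form
{
    private const int WM_NCHITTEST  = 0x0084;
    private const int HTCLIENT      = 1;
    private const int HTTRANSPARENT = -1;

    protected override void WndProc(ref Message m)
    {
        base.WndProc(ref m); // let the default handler run first

        // Turn HTCLIENT into HTTRANSPARENT so mouse events fall through
        // to the window underneath the overlay.
        if (m.Msg == WM_NCHITTEST && m.Result == (IntPtr)HTCLIENT)
        {
            m.Result = (IntPtr)HTTRANSPARENT;
        }
    }
}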
This probably won't work if your mapper window lives in a separate process from your client program; if that's the case, you'll have to do some hacking with AttachThreadInput. I don't recommend this, incidentally, as merged thread queues are very prone to bugs.
I have a C# Windows application that I want to ensure will show up on a second monitor if the user moves it to one. I need to save the main form's size, location and window state - which I've already handled - but I also need to know which screen it was on when the user closed the application.
I'm using the Screen class to determine the size of the current screen but I can't find anything on how to determine which screen the application was running on.
Edit: Thanks for the responses, everyone! I wanted to determine which monitor the window was on so I could do proper bounds checking in case the user accidentally put the window outside the viewing area or changed the screen size such that the form wouldn't be completely visible anymore.
You can get an array of the Screens you have using this code:
Screen[] screens = Screen.AllScreens;
You can also figure out which screen you are on by running this code (where this is the Windows Form you are on):
Screen screen = Screen.FromControl(this); //this is the Form class
In short, check out the Screen class and its static helper methods; they might help you.
The MSDN link doesn't have much... I suggest experimenting with the code yourself.
If you remember the window's location and size, that will be enough. When you set the position to the previously used position, if it happened to be on the second monitor it will go back there.
For example, if you have 2 monitors, both sized 1280x1024 and you set your window's left position to be 2000px, it will appear on the second monitor (assuming the second monitor is to the right of the first.) :)
If you are worried about the second monitor not being there when the application is started the next time, you can use this method to determine if your window intersects any of the screens:
private bool isWindowVisible(Rectangle rect)
{
    foreach (Screen screen in Screen.AllScreens)
    {
        if (screen.Bounds.IntersectsWith(rect))
            return true;
    }

    return false;
}
Just pass in your window's desired location and it will tell you if it will be visible on one of the screens. Enjoy!
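A hypothetical usage on startup might look like the following, run from the form's constructor or Load handler; MainFormBounds is an assumed user setting holding the rectangle saved on close, and the saved bounds are restored only if they are still visible on some screen:
Rectangle savedBounds = Properties.Settings.Default.MainFormBounds; // assumed setting name
if (isWindowVisible(savedBounds))
{
    this.StartPosition = FormStartPosition.Manual;
    this.Bounds = savedBounds;
}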
You can get the current Screen with
var s = Screen.FromControl(this);
where this is the Form (or any control on the Form). As for how to remember that, it is a little tricky, but I would go for the index in the Screen.AllScreens array, or maybe s.DeviceName. In either case, check the setting before using it on startup, to avoid targeting a monitor that has since been disconnected.
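A minimal sketch of the DeviceName approach, assuming the name is persisted somewhere between sessions (ScreenMemory and its method names are just illustrative):
using System.Linq;
using System.Windows.Forms;

static class ScreenMemory
{
    // Call on shutdown and persist the returned name (e.g. in user settings).
    public static string Remember(Form form)
    {
        return Screen.FromControl(form).DeviceName;
    }

    // Call on startup; falls back to the primary screen if the saved monitor is gone.
    public static Screen Restore(string savedDeviceName)
    {
        return Screen.AllScreens.FirstOrDefault(s => s.DeviceName == savedDeviceName)
               ?? Screen.PrimaryScreen;
    }
}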
The location of the form will tell you which screen the form is on. I don't really understand why you'd need to know what screen it is on, because if you restore it using the location you saved it should just restore to the same location (maybe you can expand as to why).
Otherwise you can do something like this:
Screen[] scr = Screen.AllScreens;
bool onScreen = scr[i].Bounds.IntersectsWith(form.Bounds); // i is the index of the screen to test
Each screen has a Bounds property which returns a Rectangle. You can use the IntersectsWith() function to determine if the form is within the screen.
Also, the Screen class basically provides a function that does this as well:
Screen screen = Screen.FromControl(form);
You can use the 'Screen' object:
System.Windows.Forms.Screen
Start playing with something like this:
Screen[] screens = Screen.AllScreens;
for (int i = 0; i < screens.Length; i++)
{
    Debug.Print(screens[i].Bounds.ToString());
    Debug.Print(screens[i].DeviceName);
    Debug.Print(screens[i].WorkingArea.ToString());
}
It may get you what you need.