I created a keypad app that stays on top, but does not take the focus so that on a touch screen it will forward whatever keys you press to the active application via SendKeys.
It works perfectly with every application I have tried it with... except, of course, for the one I actually need it to work with: a Point of Sale (POS) application. The POS application lets the user type item codes on the keyboard, but it doesn't have a good keypad for touchscreens, which is why I'm trying to create an external one for it (since I don't have access to the POS application's code).
It actually does work when you first try it, but then it's pretty sporadic. Using the keyboard directly always works, so I'm not sure why SendKeys only works sometimes with this application. I've tried implementing it several ways: sending the keys as they are pressed, sending them all together when the user presses the Enter button on the keypad, and copying the keys to the clipboard and then using SendKeys to send Ctrl-V followed by Enter.
What other options do I have to simulate a key press to another application? SendKeys doesn't seem to perfectly simulate key presses, so is there a lower level mechanism I can tap into?
I should mention that when it doesn't work, what happens is that I get a beep from the POS application as though I'd pressed an invalid key. So it's not that it isn't getting some kind of input but clearly it isn't getting the key I'm sending it the same way it would from an actual keyboard.
I found this Windows Input Simulator: https://inputsimulator.codeplex.com/
Super easy to use and way, way, way better than SendKeys. And as a bonus, in addition to letting you simulate input, it also lets you set low level keyboard/mouse hooks.
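For anyone curious what using it looks like, here's a minimal sketch assuming the WindowsInput NuGet package that the project ships as; the item code "12345" is just an example value:

```csharp
// Sketch assuming the WindowsInput (Windows Input Simulator) package.
using WindowsInput;
using WindowsInput.Native;

var sim = new InputSimulator();

// Type an item code as synthesized key events rather than SendKeys messages.
sim.Keyboard.TextEntry("12345");

// Follow with an Enter key press, the way a physical keyboard would send it.
sim.Keyboard.KeyPress(VirtualKeyCode.RETURN);
```

Because the library goes through SendInput rather than window messages, applications that poll keyboard state tend to accept it where SendKeys fails.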
Related
Summary
How do you prevent the Shift key, which is part of a global hotkey, from interfering with text sent to the active window in Windows, when System.Windows.Forms.SendKeys.Send("abc") is called from another process upon activation of a Shift-containing global hotkey?
The problem
The window that is active when the hotkey is activated misinterprets the text sent to it, since the Shift key, which is part of the hotkey, is still pressed while the window receives and processes the text. It is humanly impossible to release the Shift key fast enough for it to no longer be pressed when the text is received.
It is not possible to change the hotkey to not contain Shift, and even if it were possible, the Ctrl key would interfere with the processing in a similar way.
The sending application is run as a normal user without admin privileges, and UAC is enabled.
There is a background application running in windows. It is a normal .NET C# WinForm application, started by the user and running without a visible GUI.
The background application has registered a global hotkey, that is, a hotkey that can be pressed anywhere in Windows, no matter which application is currently active.
The hotkey is <shift>+F9, registered with RegisterHotKey(hWnd, hotkeyId, 4 /*MOD_SHIFT*/, 120 /*Keys.F9*/);
When the hotkey is activated, the background application calls System.Windows.Forms.SendKeys.Send("abc")
The active window receives the text "abc", but since the Shift key from the hotkey is still pressed, the result ends up as "ABC".
The question
What are the possible ways to make sure the text sent ends up the same after the receiving window gets, processes, and interprets it?
That is, when sending "abc" to a running instance of notepad.exe by pressing <shift>+F9, the text showing up in notepad should be "abc" and not "ABC".
As far as I can tell, you have two options:
1. Check the Shift modifier status, and only send your keys after you verify that it is not pressed.
2. Don't use SendKeys.
I think that #1 is pretty self-explanatory, but keep in mind that even if you verify that Shift is not pressed before you start sending keys, it's possible that the user will press Shift or another modifier while you are sending keys, or, even worse, that the active window will lose focus and stop the process completely. If you're designing a program that simply inserts user-defined text after a hotkey press and the user is expecting it, then this is not a big deal and is the appropriate way to do this.
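A sketch of option #1, assuming a short polling wait is acceptable; SendWhenModifiersReleased is just an illustrative helper name:

```csharp
using System.Threading;
using System.Windows.Forms;

static class HotkeySender
{
    // Wait (briefly) until Shift/Ctrl/Alt are all released before sending.
    // Control.ModifierKeys reflects the current modifier-key state.
    public static void SendWhenModifiersReleased(string keys, int timeoutMs = 2000)
    {
        int waited = 0;
        while (Control.ModifierKeys != Keys.None && waited < timeoutMs)
        {
            Thread.Sleep(10);
            waited += 10;
        }

        // SendWait blocks until the receiving window has processed the keys.
        SendKeys.SendWait(keys);
    }
}
```

The timeout guards against the user simply holding Shift forever; if it expires, the keys are sent anyway, which may or may not be what you want.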
You have a few options for #2. I'd suggest looking into SendMessage with an appropriate message (WM_CHAR, WM_SETTEXT, WM_KEYDOWN, etc.) to send a message directly to the window in question.
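A sketch of the SendMessage route, using classic Notepad's "Edit" child window as an illustrative target (the window-class names are specific to old Notepad; other applications need their own discovery logic):

```csharp
using System;
using System.Runtime.InteropServices;

static class DirectText
{
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr FindWindow(string className, string windowTitle);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr FindWindowEx(IntPtr parent, IntPtr childAfter,
                                      string className, string windowTitle);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, string lParam);

    const uint WM_SETTEXT = 0x000C;

    // Replace Notepad's edit-control text directly, bypassing the keyboard
    // queue entirely, so modifier-key state is irrelevant.
    public static void SetNotepadText(string text)
    {
        IntPtr notepad = FindWindow("Notepad", null);
        if (notepad == IntPtr.Zero) return;

        IntPtr edit = FindWindowEx(notepad, IntPtr.Zero, "Edit", null);
        if (edit != IntPtr.Zero)
            SendMessage(edit, WM_SETTEXT, IntPtr.Zero, text);
    }
}
```

WM_SETTEXT replaces the whole content; for appending you'd look at EM_REPLACESEL, or WM_CHAR per character.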
After everything is said, it's important to realize that this is a really uncertain process. You can never guarantee that simulating keyboard input or sending key messages will register the way you'd like, and the outcome largely depends on the application you're sending messages to (especially in the case of SendMessage).
During my search, I've seen many different versions of this question, yet somehow none of the solutions provided solved my problem.
It's really quite simple, I just want to simulate holding down a key on the keyboard through code. I want to try and make a character in a game walk forward constantly, so I just need to make a program that simulates holding down the 'W' key. I've seen a lot of people were using Windows Forms for this, I don't know if it actually is the right application but if it works I'm happy.
Just very quickly re-sending the key doesn't work: simply calling SendKeys.Send("W") every 30 ms does not make my character move in-game. So, what can I do to simulate holding down a key on the keyboard?
Here you go, a .NET library that can simulate key presses:
https://inputsimulator.codeplex.com/
Edit
For the keys to be received, the external program must be the active window, and your program needs to run in the background.
Windows Forms provides the SendKeys method, which can simulate text entry, but not actual key strokes. Windows Input Simulator can be used in WPF, Windows Forms and Console Applications to synthesize or simulate any keyboard input, including Control, Alt, Shift, Tab, Enter, Space, Backspace, the Windows Key, Caps Lock, Num Lock, Scroll Lock, Volume Up/Down and Mute, Web, Mail, Search, Favorites, Function Keys, Back and Forward navigation keys, Programmable keys, and any other key defined in the Virtual Key table. It provides a simple API to simulate text entry, key down, key up, key press, and complex modified key strokes and chords.
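For the "holding a key" part specifically, the library exposes separate KeyDown and KeyUp calls, so a held key is just a down event, a gap, and an up event. A sketch, with the three-second hold chosen arbitrarily:

```csharp
using System.Threading;
using WindowsInput;
using WindowsInput.Native;

var sim = new InputSimulator();

// Press and hold W: send a key-down now...
sim.Keyboard.KeyDown(VirtualKeyCode.VK_W);

// ...the game polls keyboard state and sees W held for this whole interval...
Thread.Sleep(3000);

// ...then release it.
sim.Keyboard.KeyUp(VirtualKeyCode.VK_W);
```

This works because games that read keyboard state (rather than key messages) see the simulated key as genuinely down until the KeyUp is sent.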
I have been building a very small game in the Windows API, and in the main message loop I use GetAsyncKeyState() to test if a user is pressing the arrow buttons. I use this instead of WM_KEYDOWN because with WM_KEYDOWN there is an initial pause after the first press, and I don't want to modify a user's settings. My antivirus program flags the game as a keylogger; is there an alternative way around this?
How is the anti-virus program supposed to guess that you are not using GetAsyncKeyState() to spy on the keyboard and log keys? You tell it of course, make an exclusion. If you're worried that your future customers are not so easily convinced then go back to using WM_KEYDOWN/UP. Use an array of 256 bools to keep track of the key state. Set it to true on DOWN, regardless of how many you get, false on UP. Also check if the scanner is happy when you stop calling the API function when your app loses focus. Pay attention to WM_ACTIVATEAPP.
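A sketch of that WM_KEYDOWN/UP bookkeeping in a WinForms shell (the same pattern applies verbatim in a native window procedure); the 256-entry array is indexed by virtual-key code:

```csharp
using System;
using System.Windows.Forms;

class GameForm : Form
{
    // One flag per virtual-key code; true while the key is held down.
    readonly bool[] keyDown = new bool[256];

    const int WM_KEYDOWN     = 0x0100;
    const int WM_KEYUP       = 0x0101;
    const int WM_ACTIVATEAPP = 0x001C;

    protected override void WndProc(ref Message m)
    {
        switch (m.Msg)
        {
            case WM_KEYDOWN:
                // Auto-repeat delivers many WM_KEYDOWNs; setting the flag
                // repeatedly is harmless, so the repeat delay never matters.
                keyDown[(int)m.WParam & 0xFF] = true;
                break;
            case WM_KEYUP:
                keyDown[(int)m.WParam & 0xFF] = false;
                break;
            case WM_ACTIVATEAPP:
                // Losing focus means we would miss key-ups: clear everything.
                if (m.WParam == IntPtr.Zero)
                    Array.Clear(keyDown, 0, keyDown.Length);
                break;
        }
        base.WndProc(ref m);
    }

    // The game loop polls this instead of GetAsyncKeyState.
    public bool IsHeld(Keys key) => keyDown[(int)key & 0xFF];
}
```

Because only the first WM_KEYDOWN matters and the flag stays set until WM_KEYUP, the initial auto-repeat pause the asker wanted to avoid has no effect.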
What I am trying to do is have a helper application that a user can use touch input to affect a second application. I have been able to send keystrokes to the second application, but the problem I am having is when I want to hold a button down.
For example, on my application I want to be able to hold down a button which would simulate a Ctrl key down. And while this button is touched, I want to be able to interact with the second application. And if the user lets go of the button, then the Ctrl key is released. I can kind of get this working, except when the user does anything in the second application, the button that was held down is unpressed (because the other application gained focus).
I don't care if I have to go WPF or windows forms, just as long as I can get it working. Windows 8 or 8.1 only is acceptable as well (all clients will be 8.1).
Any help would be appreciated!
Note I added to a comment below.
The second application is one I haven't created; it could be anything, really. A scenario would be my application having a Ctrl button that you could press and hold, for example, while clicking a link in Outlook. Or pressing and holding a Shift button in my app while drawing with a pen in Photoshop to draw a straight line. I am able to send keystrokes, but just can't handle the "hold" touch command.
Since it's been so long, I'm creating a new answer. I did the research, and I'm pretty sure I know what's going on. But I'm going to mention all the official resources I examined before coming to my conclusion.
Possible packaged solutions
First off, the new Windows Input Simulator might fix all your troubles right out of the box. If you need the Windows API, which I'll be talking about below, check PInvoke.net first to see if they have documentation for the call you're trying to make.
The Windows API way
The best place to start is the User Interaction article on MSDN. There's a bunch of new Win8 touch APIs there, but you're probably interested in the legacy Keyboard input article.
Every window for an application must have a Window Procedure (a.k.a. WindowProc) that's responsible for reacting to messages it cares about (e.g. a button click, a message indicating the window needs to redraw its GUI, or the WM_QUIT message that alerts it to gracefully dispose of the resources held by the window). This procedure is also responsible for handling messages from input devices, like mouse clicks and keys on the keyboard.
In your case, you're more interested in making the window think there's a message from the keyboard when there isn't. That's what the SendInput API call is for; it lets you insert an array of INPUT messages, whether keyboard, mouse, or other input device, directly into the queue, bypassing the need for the user to physically act. This API call specifically accepts MOUSEINPUT, KEYBDINPUT, or HARDWAREINPUT messages.
For the keyboard, you'll get a message when a key is pressed (WM_KEYDOWN) and when it is released (WM_KEYUP), so to detect hotkeys like CTRL+C, you have to watch for a WM_KEYDOWN message for the letter C that was received after a WM_KEYDOWN for the CTRL key but before its WM_KEYUP message.
Managing input device messages
To simulate input devices, use SendInput to pass along the WM_KEYDOWN and/or WM_KEYUP message(s) to the target window. But don't forget that an application can have more than one window. There are API calls to get the different windows, but it'll be up to you to write the code to find the right one before you can use SendInput on it.
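A sketch of the SendInput call from C# via P/Invoke, sending one key-down/key-up pair; the struct layout mirrors the native INPUT union (MOUSEINPUT, the largest member, sizes it on both 32- and 64-bit):

```csharp
using System;
using System.Runtime.InteropServices;

static class KeySender
{
    const uint INPUT_KEYBOARD  = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT
    {
        public ushort wVk, wScan;
        public uint dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;  // largest member sizes the union
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT
    {
        public uint type;
        public InputUnion u;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    // Send a key-down followed by a key-up for one virtual-key code,
    // e.g. 0x57 for 'W'. The input goes to whichever window has focus.
    public static void Tap(ushort virtualKey)
    {
        var inputs = new INPUT[2];
        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].u.ki.wVk = virtualKey;
        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].u.ki.wVk = virtualKey;
        inputs[1].u.ki.dwFlags = KEYEVENTF_KEYUP;
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}
```

To hold a key, send only the key-down entry and send the key-up later when the touch button is released.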
To find out what a window believes about an input device, use GetAsyncKeyState. You may not be able to trust it if you've meddled with APIs related to input devices.
There is a BlockInput call which denies all input except SendInput calls from the thread that blocked it. In most cases, re-enabling input as soon as possible is the right thing. The documentation says that if the blocking thread dies, BlockInput is disabled. A similar but less harsh call is EnableWindow, which prevents a window from receiving input focus.
The API for windows includes the ability to register hooks, which let you specify kinds of messages and/or certain windows to be reviewed by a user-specified function.
I would really like to know why you need this to be in two different applications, but here's the best I can think of.
In the applications, you should be able to subscribe to KeyDown, KeyUp, Focus, and Blur (lost focus). I'm not clear on if this is an actual button or if its touch input, but whatever the case may be, assume KeyDown is whatever event fires when the user is "simulating" the ctrl key being pressed, and KeyUp is whatever event fires when the user is ceases to "simulate" the ctrl key being down.
Set up the App1 so when it gains focus, it communicates with the App2 the state: depressed, or not depressed. Every time KeyDown or KeyUp fires, send a message to App2.
When App1's Blur event fires, stop sending messages to App2. Even though App1 will no longer have the button depressed, App2 won't know it and can continue to behave as though the button was depressed until App2 regains focus and can go back to sending messages again.
If it were me, I would have App2 have all the same logic as App1, so the moment App2 gets focus, it begins handling the up/down state itself. You may want to have the two applications do some kind of "handshake" when a blur/focus event happens to make sure the state is preserved when switching between them. When App2 gets the Blur event, it transfers the state to App1 and they shake hands again, so App1 knows it's now responsible for managing the state.
This is basically having the apps cooperate via "tag-team." They keep some state synchronized between each other, "handing off" the responsibility when the blur/focus events fire. Since you cannot know that Blur will fire on one app before Focus fires on the other, you will need to use the same mechanism that communicates the state of this "simulated button" to coordinate the apps so they never interfere with each other.
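One hypothetical channel for those state messages between the apps is a named pipe; the pipe name "ctrl-state" and the down/up message format here are made up for illustration:

```csharp
using System.IO;
using System.IO.Pipes;

static class StateChannel
{
    // App1 side: push the simulated-key state to App2 over a named pipe.
    // App2 would run a NamedPipeServerStream with the same name and react
    // to each "down"/"up" line it reads.
    public static void SendState(bool pressed)
    {
        using var pipe = new NamedPipeClientStream(".", "ctrl-state",
                                                   PipeDirection.Out);
        pipe.Connect(timeout: 500);
        using var writer = new StreamWriter(pipe);
        writer.WriteLine(pressed ? "down" : "up");
    }
}
```

Any other IPC mechanism (sockets, window messages, memory-mapped files) would serve equally well; the important part is that both apps agree on who owns the state after each focus change.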
Something tells me that this doesn't completely solve your problem, but hearing why it doesn't will certainly get everyone closer to thinking out the rest of the way. Let me know the twist ending, eh?
I know SendKeys can emulate typing and send individual keystrokes but I'm looking for a way to emulate holding down a key. My goal is an app that acts as a joypad for a windowed game. Imagine holding a tablet PC and having arrow keys and basic buttons on either side of the screen and pressing them emulates keyboard presses which a game then receives. That's my goal.
When a finger presses down on a button, I want to translate that to pressing down on a keyboard key. Upon releasing that button, I want to release that key. I know this'll probably require some low level code and I'm ok with that.
NOTE: I do NOT want to emulate these events in my own app, but system wide. I'm writing this for an XNA game of mine and it's not listening for direct, focus key events, it's checking the state of the keyboard, (as I assume most games do) and responding to that. I want my app to trick my game into thinking a key is held down at the proper times.
If you want to emulate it system-wide, you can use SendInput from user32.dll. There is more detailed information on how to do that here.
However, it is very rare that you will actually need that. Instead, you should try to handle this within your own app (that seems more like what you need in this case). You can simply make variables for all the keys you want to read, and then set those variables from both the keyboard AND the tablet buttons.
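A sketch of that merging idea in XNA terms, since the asker's game is XNA; `padForwardHeld` and `player.MoveForward` are hypothetical names standing in for the touch button's flag and the game's movement code:

```csharp
using Microsoft.Xna.Framework.Input;

// Inside the game's Update(): merge the real keyboard with the on-screen pad.
// padForwardHeld is a flag set true in the touch button's press handler
// and false in its release handler.
KeyboardState kb = Keyboard.GetState();
bool moveForward = kb.IsKeyDown(Keys.W) || padForwardHeld;

if (moveForward)
    player.MoveForward();  // hypothetical per-frame movement method
```

Because the flag is checked every frame alongside the keyboard, "holding" the touch button behaves exactly like holding W, with no input simulation needed at all.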
EDIT
oh, this link will probably be better...