Capturing keyboard events in a C# Windows service

I have created a Windows service in C#. How can I call a function whenever a key is pressed, and in that function get the value of the key? I need this for both key up and key down.
My goal is to send an email if a certain series of keys is pressed. For example, if you press h-e-l-l-o, no matter where you type it, even on the desktop, the program should send the email.
I'm also open to implementing this another way. I need a program that runs in the background and does something when a key is pressed.
How can I do such a thing?
An example would be helpful.

As somebody has already commented, a Windows service does not have a foreground window that can receive keyboard focus or Windows keyboard messages. So your only option to achieve exactly what you're after is to use a low-level keyboard hook to receive keyboard events.
Note, however, that these are system-wide events and your service might be flooded with them. They're useful in applications like screen readers and other assistive technology, but I'd recommend trying to think about whether there is some other way to do what you want to do before using this approach. For example, can you just run an application in the system tray and subscribe to WM_HOTKEY messages via calls to RegisterHotKey?
Here's an example of a low-level keyboard hook in C#.
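For illustration, here's a minimal sketch of one (a console app; a low-level hook needs a running message loop, which Application.Run provides, and the logging in the callback is just an example):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class KeyboardHookExample
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;
    private const int WM_KEYUP = 0x0101;

    private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    // Keep the delegate in a field so the GC can't collect it while the hook is installed.
    private static readonly LowLevelKeyboardProc _proc = HookCallback;
    private static IntPtr _hookId = IntPtr.Zero;

    static void Main()
    {
        using (var process = Process.GetCurrentProcess())
        using (var module = process.MainModule)
        {
            _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module.ModuleName), 0);
        }
        Application.Run();              // low-level hooks require a message loop
        UnhookWindowsHookEx(_hookId);
    }

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && (wParam == (IntPtr)WM_KEYDOWN || wParam == (IntPtr)WM_KEYUP))
        {
            // The first field of the KBDLLHOOKSTRUCT behind lParam is the virtual-key code.
            int vkCode = Marshal.ReadInt32(lParam);
            Console.WriteLine("{0}: {1}", wParam == (IntPtr)WM_KEYDOWN ? "down" : "up", (Keys)vkCode);
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam);
    }
}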

The best solution I found:
The author creates a .NET wrapper around the low-level functions in user32.dll, which makes using those functions quite pleasant and enjoyable.
// LowLevelKeyboardListener comes from the linked article; it wraps the WH_KEYBOARD_LL hook.
private LowLevelKeyboardListener _listener;

_listener = new LowLevelKeyboardListener();
_listener.OnKeyPressed += _listener_OnKeyPressed;  // raised for every key pressed system-wide
_listener.HookKeyboard();                          // installs the hook

void _listener_OnKeyPressed(object sender, KeyPressedArgs e)
{
    this.textBox_DisplayKeyboardInput.Text += e.KeyPressed.ToString();
}
Here's the original link.

Related

Is it possible to create a touch application to interact with another application, "sharing" focus between the two?

What I am trying to do is have a helper application that a user can use touch input to affect a second application. I have been able to send keystrokes to the second application, but the problem I am having is when I want to hold a button down.
For example, in my application I want to be able to hold down a button which would simulate a ctrl key down. And while this button is touched, I want to be able to interact with the second application. And if the user lets go of the button, then the ctrl key is released. I can sort of get this working, except that when the user does anything in the second application, the button that was held down is released (because the other application gained focus).
I don't care if I have to go WPF or windows forms, just as long as I can get it working. Windows 8 or 8.1 only is acceptable as well (all clients will be 8.1).
Any help would be appreciated!
A note I added in a comment below:
The second application is one I haven't created; it could be anything, really. A scenario would be my application having a ctrl button that you could press and hold, for example, and then clicking a link in Outlook. Or pressing and holding a shift button in my app while drawing with a pen in Photoshop to draw a straight line. I am able to send keystrokes, but I just can't handle the "hold" touch command.
Since it's been so long, I'm creating a new answer. I did the research, and I'm pretty sure I know what's going on. But I'm going to mention all the official resources I examined before coming to my conclusion.
Possible packaged solutions
First off, the new Windows Input Simulator might fix all your troubles right out of the box. If you need the Windows API, which I'll be talking about below, check PInvoke.net first to see if they have documentation for the call you're trying to make.
The Windows API way
The best place to start is the User Interaction article on MSDN. There are a bunch of new Windows 8 Touch APIs there, but you're probably interested in the legacy Keyboard input article.
Every window an application creates has a window procedure (a.k.a. WndProc) that's responsible for reacting to the messages it cares about (e.g. a button click, a message indicating the window needs to redraw its GUI, or the WM_QUIT message that tells it to gracefully dispose of the resources held by the window). This procedure is also responsible for handling messages from input devices, like mouse clicks and keys on the keyboard.
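As a sketch of the idea in C# (a WinForms form; the message constants are the real Win32 values, the logging is illustrative):

using System;
using System.Windows.Forms;

public class KeyAwareForm : Form
{
    private const int WM_KEYDOWN = 0x0100;
    private const int WM_KEYUP = 0x0101;

    // Every message addressed to this window passes through its window procedure.
    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_KEYDOWN || m.Msg == WM_KEYUP)
        {
            var key = (Keys)m.WParam;   // the virtual-key code travels in wParam
            Console.WriteLine("{0}: {1}", m.Msg == WM_KEYDOWN ? "key down" : "key up", key);
        }
        base.WndProc(ref m);            // let the default procedure handle everything else
    }
}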
In your case, you're more interested in making the window think there's a message from the keyboard when there isn't. That's what the SendInput API call is for; it lets you insert an array of INPUT structures, be they keyboard, mouse, or other input device events, directly into the input queue, bypassing the need for the user to physically act. The call specifically accepts MOUSEINPUT, KEYBDINPUT, or HARDWAREINPUT entries.
For the keyboard, you'll get a message when a key is pressed (WM_KEYDOWN) and when it is released (WM_KEYUP), so to detect hotkeys like CTRL+C you have to watch for a WM_KEYDOWN message for the letter C that arrives after the WM_KEYDOWN for the CTRL key but before its WM_KEYUP message.
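For instance, a sketch of detecting the CTRL+C chord from those messages (the constants are the real values; the plumbing that feeds messages into this method is up to you):

using System;
using System.Windows.Forms;

class ChordDetector
{
    private const int WM_KEYDOWN = 0x0100;
    private bool _ctrlDown;

    // Call this for every keyboard message your window or hook receives.
    public void OnKeyboardMessage(int msg, Keys key)
    {
        if (key == Keys.ControlKey)
            _ctrlDown = (msg == WM_KEYDOWN);        // remember whether CTRL is held
        else if (msg == WM_KEYDOWN && key == Keys.C && _ctrlDown)
            Console.WriteLine("CTRL+C detected");   // C went down while CTRL was still down
    }
}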
Managing input device messages
To simulate input devices, use SendInput to pass along the WM_KEYDOWN and/or WM_KEYUP message(s) to the target window. But don't forget that an application can have more than one window. There are API calls to get the different windows, but it'll be up to you to write code to find the right one before you can use SendInput on it.
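A minimal sketch of injecting a keystroke via SendInput (the struct layout mirrors the native tagged union; TapKey is a made-up helper name):

using System;
using System.Runtime.InteropServices;

static class SendInputExample
{
    private const int INPUT_KEYBOARD = 1;
    private const uint KEYEVENTF_KEYUP = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    private struct KEYBDINPUT
    {
        public ushort wVk, wScan;
        public uint dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    // The native INPUT is a tagged union; overlap the members at the same offset
    // so the struct has the size Windows expects (MOUSEINPUT is the largest member).
    [StructLayout(LayoutKind.Explicit)]
    private struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct INPUT
    {
        public int type;
        public InputUnion u;
    }

    [DllImport("user32.dll", SetLastError = true)]
    private static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    // Presses and releases one key by virtual-key code (e.g. 0x41 for 'A').
    public static void TapKey(ushort virtualKey)
    {
        var inputs = new INPUT[2];
        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].u.ki.wVk = virtualKey;            // key down
        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].u.ki.wVk = virtualKey;
        inputs[1].u.ki.dwFlags = KEYEVENTF_KEYUP;   // key up
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}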
To find out what a window believes about an input device, use GetAsyncKeyState. You may not be able to trust it if you've meddled with APIs related to input devices.
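For example, a quick check of the physical CTRL key state (the helper name is made up):

using System.Runtime.InteropServices;

static class KeyStateExample
{
    private const int VK_CONTROL = 0x11;

    [DllImport("user32.dll")]
    private static extern short GetAsyncKeyState(int vKey);

    // The high bit of the return value is set while the key is physically down.
    public static bool IsCtrlDown()
    {
        return (GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0;
    }
}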
There is also a BlockInput call, which denies all input messages except those injected by SendInput from the thread which blocked input. In most cases, re-enabling input as soon as possible is the right thing to do. The documentation says that if the blocking thread dies, BlockInput is disabled. A similar but less harsh call is EnableWindow, which prevents a window from receiving input focus.
The API for windows includes the ability to register hooks, which let you specify kinds of messages and/or certain windows to be reviewed by a user-specified function.
I would really like to know why you need this to be in two different applications, but here's the best I can think of.
In the applications, you should be able to subscribe to KeyDown, KeyUp, Focus, and Blur (lost focus) events. I'm not clear on whether this is an actual button or touch input, but whatever the case may be, assume KeyDown is whatever event fires when the user starts "simulating" the ctrl key being pressed, and KeyUp is whatever event fires when the user ceases to "simulate" the ctrl key being down.
Set up App1 so that when it gains focus, it communicates the button state to App2: pressed or not pressed. Every time KeyDown or KeyUp fires, send a message to App2.
When App1's Blur event fires, stop sending messages to App2. Even though App1 will no longer have the button depressed, App2 won't know it and can continue to behave as though the button was depressed until App2 regains focus and can go back to sending messages again.
If it were me, I would give App2 all the same logic as App1, so the moment App2 gets Focus, it begins handling the up/down state itself. You may want to have the two applications do some kind of "handshake" when a blur/focus event happens to make sure the state is preserved when switching between them. When App2 gets the Blur event, it transfers the state to App1 and they shake hands again, so App1 knows it's now responsible for managing the state.
This is basically having the apps cooperate via "tag-team." They keep some state synchronized between each other, "handing off" the responsibility when the blur/focus events fire. Since you cannot know that Blur will fire on one app before Focus fires on the other, you will need to use the same mechanism that communicates the state of this "simulated button" to coordinate the apps so they never interfere with each other.
Something tells me that this doesn't completely solve your problem, but hearing why it doesn't will certainly get everyone closer to thinking out the rest of the way. Let me know the twist ending, eh?

Creating a process in the background that will listen to the keyboard

I have a project that runs in the background in a different process, and I want it to be able to react to the keyboard everywhere. For example, I run the project and afterwards do other things on the computer such as browsing, Facebook, or watching movies, and every time I press F9 I want my project to show up, the same way you press a combination of keys to invoke Babylon. I want to implement this in C#, but I have no idea how to begin.
You can register a hotkey with the RegisterHotKey API function. You can see an example of its usage from C# here.
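A minimal sketch of that approach (a WinForms window whose handle receives WM_HOTKEY; the hotkey id of 1 is arbitrary):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class HotkeyForm : Form
{
    private const int WM_HOTKEY = 0x0312;
    private const uint VK_F9 = 0x78;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

    [DllImport("user32.dll")]
    private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    public HotkeyForm()
    {
        // 0 for fsModifiers means no CTRL/ALT/SHIFT is required along with F9.
        RegisterHotKey(this.Handle, 1, 0, VK_F9);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_HOTKEY)
        {
            // F9 was pressed anywhere in the system; bring the app forward.
            this.Show();
            this.Activate();
        }
        base.WndProc(ref m);
    }

    protected override void Dispose(bool disposing)
    {
        UnregisterHotKey(this.Handle, 1);
        base.Dispose(disposing);
    }
}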
I think you need to write a system-wide keyboard hook, check here for details:
https://stackoverflow.com/a/1764434/559144
How do I grab events for all applications? An example system-wide hook.

Capturing fingerprint reader events when the application is not active

I have an application that validates users through a fingerprint reader. The validation is done in a method that I subscribed to handle the event; it looks like this:
FingerprintVerificationControl.OnComplete += new DPFP.Gui.Verification.VerificationControl._OnComplete(FingerprintVerificationControl_OnComplete);
Everything goes well while I'm working with the application, i.e. while it has focus. But I have put it in the system tray using a NotifyIcon control, associated with a ContextMenu control to restore and close the app, and while it is in the system tray (it is not the active application) I get no response from the fingerprint reader; the event for reading the user's finger never fires.
My question is: what is the best way to manage that? Is it even possible? I found that I could do it with a Windows service; other sites say to use the Win32 API; others have examples, but only for keyboard events like key presses and so on. Any ideas would be appreciated.
I have found that making a Windows service is the best way to do stuff like that. However, I don't know much about the Win32 API, because I've been able to do everything I need without it.
As for the keyboard events, I have tried those and found that they only work when the application has focus, so they are unusable here. And services are just useful in general. You can have your service talk to your application locally (as sketched below) so you don't have to rework the entire application.
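For example, a minimal sketch of a one-way notification from a service to a desktop app using named pipes (the pipe name and the message are made up for illustration):

using System.IO;
using System.IO.Pipes;

static class PipeExample
{
    // In the service: publish an event to whoever is listening on the pipe.
    public static void NotifyClient()
    {
        using (var server = new NamedPipeServerStream("FingerprintEvents", PipeDirection.Out))
        {
            server.WaitForConnection();
            using (var writer = new StreamWriter(server) { AutoFlush = true })
            {
                writer.WriteLine("VERIFIED");   // whatever your app needs to react to
            }
        }
    }

    // In the tray application: block until the service reports an event.
    public static string WaitForEvent()
    {
        using (var client = new NamedPipeClientStream(".", "FingerprintEvents", PipeDirection.In))
        {
            client.Connect();
            using (var reader = new StreamReader(client))
            {
                return reader.ReadLine();       // e.g. "VERIFIED"
            }
        }
    }
}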
I hope this is helpful.

How to receive a click location from another process in C#?

My C# application needs to receive the click position from another process, and I then need to show it in my app. But I don't know how I would implement this.
Could someone help me figure out how to do this?
Thanks so much
What you need is called a "hook". Windows allows you to hook both keyboard and mouse events. Basically, Windows works by delivering the appropriate movements and clicks of the mouse (and keys typed) to the application that has focus.
With a hook, however, you receive all of the events, not just those relevant to your app. Once you have the hook established, you can do what you want with the information.
Note that you are going down to the Windows OS level here, and if you do the wrong thing, you can leak handles and you can also put Windows into a bad state.
There is a great tutorial here from MS Technet that describes how to do this in C#.
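To make the idea concrete, here's a minimal sketch of a low-level mouse hook that reports click positions (same caveats as any global hook: it needs a message loop, and the logging is illustrative):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class MouseHookExample
{
    private const int WH_MOUSE_LL = 14;
    private const int WM_LBUTTONDOWN = 0x0201;

    [StructLayout(LayoutKind.Sequential)]
    private struct POINT { public int x, y; }

    [StructLayout(LayoutKind.Sequential)]
    private struct MSLLHOOKSTRUCT
    {
        public POINT pt;                        // screen coordinates of the event
        public uint mouseData, flags, time;
        public IntPtr dwExtraInfo;
    }

    private delegate IntPtr LowLevelMouseProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelMouseProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    private static readonly LowLevelMouseProc _proc = HookCallback; // keep alive for the GC
    private static IntPtr _hookId = IntPtr.Zero;

    static void Main()
    {
        using (var process = Process.GetCurrentProcess())
        using (var module = process.MainModule)
        {
            _hookId = SetWindowsHookEx(WH_MOUSE_LL, _proc, GetModuleHandle(module.ModuleName), 0);
        }
        Application.Run();                      // hooks require a message loop
        UnhookWindowsHookEx(_hookId);
    }

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_LBUTTONDOWN)
        {
            var info = (MSLLHOOKSTRUCT)Marshal.PtrToStructure(lParam, typeof(MSLLHOOKSTRUCT));
            Console.WriteLine("Click at {0}, {1}", info.pt.x, info.pt.y);
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam);
    }
}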

Send keys to WPF Browser control

Can I programmatically send [UserID]{TAB}[Password]{CARRIAGE RETURN} to a WebBrowser control that has a user ID field, a password field, and a sign-in button? I want to use my own virtual keyboard in my application. Any tips here?
Sorry for the late answer, but I've just finished a similar project, and as part of the work I'm in the process of open-sourcing two projects on CodePlex.
The first is the Windows Input Simulator which is a simple .NET wrapper around the Win32 SendInput written in C#.
The second is a very customisable on screen keyboard or touch screen keyboard control and toolkit called WpfKB and will be available as an initial release tomorrow. Hope these are of help to you or anyone else who comes across the projects.
I recently had to implement automatic authentication through a WPF browser control, and I looked into simulating keystrokes. I didn't need a full virtual keyboard so interacting with the DOM of the login page through IHTMLDocument2 ended up being the best approach, but I looked into keystroke automation before making that decision and found a few options.
You can raise the appropriate routed events on the control as described in Simulating basic keyboard events and Simulating text input. I don't know of any specific problems with this approach but I opted against it simply because I wasn't comfortable simulating input without looking at how the CLR handles the actual input, and without at least raising the complete lifetime (PreviewKeyDown, KeyDown, PreviewKeyUp, KeyUp) I was wary of unintended consequences.
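For reference, the routed-event approach looks roughly like this (a sketch that raises only KeyDown for TAB on the focused element, so the lifetime caveat above applies; it also assumes the focused element is a Visual):

using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

static class RoutedKeyExample
{
    // Raises a KeyDown routed event for TAB on whatever element has keyboard focus.
    public static void SimulateTab()
    {
        var target = Keyboard.FocusedElement;
        if (target == null) return;

        var args = new KeyEventArgs(
            Keyboard.PrimaryDevice,
            PresentationSource.FromVisual((Visual)target), // the input source the event claims
            0,                                             // timestamp
            Key.Tab)
        {
            RoutedEvent = Keyboard.KeyDownEvent
        };
        target.RaiseEvent(args);
    }
}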
Take a look at WOSK on CodePlex. It's a good example of how to invoke the Win32 keybd_event and SendInput functions to generate low-level input messages via the Managed Windows API to simulate input. There's some unnecessary fluff (e.g. transparency) and some odd WPF usage, such as using a CommandParameter with a Click event instead of a Command on the buttons, but the general approach is sane and it's reasonably complete.
You can also invoke the windows on-screen keyboard as alluded to by Jeroen. I didn't try this because I didn't need a virtual keyboard, but if you're going to call into Win32 anyway, you might as well follow the WOSK model and build the UI the way you want it.
