Converting WPF KeyDown events to a string - C#

I have an application that can receive key strokes from a barcode scanner (which to Windows just looks like a USB keyboard).
My requirement is for the user to be able to use the barcode scanner anywhere in the application (any window, any tab) and have it react accordingly. I already have a version of this working by monitoring for PreviewTextInput events in my App.xaml.cs and firing off my own custom BarcodeArrived event. The problem with this is that if the user has put focus on a control that does not accept Text Input, then PreviewTextInput never fires.
PreviewKeyDown does always fire, but the data it presents is ugly, and I can't seem to find any way of translating KeyDown events to a normalized string. I found a Stack Overflow article, "Convert received keys in PreviewKeyDown to a string", that seemed promising, but it does not appear that I can wire this up in my App.xaml.cs (there is no dependency object).
Any thoughts or suggestions would be great. Or, as an alternative, at least a way of detecting which keyboard the input is coming from, at which point using PreviewKeyDown might be a viable option because I can assume a "dumbed down" input stream.
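For anyone landing here, a hedged sketch of one way to do this follows: register an application-wide class handler (which needs no DependencyObject instance) and translate each Key to a character via the Win32 ToUnicode API. The handler and helper names below are my own invention, not from the linked post.

using System.Runtime.InteropServices;
using System.Text;
using System.Windows;
using System.Windows.Input;

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);
        // Class handlers are registered per type, so no DependencyObject
        // instance is needed; every Window will tunnel PreviewKeyDown here.
        EventManager.RegisterClassHandler(typeof(Window),
            Keyboard.PreviewKeyDownEvent, new KeyEventHandler(OnAnyPreviewKeyDown));
    }

    private void OnAnyPreviewKeyDown(object sender, KeyEventArgs e)
    {
        char? c = KeyToChar(e.Key);
        if (c.HasValue)
        {
            // Accumulate into a barcode buffer and raise BarcodeArrived here.
        }
    }

    [DllImport("user32.dll")]
    private static extern bool GetKeyboardState(byte[] lpKeyState);

    [DllImport("user32.dll")]
    private static extern uint MapVirtualKey(uint uCode, uint uMapType);

    [DllImport("user32.dll")]
    private static extern int ToUnicode(uint wVirtKey, uint wScanCode,
        byte[] lpKeyState, StringBuilder pwszBuff, int cchBuff, uint wFlags);

    // Maps a WPF Key to the character it would produce under the current
    // keyboard state (Shift etc.). Returns null for non-character keys.
    // Caveat: ToUnicode can disturb dead-key state in some layouts.
    private static char? KeyToChar(Key key)
    {
        var state = new byte[256];
        GetKeyboardState(state);
        uint vk = (uint)KeyInterop.VirtualKeyFromKey(key);
        uint scan = MapVirtualKey(vk, 0); // 0 = MAPVK_VK_TO_VSC
        var buffer = new StringBuilder(2);
        int rc = ToUnicode(vk, scan, state, buffer, buffer.Capacity, 0);
        return rc == 1 ? buffer[0] : (char?)null;
    }
}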

C# change event order

Edit:
OK, sorry for not giving the real scenario.
I actually have a DataGridView created programmatically, with two events attached: MouseClick and ColumnHeaderMouseClick.
Currently, whenever the user clicks on a column header, MouseClick is triggered first, followed by ColumnHeaderMouseClick.
Can I change the order of the triggers? Or, inside the MouseClick event, can I tell whether the user clicked on a column header or somewhere else?
I don't know of any way to do that. The events occur according to the actions that are actually taking place. In other words, that would be like asking whether you can walk through a door without opening it first. It doesn't really make sense in that context.
Could you possibly swap the code you are calling for MouseDown and MouseClick? Or just use MouseClick and execute the actions in whatever order you like there?
You cannot do this; it does not make logical sense. The MouseDown event necessarily occurs prior to the MouseClick event because the mouse button had to go down in order to initiate a click. When the mouse button goes down, a MouseDown event is raised. The MouseClick event cannot be raised until some time after that.
The order of the mouse events is explicitly documented on MSDN and cannot be modified.
And you say that this was just an example, that you are working on other events. Unfortunately, the answer will be the same regardless. The order in which events are raised is something decided by the programmer who wrote the code that raises those events. There is no mechanism for you, the consumer of the library, to change that order.
As with the mouse events discussed above, the MSDN documentation lists the order of all significant events raised by the WinForms library.
Of course, if you are writing the code that raises the events, you can always modify it to raise them in whatever order you want. But I suspect that much is obvious and not why you are asking this question.
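Purely to illustrate that last point, here is a made-up sketch: a control author deciding that their own HeaderClicked event fires before the standard MouseClick.

using System;
using System.Windows.Forms;

public class HeaderAwareGrid : DataGridView
{
    // Hypothetical event, raised before MouseClick purely by the author's choice.
    public event EventHandler HeaderClicked;

    protected override void OnMouseClick(MouseEventArgs e)
    {
        HitTestInfo hit = HitTest(e.X, e.Y);
        if (hit.Type == DataGridViewHitTestType.ColumnHeader)
            HeaderClicked?.Invoke(this, EventArgs.Empty); // raised first...
        base.OnMouseClick(e); // ...then MouseClick goes out to subscribers
    }
}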
Most C# implementations will typically fire event handlers in the order they were registered to each event. However, this is not enforced!
It could change at any time and as such you should not create a dependency on the order of event handlers in your application.
I strongly suggest you take a step back and review your application design for ways to remove this dependency.
As many have suggested in their answers, trying to change the event order is not recommended, for multiple reasons.
In order to achieve what you want, you would have to re-implement the Windows Forms controls of the .NET Framework. You could also try to hijack the Windows message processing by overriding the WndProc procedure and processing the messages provided by the OS directly, but you will learn that the event-firing order follows the order in which the OS sends the input from the device (mouse) to the affected windows/controls.
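To make the WndProc route concrete, here is a minimal sketch of message snooping in WinForms; note it only observes the OS-dictated order, it cannot reorder the control's own events:

using System.Windows.Forms;

public class SnoopingGrid : DataGridView
{
    const int WM_LBUTTONDOWN = 0x0201;
    const int WM_LBUTTONUP = 0x0202;

    protected override void WndProc(ref Message m)
    {
        // The OS always delivers button-down before button-up; the control's
        // MouseDown/MouseClick events are raised from these messages in turn.
        if (m.Msg == WM_LBUTTONDOWN) { /* inspect or log here */ }
        base.WndProc(ref m); // let the control raise its events normally
    }
}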
I got the solution already: I get the index of the row, and if it is -1 that means it is a header, so I can do the if/else there. There is no need for different events; the CellClick event alone can handle both scenarios.
Sorry for asking in such a complicated way when the solution is so simple. Thanks for helping.
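For completeness, a minimal sketch of that solution, assuming a standard DataGridView named dataGridView1:

// One handler covers both cases: e.RowIndex is -1 when the click landed
// on a column header, otherwise it is the index of the clicked row.
dataGridView1.CellClick += (sender, e) =>
{
    if (e.RowIndex == -1)
    {
        // column header clicked
    }
    else
    {
        // regular cell clicked
    }
};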

Is it possible to create a touch application to interact with another application, "sharing" focus between the two?

What I am trying to do is build a helper application in which the user can use touch input to affect a second application. I have been able to send keystrokes to the second application, but the problem I am having is when I want to hold a button down.
For example, in my application I want to be able to hold down a button which simulates the ctrl key being down. And while this button is touched, I want to be able to interact with the second application. And if the user lets go of the button, then the ctrl key is released. I can kind of get this working, except that when the user does anything in the second application, the button that was held down is released (because the other application gained focus).
I don't care if I have to go WPF or windows forms, just as long as I can get it working. Windows 8 or 8.1 only is acceptable as well (all clients will be 8.1).
Any help would be appreciated!
A note I added in a comment below:
The second application is one I haven't created; it could be anything, really. A scenario would be my application having a ctrl button that you could press and hold, for example, and in Outlook click a link. Or pressing and holding a shift button in my app while drawing with a pen in Photoshop to draw a straight line. I am able to send keystrokes, but I just can't handle the "hold" touch command.
Since it's been so long, I'm creating a new answer. I did the research, and I'm pretty sure I know what's going on. But I'm going to mention all the official resources I examined before coming to my conclusion.
Possible packaged solutions
First off, the new Windows Input Simulator might fix all your troubles right out of the box. If you need the Windows API, which I'll be talking about below, check PInvoke.net first to see if they have documentation for the call you're trying to make.
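As a taste of that packaged route, usage looks roughly like this (API names recalled from the library's fluent interface, so treat them as approximate):

// Simulate CTRL+C as one modified keystroke via the InputSimulator package.
using WindowsInput;
using WindowsInput.Native;

var sim = new InputSimulator();
sim.Keyboard.ModifiedKeyStroke(VirtualKeyCode.CONTROL, VirtualKeyCode.VK_C);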
The Windows API way
The best place to start is the User Interaction article on MSDN. There are a bunch of new Windows 8 Touch APIs there, but you're probably interested in the legacy Keyboard input article.
Every window of an application must have a Window Procedure (a.k.a. WndProc) that's responsible for reacting to the messages it cares about (e.g. a button click, a message indicating the window needs to redraw its GUI, or the WM_QUIT message that alerts it to gracefully dispose of the resources held by the window). This procedure is also responsible for handling the messages from input devices, like mouse clicks and keys on the keyboard.
In your case, you're more interested in making the window think there's a message from the keyboard when there isn't. That's what the SendInput API call is for; it lets you insert an array of INPUT structures, be they keyboard, mouse, or other input device, directly into the queue, bypassing the need for the user to physically act. The call specifically accepts MOUSEINPUT, KEYBDINPUT, or HARDWAREINPUT payloads.
For the keyboard, you'll get a message when a key is pressed (WM_KEYDOWN) and when it is released (WM_KEYUP), so to detect hotkeys like CTRL+C you have to watch for a WM_KEYDOWN message for the letter C that was received after the WM_KEYDOWN for the CTRL key but before its WM_KEYUP message.
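Putting those two paragraphs together, here is a hedged sketch of simulating the CTRL+C chord with SendInput; the struct declarations follow the usual PInvoke.net pattern (the union must include MOUSEINPUT so the size matches the native INPUT):

using System;
using System.Runtime.InteropServices;

static class InputSender
{
    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;
    const ushort VK_CONTROL = 0x11;
    const ushort VK_C = 0x43;

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT
    {
        public ushort wVk, wScan;
        public uint dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT
    {
        public uint type;
        public InputUnion u;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    static INPUT Key(ushort vk, bool up) => new INPUT
    {
        type = INPUT_KEYBOARD,
        u = new InputUnion
        {
            ki = new KEYBDINPUT { wVk = vk, dwFlags = up ? KEYEVENTF_KEYUP : 0u }
        }
    };

    // CTRL down, C down, C up, CTRL up - the order described above.
    public static void SendCtrlC()
    {
        var inputs = new[]
        {
            Key(VK_CONTROL, up: false),
            Key(VK_C, up: false),
            Key(VK_C, up: true),
            Key(VK_CONTROL, up: true),
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}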
Managing input device messages
To simulate input devices, use SendInput to pass along the WM_KEYDOWN and/or WM_KEYUP message(s) to the target window. But don't forget that an application can have more than one window. There are API calls to get the different windows, but it'll be up to you to write code to find the right one before you can use SendInput on it.
To find out what a window believes about an input device, use GetAsyncKeyState. You may not be able to trust it if you've meddled with APIs related to input devices.
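For example, a quick sketch of checking whether CTRL is physically down right now:

using System.Runtime.InteropServices;

static class KeyState
{
    [DllImport("user32.dll")]
    static extern short GetAsyncKeyState(int vKey);

    const int VK_CONTROL = 0x11;

    // The high bit of the return value means "down right now", system-wide.
    public static bool IsCtrlDown() =>
        (GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0;
}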
There is a BlockInput call which denies all input messages except those injected via SendInput from the thread that blocked it. In most cases, re-enabling input as soon as possible is the right thing to do. The documentation says that if the blocking thread dies, BlockInput is disabled. A similar but less harsh call is EnableWindow, which prevents a window from receiving input focus.
The API for windows includes the ability to register hooks, which let you specify kinds of messages and/or certain windows to be reviewed by a user-specified function.
I would really like to know why you need this to be in two different applications, but here's the best I can think of.
In the applications, you should be able to subscribe to KeyDown, KeyUp, Focus, and Blur (lost focus) events. I'm not clear on whether this is an actual button or touch input, but whatever the case may be, assume KeyDown is whatever event fires when the user starts "simulating" the ctrl key being pressed, and KeyUp is whatever event fires when the user ceases to "simulate" the ctrl key being down.
Set up App1 so that when it gains focus, it communicates the state to App2: depressed or not depressed. Every time KeyDown or KeyUp fires, send a message to App2.
When App1's Blur event fires, stop sending messages to App2. Even though App1 will no longer have the button depressed, App2 won't know it and can continue to behave as though the button were depressed until App1 regains focus and can go back to sending messages again.
If it were me, I would have App2 contain all the same logic as App1, so that the moment App2 gains focus, it begins handling the up/down state itself. You may want to have the two applications do some kind of "handshake" when a blur/focus event happens, to make sure the state is preserved when switching between them. When App2 gets the Blur event, it transfers the state to App1 and they shake hands again, so App1 knows it is now responsible for managing the state.
This is basically having the apps cooperate via "tag-team." They keep some state synchronized between each other, "handing off" the responsibility when the blur/focus events fire. Since you cannot know that Blur will fire on one app before Focus fires on the other, you will need to use the same mechanism that communicates the state of this "simulated button" to coordinate the apps so they never interfere with each other.
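To make the hand-off concrete, here is a purely hypothetical sketch of that shared channel over a named pipe; the pipe name and message strings are invented for illustration, and a real version would need reconnection handling:

using System;
using System.IO;
using System.IO.Pipes;

static class StateChannel
{
    // Sender side: whichever app currently owns the simulated button state.
    public static void Publish(bool ctrlDown)
    {
        using (var pipe = new NamedPipeClientStream(".", "ctrl-state", PipeDirection.Out))
        {
            pipe.Connect(1000);
            using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                writer.WriteLine(ctrlDown ? "CTRL_DOWN" : "CTRL_UP");
        }
    }

    // Receiver side: the app currently mirroring the state.
    public static void Listen(Action<bool> onStateChanged)
    {
        using (var pipe = new NamedPipeServerStream("ctrl-state", PipeDirection.In))
        {
            pipe.WaitForConnection();
            using (var reader = new StreamReader(pipe))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    onStateChanged(line == "CTRL_DOWN");
            }
        }
    }
}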
Something tells me that this doesn't completely solve your problem, but hearing why it doesn't will certainly get everyone closer to thinking out the rest of the way. Let me know the twist ending, eh?

How can I capture a key press, outside of the form?

I have been trying to capture keys pressed outside of my WinForms application, but obviously a KeyPress event won't work.
I haven't been able to get any closer than the KeyPress event, which, as documented, only works at the form level.
I suspect that I will have to use
[DllImportAttribute("user32.dll")]
, but I have little to no experience with that.
Being able to capture key presses anywhere requires using Hooks.
There is a library on CodePlex which simplifies implementing Application and Global Mouse and Keyboard Hooks for C# users.
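If you would rather see the raw API than take a dependency, here is a minimal hedged sketch of the global low-level keyboard hook (WH_KEYBOARD_LL) that such libraries wrap; the installing thread needs a message loop, which a WinForms app already has:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class GlobalKeyboardHook
{
    const int WH_KEYBOARD_LL = 13;
    const int WM_KEYDOWN = 0x0100;

    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
    static readonly HookProc _proc = Callback; // field keeps the delegate alive
    static IntPtr _hook = IntPtr.Zero;

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    public static event Action<int> KeyDown; // reports the virtual-key code

    public static void Start()
    {
        var module = Process.GetCurrentProcess().MainModule;
        _hook = SetWindowsHookEx(WH_KEYBOARD_LL, _proc,
            GetModuleHandle(module.ModuleName), 0);
    }

    public static void Stop() => UnhookWindowsHookEx(_hook);

    static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            // First field of KBDLLHOOKSTRUCT is the virtual-key code.
            int vkCode = Marshal.ReadInt32(lParam);
            KeyDown?.Invoke(vkCode);
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }
}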

Send keys to WPF Browser control

Can I programmatically send [UserID]{TAB}[Password]{CARRIAGE RETURN} to a WebBrowser control which has user ID and password fields and a Sign-in button? I want to use my own virtual keyboard in my application. Any tips?
Sorry for the late answer, but I've just finished a similar project, and as part of the work I am in the process of open-sourcing two projects on CodePlex.
The first is the Windows Input Simulator, which is a simple .NET wrapper around the Win32 SendInput API, written in C#.
The second is a very customisable on-screen keyboard or touch-screen keyboard control and toolkit called WpfKB, which will be available as an initial release tomorrow. I hope these are of help to you or anyone else who comes across these projects.
I recently had to implement automatic authentication through a WPF browser control, and I looked into simulating keystrokes. I didn't need a full virtual keyboard so interacting with the DOM of the login page through IHTMLDocument2 ended up being the best approach, but I looked into keystroke automation before making that decision and found a few options.
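For anyone curious about that DOM route, a rough sketch follows; note that getElementById actually lives on IHTMLDocument3, the element ids are invented for the example, and you need a reference to the Microsoft.mshtml interop assembly:

using mshtml;

static void AutoLogin(System.Windows.Controls.WebBrowser browser,
                      string user, string pass)
{
    // Hypothetical ids ("userid", "password", "signin") on the login page.
    if (browser.Document is IHTMLDocument3 doc)
    {
        ((IHTMLInputElement)doc.getElementById("userid")).value = user;
        ((IHTMLInputElement)doc.getElementById("password")).value = pass;
        ((IHTMLElement)doc.getElementById("signin")).click();
    }
}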
You can raise the appropriate routed events on the control as described in Simulating basic keyboard events and Simulating text input. I don't know of any specific problems with this approach but I opted against it simply because I wasn't comfortable simulating input without looking at how the CLR handles the actual input, and without at least raising the complete lifetime (PreviewKeyDown, KeyDown, PreviewKeyUp, KeyUp) I was wary of unintended consequences.
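For reference, the "Simulating text input" route looks roughly like this sketch, which raises the routed TextInput event on whichever element has keyboard focus:

using System.Windows;
using System.Windows.Input;

static void SendText(string text)
{
    // Stays entirely inside WPF's input system; no Win32 messages involved.
    if (Keyboard.FocusedElement is UIElement target)
    {
        target.RaiseEvent(new TextCompositionEventArgs(
            Keyboard.PrimaryDevice,
            new TextComposition(InputManager.Current, target, text))
        {
            RoutedEvent = TextCompositionManager.TextInputEvent
        });
    }
}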
Take a look at WOSK on CodePlex. It's a good example of how to invoke the Win32 keybd_event and SendInput functions to generate the low-level input messages that simulate input, via the Managed Windows API. There's some unnecessary fluff (e.g. transparency) and some odd WPF usage, such as using a CommandParameter with a Click event instead of a Command on the buttons, but the general approach is sane and it's reasonably complete.
You can also invoke the Windows on-screen keyboard, as alluded to by Jeroen. I didn't try this because I didn't need a virtual keyboard, but if you're going to call into Win32 anyway, you might as well follow the WOSK model and build the UI the way you want it.

Detecting mousewheel over non-focused window?

My goal is to make a floating toolbar (as its own C# application), and when the user uses the scroll wheel over it I want to change the buttons that are visible. Sounds easy enough; it should just be a matter of this one-liner:
MouseWheel += new MouseEventHandler(Form1_MouseWheel);
The problem I am having is that the mouse wheel handler is only invoked when my application has focus. That means the user has to first click, and then mousewheel. That won't do for what I'm trying to do.
I can hook the MouseHover event handler and call form.Activate() then, to get focus. That's suboptimal because if the user uses the scrollwheel immediately after mousing over my application (instead of waiting a little), the focus will still be on the previous app and it'll get the mousewheel event.
A natural thing to do would be to hook the MouseEnter event and call Activate() there, but then instead of my application coming to the front, its icon starts to blink on the task bar. I'm using Win7, but this problem is probably older than this.
Ideally, what I'd like to do would be to detect the mousewheel events without having to worry about whether my application has focus. It would really be better for the previous application to keep input focus, so for example if the user's in Notepad they can type, mouse over to my app, use the scroll wheel, look at what they see and decide to just resume typing in Notepad. Ideally I don't want them to have to click even once in this scenario.
I'll settle for a solution that switches focus to my application, though, if there's no other way.
What I have so far uses C# and Windows Forms, but I'd be open to using something different if that can solve my problems.
So: how can I see those mousewheel events without the user having to click to focus my application first?
If you need to catch mouse events outside your application, you can use a global system hook. There's a good .NET implementation here.
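A hedged sketch of that approach with a low-level mouse hook (WH_MOUSE_LL): it sees WM_MOUSEWHEEL system-wide without stealing focus, so you check the cursor coordinates against your toolbar's bounds yourself.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class GlobalWheelHook
{
    const int WH_MOUSE_LL = 14;
    const int WM_MOUSEWHEEL = 0x020A;

    [StructLayout(LayoutKind.Sequential)]
    struct MSLLHOOKSTRUCT
    {
        public int x, y;          // screen coordinates of the cursor
        public uint mouseData;    // high word holds the wheel delta
        public uint flags, time;
        public IntPtr dwExtraInfo;
    }

    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
    static readonly HookProc _proc = Callback; // keep the delegate alive
    static IntPtr _hook;

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    public static event Action<int, int, int> WheelScrolled; // x, y, delta

    public static void Start()
    {
        var module = Process.GetCurrentProcess().MainModule;
        _hook = SetWindowsHookEx(WH_MOUSE_LL, _proc,
            GetModuleHandle(module.ModuleName), 0);
    }

    static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_MOUSEWHEEL)
        {
            var info = Marshal.PtrToStructure<MSLLHOOKSTRUCT>(lParam);
            short delta = (short)(info.mouseData >> 16); // +120 up, -120 down
            WheelScrolled?.Invoke(info.x, info.y, delta);
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }
}

Your toolbar can subscribe to WheelScrolled, hit-test the coordinates against its own bounds, and swap the visible buttons without ever taking focus from the app the user was typing in.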
