I'm currently trying to send a character to another application. To do so, I use the native methods GetKeyboardLayout and VkKeyScanExW from user32.dll to get the virtual-key code (plus the shift and control state) for the current keyboard layout. I then send this virtual-key code to the application using the native method SendInput, also from user32.dll.
Everything works fine except for the euro sign: when I pass this character to VkKeyScanExW, it returns -1, which means it was not found. On my keyboard it is produced with Ctrl+Menu+E (German layout).
I assume this happens because the euro sign is a Unicode character that is not mapped in the ASCII layout. I have read that SendInput also offers a Unicode mode that uses the scan-code field, so I hope the Unicode mode of SendInput will solve my problem. But I guess my virtual-key code is not the hardware scan code, since the Unicode range is wider. Where can I find a sample of how to send a Unicode character (e.g. €) via SendInput to another control/window? MSDN and pinvoke.net do not provide useful samples.
In the meantime I solved the problem using the Unicode mode of SendInput. Now I don't have to use VkKeyScan any more; I can pass the character itself.
private static void SendUnicodeChar(char input)
{
    var inputStruct = new NativeWinApi.Input();
    inputStruct.type = NativeWinApi.INPUT_KEYBOARD;

    // With KEYEVENTF_UNICODE the virtual-key code must be 0 and wScan
    // carries the UTF-16 code unit of the character to send.
    inputStruct.ki.wVk = 0;
    inputStruct.ki.wScan = input;
    inputStruct.ki.time = 0;

    var flags = NativeWinApi.KEYEVENTF_UNICODE;
    inputStruct.ki.dwFlags = flags;
    inputStruct.ki.dwExtraInfo = NativeWinApi.GetMessageExtraInfo();

    NativeWinApi.Input[] ip = { inputStruct };
    NativeWinApi.SendInput(1, ip, Marshal.SizeOf(inputStruct));
}
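For reference, here is a minimal sketch of what the NativeWinApi declarations behind this snippet could look like. The wrapper itself isn't shown in the question, so the names, constants, and struct layout below are assumptions; the union offset of 8 assumes a 64-bit process (a 32-bit process would use offset 4). A complete implementation would usually also send a matching KEYEVENTF_KEYUP event.

using System;
using System.Runtime.InteropServices;

internal static class NativeWinApi
{
    public const uint INPUT_KEYBOARD = 1;
    public const uint KEYEVENTF_KEYUP = 0x0002;
    public const uint KEYEVENTF_UNICODE = 0x0004;

    [StructLayout(LayoutKind.Sequential)]
    public struct KeyboardInput
    {
        public ushort wVk;
        public ushort wScan;
        public uint dwFlags;
        public uint time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    public struct MouseInput
    {
        public int dx;
        public int dy;
        public uint mouseData;
        public uint dwFlags;
        public uint time;
        public IntPtr dwExtraInfo;
    }

    // INPUT is a tagged union in the native API; ki and mi overlap. The mouse
    // variant is included so Marshal.SizeOf reports the full native size.
    [StructLayout(LayoutKind.Explicit)]
    public struct Input
    {
        [FieldOffset(0)] public uint type;
        [FieldOffset(8)] public KeyboardInput ki;   // offset 4 in a 32-bit process
        [FieldOffset(8)] public MouseInput mi;
    }

    [DllImport("user32.dll", SetLastError = true)]
    public static extern uint SendInput(uint nInputs, Input[] pInputs, int cbSize);

    [DllImport("user32.dll")]
    public static extern IntPtr GetMessageExtraInfo();
}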
Thanks to all for the help.
I have a UWP app in Visual Studio 2017. I'm trying to make a multi-language on-screen keyboard.
Currently the English keystrokes work fine; however, any letter from another language throws System.ArgumentException: 'Value does not fall within the expected range.'
Here is the code that sends the keystrokes:
public void SendKey(ushort keyCode)
{
    List<InjectedInputKeyboardInfo> inputs = new List<InjectedInputKeyboardInfo>();
    InjectedInputKeyboardInfo myinput = new InjectedInputKeyboardInfo();
    myinput.VirtualKey = keyCode;
    inputs.Add(myinput);
    var injector = InputInjector.TryCreate();
    WebViewDemo.Focus(FocusState.Keyboard);
    injector.InjectKeyboardInput(inputs); // exception throws here
}
How would I inject letters from other languages?
The trick is that InputInjector doesn't inject text (characters); it injects keystrokes on the keyboard. That means the input will not be the character named by the VirtualKey value, but whatever that key produces on the keyboard layout the user is currently using.
For example, in Czech we use the top number row to write characters like "ě", "š" and so on. So when you press the number 3 key, a Czech keyboard writes "š".
If I use your code with the Number3 value:
SendKey( (ushort)VirtualKey.Number3 );
I get "š" as the output. The same thing holds for Japanese for example where VirtualKey.A will actually map to ”あ”.
That makes InputInjector a bit inconvenient to use for keyboard input, because you cannot predict which language the user is actually using and which key mapping is in effect. But on reflection it makes sense that it is implemented this way, because it is not injection of text but simulation of actual keyboard keystrokes.
The answer given by Martin Zikmund is not correct; you can send any Unicode character:
InputInjector inputInjector = InputInjector.TryCreate();
var key = new InjectedInputKeyboardInfo();
// With InjectedInputKeyOptions.Unicode, ScanCode carries the UTF-16 code unit
// of the character to inject rather than a hardware scan code.
key.ScanCode = (ushort)'Ä';
key.KeyOptions = InjectedInputKeyOptions.Unicode;
inputInjector.InjectKeyboardInput(new[] { key });
The InjectKeyboardInput method uses the native SendInput function behind the scenes. Please note that you require the inputInjectionBrokered restricted capability in your app manifest.
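For reference, that restricted capability is declared in Package.appxmanifest along these lines (the rescap namespace declaration may already exist in the generated manifest):

<Package
  xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
  IgnorableNamespaces="rescap">
  <!-- other package content -->
  <Capabilities>
    <rescap:Capability Name="inputInjectionBrokered" />
  </Capabilities>
</Package>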
I have written a small program in C# 2010 which can split input from different keyboards by making an array of devices using, in part, the following:
--This code works fine for non-unified keyboards--
InputDevice id;
id = new InputDevice( Handle );
NumberOfKeyboards = id.EnumerateDevices();
id.KeyPressed += new InputDevice.DeviceEventHandler( m_KeyPressed );
private void m_KeyPressed( object sender, InputDevice.KeyControlEventArgs e ) {
    lbDescription.Text = e.Keyboard.Name;
    // e.Keyboard.* has many useful strings; none work for me any more.
}
Very happy with this, I ran out and bought 4 Logitech K230 keyboards which use the Unifying receiver. Sadly, all the keyboard data is now multiplexed and shows up in my code as a single keyboard!
How can I identify which "unified" keyboard the input is coming from? Ideally in C#, but I suppose I am willing to look at other languages if solutions exist.
I don't have a Unifying keyboard, but check whether you can see multiple keyboards among the Windows devices. Then you could try this: http://www.codeproject.com/Articles/17123/Using-Raw-Input-from-C-to-handle-multiple-keyboard and check the output.
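If it helps, here is a rough, self-contained sketch (not taken from the linked article; the class name is made up) that simply lists the keyboard devices Windows exposes through Raw Input. If the Unifying receiver shows up here as a single device, splitting input per physical keyboard at this level is unlikely to work.

using System;
using System.Runtime.InteropServices;
using System.Text;

static class RawKeyboardList
{
    const uint RIM_TYPEKEYBOARD = 1;
    const uint RIDI_DEVICENAME = 0x20000007;

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICELIST
    {
        public IntPtr hDevice;
        public uint dwType;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint GetRawInputDeviceList(
        [In, Out] RAWINPUTDEVICELIST[] pRawInputDeviceList, ref uint puiNumDevices, uint cbSize);

    [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern uint GetRawInputDeviceInfo(
        IntPtr hDevice, uint uiCommand, StringBuilder pData, ref uint pcbSize);

    static void Main()
    {
        uint count = 0;
        uint structSize = (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICELIST));

        GetRawInputDeviceList(null, ref count, structSize);      // first call: query the count
        var devices = new RAWINPUTDEVICELIST[count];
        GetRawInputDeviceList(devices, ref count, structSize);   // second call: fill the array

        foreach (var device in devices)
        {
            if (device.dwType != RIM_TYPEKEYBOARD)
                continue;

            uint nameLength = 0;
            GetRawInputDeviceInfo(device.hDevice, RIDI_DEVICENAME, null, ref nameLength);
            var name = new StringBuilder((int)nameLength);
            GetRawInputDeviceInfo(device.hDevice, RIDI_DEVICENAME, name, ref nameLength);

            Console.WriteLine("Keyboard device: " + name);
        }
    }
}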
We recently came across some sample code from a vendor for hashing a secret key for a web service call. Their sample was in VB.NET, which we converted to C#, and this caused the hashing to produce different results. It turns out they were generating the key for the encryption by converting a char array to a string and back to a byte array. This led me to discover that VB.NET's and C#'s default encoders handle some characters differently.
C#:
Console.Write(Encoding.Default.GetBytes(new char[] { (char)149 })[0]);
VB:
Dim b As Char() = {Chr(149)}
Console.WriteLine(Encoding.Default.GetBytes(b)(0))
The C# output is 63, while VB gives the correct byte value of 149.
If you use any other value, like 145, etc., the outputs match.
Stepping through in the debugger, the default encoder is SBCSCodePageEncoding in both VB and C#.
Does anyone know why this is?
I have corrected the sample code by directly initializing a byte array, which it should have been in the first place, but I still want to know why the encoder, which should not be language specific, appears to be just that.
If you use ChrW(149) you will get a different result: 63, the same as the C# code.
Dim b As Char() = {ChrW(149)}
Console.WriteLine(Encoding.Default.GetBytes(b)(0))
Read the documentation to see the difference; that will explain the answer:
The VB Chr function takes an argument in the range 0 to 255, and converts it to a character using the current default code page. It will throw an exception if you pass an argument outside this range.
ChrW will take a 16-bit value and return the corresponding System.Char value without using an encoding - hence will give the same result as the C# code you posted.
The approximate equivalent of your VB code in C# without using the VB Strings class (that's the class that contains Chr and ChrW) would be:
char[] chars = Encoding.Default.GetChars(new byte[] { 149 });
Console.Write(Encoding.Default.GetBytes(chars)[0]);
The default encoding is machine dependent as well as thread dependent, because it uses the current code page. You generally should use something like Encoding.UTF8 so that you don't have to worry about what happens when one machine is using Unicode and another is using the ANSI code page 1252.
Different operating systems might use different encodings as the default. Therefore, data streamed from one operating system to another might be translated incorrectly. To ensure that the encoded bytes are decoded properly, your application should use a Unicode encoding, that is, UTF8Encoding, UnicodeEncoding, or UTF32Encoding, with a preamble. Another option is to use a higher-level protocol to ensure that the same format is used for encoding and decoding.
(from http://msdn.microsoft.com/en-us/library/system.text.encoding.default.aspx)
Can you check what each language produces when you explicitly encode using UTF-8?
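For example, in C# the explicit UTF-8 encoding of character 149 (U+0095) gives the two bytes C2 95:

byte[] utf8 = System.Text.Encoding.UTF8.GetBytes(new char[] { (char)149 });
Console.WriteLine(BitConverter.ToString(utf8)); // prints "C2-95"

VB should produce the same two bytes if it uses ChrW(149); with Chr(149) the character is already a different one (the code-page mapping) before the encoding step.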
I believe the equivalent in VB is ChrW(149).
So, this VB code...
Dim c As Char() = New Char() { Chr(149) }
'Dim c As Char() = New Char() { ChrW(149) }
Dim b As Byte() = System.Text.Encoding.Default.GetBytes(c)
Console.WriteLine("{0}", Convert.ToInt32(c(0)))
Console.WriteLine("{0}", CInt(b(0)))
produces the same output as this C# code...
var c = new char[] { (char)149 };
var b = System.Text.Encoding.Default.GetBytes(c);
Console.WriteLine("{0}", (int)c[0]);
Console.WriteLine("{0}", (int) b[0]);
I was asked this question by a friend, and it piqued my curiosity, and I've been unable to find a solution to it yet, so I'm hoping someone will know.
Is there any way to programmatically detect what type of keyboard a user is using? My understanding is that the signal sent to the computer for 'A' on a Dvorak keyboard is the same as the signal sent for 'A' on a QWERTY keyboard. However, I've read about ways to switch to/from Dvorak that involve registry tweaking, but I'm hoping there is a machine setting or some other thing that I can query.
Any ideas?
You can do this by calling the GetKeyboardLayoutName() Win32 API method.
Dvorak keyboards have specific names. For example, the U.S. Dvorak layout has a name of 00010409.
Code snippet:
using System;
using System.Runtime.InteropServices;
using System.Text;

public class Program
{
    const int KL_NAMELENGTH = 9;

    [DllImport("user32.dll")]
    private static extern bool GetKeyboardLayoutName(StringBuilder pwszKLID);

    static void Main(string[] args)
    {
        StringBuilder name = new StringBuilder(KL_NAMELENGTH);
        GetKeyboardLayoutName(name);
        Console.WriteLine(name);
    }
}
That probably depends on the OS. I'm sure there is an operating-system setting somewhere that registers the nationality of the keyboard. (Dvorak is considered a nationality in the sense that French keyboards are different from US keyboards, which are different from others.)
Also, just a side note: 'A' was a bad example, as 'A' happens to be the same key in Dvorak and QWERTY... B-)
You might be able to do it via DirectInput, or whatever the current DirectX-equivalent is. I type on a Dvorak keyboard, and about 50% of the games I buy detect my keyboard and reconfigure the default keymappings to support it (using ,aoe instead of wasd, for instance)
And yes, as Brian mentioned, 'A' is the same on both keyboards.
Why would it matter? Depending on some special implementation of a keyboard is not a good idea at all. We use barcode scanners all over the place that emulate keyboard input. What would your program do with these devices? :)
PS: The registry entry mentioned above rearranges the keys of a regular keyboard into the Dvorak layout.
I have a couple of parameters that need to be sent to a client app via TCP/IP.
For example:
//inside C++ program
int Temp = 10;
int maxTemp = 100;
float Pressure = 2.3;
Question: what is the best practice for formatting the string? I need to make sure that the whole string is received by the client, and it should be easy to decode at the client end.
Basically, I want to know, what should be the format of the string, which I am going to send?
PS: Client app is in C# and the sender's app is in Qt (C++).
This is pretty subjective, but if it will always be as simple as described, then: keep it simple:
ASCII, space delimited, invariant (culture-independent) format, numbers in their fully expanded form (no E notation etc.), CR as the end sentinel, so:
10 100 2.3
(with a CR at the end). This scales to any number of records and will be easy to decode on just about any platform.
If it gets more nuanced: use a serializer built for the job, and just share details of what serialization format you are using.
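For the simple space-delimited format above, a rough sketch of the C# receiving side might look like this (the class and method names are made up; it reads one CR-terminated record from a NetworkStream and parses it with the invariant culture):

using System;
using System.Globalization;
using System.Net.Sockets;

static class TempClient
{
    // Reads one CR-terminated record such as "10 100 2.3" and decodes it.
    public static void ReadOneRecord(NetworkStream stream)
    {
        var sb = new System.Text.StringBuilder();
        int b;
        // TCP is a byte stream, so the CR sentinel tells us the record is complete.
        while ((b = stream.ReadByte()) != -1 && b != '\r')
        {
            sb.Append((char)b);   // payload is plain ASCII, so a direct cast is fine
        }

        string[] parts = sb.ToString().Split(' ');
        int temp = int.Parse(parts[0], CultureInfo.InvariantCulture);
        int maxTemp = int.Parse(parts[1], CultureInfo.InvariantCulture);
        float pressure = float.Parse(parts[2], CultureInfo.InvariantCulture);

        Console.WriteLine("Temp={0} MaxTemp={1} Pressure={2}", temp, maxTemp, pressure);
    }
}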
Use ASCII of the form paramName paramValue, space delimited, in a culture-independent format; write numbers in their full form (no E notation) and end with a carriage return. For example: T 10 mT 100 P 2.3 followed by CR. On the other side you can simply split the string on whitespace: tokens at even indices are parameter names and tokens at odd indices are parameter values, i.e. for every name at even index i, its value is at index i + 1. Also mind the CR at the end.
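A rough sketch of decoding that name/value form on the C# client (the class and method names are made up; reading the CR-terminated line from the stream would work as in the previous sketch):

using System;
using System.Collections.Generic;
using System.Globalization;

static class ParamLineParser
{
    // Splits a line such as "T 10 mT 100 P 2.3" into name/value pairs.
    public static Dictionary<string, double> Parse(string line)
    {
        var result = new Dictionary<string, double>();
        string[] tokens = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        // Names sit at even indices; each value follows its name at index i + 1.
        for (int i = 0; i + 1 < tokens.Length; i += 2)
        {
            result[tokens[i]] = double.Parse(tokens[i + 1], CultureInfo.InvariantCulture);
        }
        return result;
    }
}

For example, Parse("T 10 mT 100 P 2.3") yields T = 10, mT = 100 and P = 2.3.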