I have a little C# console application that reads a key and checks to see if the key was a question mark:
ConsoleKeyInfo ki = System.Console.ReadKey();
if (ki.Key == ConsoleKey.Oem2) // Do something
I arrived at Oem2 by seeing what value is actually assigned in the debugger, because there is no ConsoleKey code for question mark.
Now I could certainly use ki.KeyChar instead, but the application also needs to respond to certain keys (e.g. media keys) that do not map to characters. It feels inelegant to check both ConsoleKey and KeyChar to determine which key has in fact been pressed. On the other hand, it does not feel safe to rely on Oem2 to always map to ? in all circumstances and regions.
Is it best practice to check both properties to determine which key was in fact pressed?
Any insight into why ConsoleKeyInfo was designed this way is appreciated.
In this case, you will have to check KeyChar == '?'. From MSDN:
Oem2: The OEM 2 key (OEM specific).
So you're just getting lucky in that it happens to be a ? on your equipment.
The ConsoleKeyInfo structure provides KeyChar (a Char value) as well as Modifiers (an enumeration) to help you decide what keys the user had pressed.
I think you should consider what happens when someone has a different keyboard layout.
If you want to check for “the key with question mark on my computer”, then use ConsoleKey. But that's probably not a good idea and you should probably adhere to the user's settings and use KeyChar.
But for keys that don't map to characters (and that the user can't remap by using a different keyboard layout), you have to use ConsoleKey.
So, yes, I think you should check both properties in this case.
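Something along these lines, as a minimal sketch (MediaPlay is just an example of a key that has no character mapping):

using System;

class Program
{
    static void Main()
    {
        ConsoleKeyInfo ki = Console.ReadKey(true);

        if (ki.KeyChar == '?')
        {
            // The user typed a question mark, whatever physical key produced it.
            Console.WriteLine("Question mark pressed.");
        }
        else if (ki.Key == ConsoleKey.MediaPlay)
        {
            // A key with no character representation; only ConsoleKey identifies it.
            Console.WriteLine("Media play key pressed.");
        }
    }
}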
I guess the reason for this design is that Console.ReadKey() relies on a native function (ReadConsoleInput) that returns an array of KEY_EVENT_RECORD structures in case of a keypress, where each key event has an ASCII/Unicode character representation and a virtual key code. Notice the VK_OEM_2 in my previous link - this is where the ConsoleKey.Oem2 value comes from.
Related
I saw some code like this:
if (Input.GetKey(KeyCode.UpArrow))
I wanted to know how to check for other keys, such as the "a" key:
if (Input.GetKey(KeyCode.AArrow))
I tried this code, but it gives an error. How should I write it?
When using KeyCode in a comparison (such as in an if statement), you must reference the specific key code you want to compare the input against. Microsoft has documentation listing the standard key codes represented in System.Windows, which most, if not all, external libraries map to.
So in this case you need to check for the A key specifically.
if (Input.GetKey(KeyCode.A))
{
//code runs if and only if the input key is the A key
}
else if (Input.GetKey(KeyCode.Escape))
{
//code runs if and only if the input key is the ESC key.
}
Now, if you want to handle many possible inputs at once, you may not wish to use an if statement, but I would recommend trying this out first to see if you get better results.
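If you do later need to watch several keys at once, a minimal sketch (the key list here is only an example) is to loop over an array of KeyCodes in Update:

using UnityEngine;

public class KeyWatcher : MonoBehaviour
{
    // Example set of keys to watch; adjust to whatever your game needs.
    private static readonly KeyCode[] watchedKeys =
    {
        KeyCode.A, KeyCode.Escape, KeyCode.UpArrow
    };

    void Update()
    {
        foreach (KeyCode key in watchedKeys)
        {
            if (Input.GetKey(key))
            {
                Debug.Log("Key held down: " + key);
            }
        }
    }
}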
I use the Keys enumeration from the System.Windows.Forms namespace.
I would like to be able to catch and use some special characters like é, è, à, ç, but when the program fires the KeyDown event, the KeyEventArgs just gives me values like "D1" to "D9".
What can I do to get the real char associated with these keys?
Short answer: use KeyPress instead of KeyDown.
KeyDown is really designed to work with the physical layout of the keyboard (well, the logical layout of the physical... forget it :D). This is very different from the character that a given physical key represents.
On the other hand KeyPress is all about characters being input from the keyboard, rather than keys being pressed, really. Note how KeyPress supports features like AltGr + someKey and char repetition etc.
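As a rough illustration (the textBox1 control and the event wiring are just assumptions), a KeyPress handler receives the composed character directly:

// Assumed to be attached to textBox1.KeyPress in the designer.
private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
{
    // e.KeyChar is the character produced by the current keyboard layout,
    // so accented characters such as 'é' or 'à' arrive here as-is.
    if (e.KeyChar == 'é')
    {
        MessageBox.Show("You typed é");
    }
}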
If you really need to use KeyDown/KeyUp, you'll have to emulate the way the Windows keyboard system works to determine the char to output (for example, if you're making a keyboard mapping screen for a game or something like that). You can use the ToAscii WinAPI function (https://msdn.microsoft.com/en-us/library/ms646316.aspx).
Apart from that, you still have to understand the meaning of the key combinations - for example, on my keyboard, if I press 1, I get +. If I press Shift+1, I get 1. If I press AltGr + 1, I get !. Which of those do you care about? Maybe Shift + 1 should be interpreted as 1 (what KeyPress does). Maybe it should be interpreted as Shift + 1 (the easiest, you already have that). And maybe it should be interpreted as Shift + +, the way it's usually used for hotkey bindings or keyboard mappings in games.
It should be pretty obvious by now that this is actually far from trivial. You need some mechanism to interpret the "raw" input data - and different interpretations make different sense for different initial conditions. You're basically asking for a mixed approach between the two obvious options - you're mixing virtual keys and "real" characters.
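If you do go down the KeyDown route, a rough sketch of translating the virtual key via ToAscii could look like this (the form and handler names are placeholders, and error handling is omitted):

using System.Runtime.InteropServices;
using System.Windows.Forms;

public partial class MyForm : Form
{
    [DllImport("user32.dll")]
    private static extern bool GetKeyboardState(byte[] lpKeyState);

    [DllImport("user32.dll")]
    private static extern int ToAscii(uint uVirtKey, uint uScanCode, byte[] lpKeyState,
                                      out ushort lpChar, uint uFlags);

    // Assumed to be wired up as the KeyDown handler of some control.
    private void OnKeyDown(object sender, KeyEventArgs e)
    {
        var keyboardState = new byte[256];
        if (GetKeyboardState(keyboardState) &&
            ToAscii((uint)e.KeyValue, 0, keyboardState, out ushort translated, 0) == 1)
        {
            char c = (char)translated;
            // c is the character for the current layout and modifier state.
            System.Diagnostics.Debug.WriteLine(c);
        }
    }
}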
I've been having some trouble getting the correct code for incoming keystrokes on the console for shift+number characters. For example, using:
cki = Console.ReadKey(True)
Console.WriteLine("You pressed the '{0}' key.", cki.Key)
If I press shift+2, I'm hoping to get the ascii 64 (for the '@' character), but instead I get 50 (for the '2' character).
Now, I know you can get the modifiers for the key pressed, but that would mean I'd have to program all the special cases for keys like that, and that doesn't seem right.
I need this function, or something like it, because of its ability to read keys as they are pressed, without the need to press Enter; otherwise I'd just use Console.Read. Surely I've missed something. Could anyone tell me what it is I've missed?
You're looking for the KeyChar property, which returns the actual character rather than the physical key pressed.
You may want to cast it to int.
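For example, in C# (the '@' result assumes a US layout):

ConsoleKeyInfo cki = Console.ReadKey(true);

// Key reports the physical key (ConsoleKey.D2 for Shift+2),
// while KeyChar reports the character that was produced ('@' on a US layout).
Console.WriteLine("Key:     {0}", cki.Key);
Console.WriteLine("KeyChar: {0}", cki.KeyChar);
Console.WriteLine("Code:    {0}", (int)cki.KeyChar);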
It is pretty important to distinguish between keys and characters. The key is the same anywhere in the world: the one on the top row at the left. You can rely on that key always producing ConsoleKey.D2.
The character is however very different, it greatly depends on the active keyboard layout. A Northern American user presses Shift+2. A French user presses AltGr+0. A German user presses AltGr+Q. A Spanish user presses AltGr+2. Etcetera.
If you care only about the key, then use ConsoleKeyInfo.Key; you do so for all non-typing keys, like the function keys for example, or perhaps the typical gaming WASD keys. If you care only about the character, like @, then use ConsoleKeyInfo.KeyChar.
I'm reading the value of a registry key:
Microsoft.Win32.RegistryKey key;
key = *someLongPathHere*;
and displaying the value to a label:
string a = (string)key.GetValue(""); // "" reads the key's (Default) value
label1.Text = a;
It displays:
{F241C880-6982-4CE5-8CF7-7085BA96DA5A}
which is mostly correct, except that it is missing the leading underscore that exists in the original value:
{_F241C880-6982-4CE5-8CF7-7085BA96DA5A}
Why does this happen, i.e. why is the underscore missing?
Also, after reading the key, do I have to close the key or anything? How can I do it?
It's easy enough to test whether or not there is a systematic problem with string values containing underscores. You can simply create such a value in the registry editor and then read it into your C# program with GetValue(). When you do so you'll discover that the C# registry code doesn't lose underscores. So, there must be some other explanation for your problem.
My best guess is that your label component does not display the underscore. I'm not very familiar with the C# UI frameworks but that seems plausible. Try looking at the value of a under the debugger rather than in a label caption on your UI.
The other thing that comes to mind is that you have registry redirection because you have an x86 process running on x64, and your key is under a redirected key, HKLM\Software, for example. Perhaps if you look under the Wow6432Node you will see the underscore discrepancy.
As for managing the life of the key, the key is backed by an unmanaged resource, namely a Windows HKEY. The RegistryKey class implements IDisposable, so you should wrap your keys in a using statement.
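A minimal sketch with the key wrapped in a using statement (the subkey path here is only a placeholder):

// Requires: using Microsoft.Win32;
// The path below is a placeholder; substitute your actual key.
using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\SomeVendor\SomeProduct"))
{
    if (key != null)
    {
        string a = (string)key.GetValue(""); // "" reads the (Default) value
        label1.Text = a;
    }
} // the underlying HKEY handle is released here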
I want to convert a string into a series of Keycodes, so that I can then send them via PostMessage to a control. I need to simulate actual keyboard input, and I'm wondering if a massive switch statement is the only way to convert a character into the correct keycode, or if there's a simpler method.
====
Got my solution - http://msdn.microsoft.com/en-us/library/ms646329(VS.85).aspx
VkKeyScan will return the correct keycode for any character.
(And yes, I wouldn't do this in general, but when doing automated testing, and making sure that keyboard presses are responded to correctly, it works reliably enough).
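For reference, a minimal sketch of calling VkKeyScan via P/Invoke (the '?' character is just an example):

using System;
using System.Runtime.InteropServices;

class VkKeyScanExample
{
    [DllImport("user32.dll")]
    static extern short VkKeyScan(char ch);

    static void Main()
    {
        // Low-order byte is the virtual-key code; high-order byte is the shift state.
        short result = VkKeyScan('?');
        byte virtualKey = (byte)(result & 0xFF);
        byte shiftState = (byte)((result >> 8) & 0xFF);

        Console.WriteLine("VK: 0x{0:X2}, shift state: {1}", virtualKey, shiftState);
    }
}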
Raymond says this is a bad idea.
http://blogs.msdn.com/oldnewthing/archive/2005/05/30/423202.aspx
For A-Z and 1-9 you could try building the char into a keycode string with string.Format("KEY_KEY_{0}", c) (where c is your char) and then use Enum.Parse to extract the enum value, but it's a bit of a kludge.
Or look at How to convert uint keycode to Keys enum on Experts Exchange, and just work around the tricky cases.
I agree a switch statement is kinda awful
A much more reliable way to send a string of keystrokes to a window is to use the SendKeys class:
System.Windows.Forms.SendKeys.Send("This is a test");
System.Windows.Forms.SendKeys.Send("This sends CTRL+J: ^j");
This will be more predictable and should save you some time.