SpeechSynthesizer doesn't get all installed voices 3 - c#

I have added many voices using "Add language" under region and language. These appear under Text-to-speech in Speech. (I am using Windows 10)
I want to use these in my app with the SpeechSynthesizer class in System.Speech.Synthesis.
When listing the available voices in my application only a handful of those actually available are shown:
static void Main()
{
    SpeechSynthesizer speech = new SpeechSynthesizer();
    ReadOnlyCollection<InstalledVoice> voices = speech.GetInstalledVoices();

    if (File.Exists("available_voices.txt"))
    {
        File.WriteAllText("available_voices.txt", string.Empty);
    }

    using (StreamWriter sw = File.AppendText("available_voices.txt"))
    {
        foreach (InstalledVoice voice in voices)
        {
            sw.WriteLine(voice.VoiceInfo.Name);
        }
    }
}
Looking in available_voices.txt only these voices are listed:
Microsoft David Desktop
Microsoft Hazel Desktop
Microsoft Zira Desktop
Microsoft Irina Desktop
But looking under Text-to-speech in the settings there are many more, like Microsoft George and Microsoft Mark.
The accepted answer here:
SpeechSynthesizer doesn't get all installed voices
suggests changing the platform to x86. I tried this but I am not seeing any change.
This answer:
SpeechSynthesizer doesn't get all installed voices 2
suggests using .NET 4.5 because of a bug in System.Speech.Synthesis. I targeted .NET Framework 4.5 but I can still only retrieve 4 voices.
None of the answers in the questions I linked helped me solve my problem, so I am asking again. Any help is appreciated.

Three years have passed since the original question was asked and the API still seems to contain the same issue, so here is a more "deep dive" answer.
TL;DR: code example at the bottom.
The issue with the voice list comes from a quirk in the design of the Microsoft Speech API: there are two sets of voices in Windows, registered at different locations in the registry - one at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices, the other at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech_OneCore\Voices.
The problem is that SpeechSynthesizer's (or, more specifically, VoiceSynthesis's) initialization routine is hard-wired to the first one, while we usually need a combination of both.
So, there are actually two ways to overcome the behavior.
Option 1 (the one mentioned throughout other answers): manipulate the registry to physically copy the voice definition records from the Speech_OneCore registry branch, which makes them visible to SpeechSynthesizer. Here you have plenty of options: manual registry manipulation, a PowerShell script, code, etc.
Option 2 (the one I used in my project): use reflection to put additional voices into the internal VoiceSynthesis's _installedVoices field, effectively simulating what Microsoft did in their code.
The good news is that the Speech API source code is open now, so we don't have to fumble in the dark trying to understand what we need to do.
Here's the original code snippet:
using (ObjectTokenCategory category = ObjectTokenCategory.Create(SAPICategories.Voices))
{
    if (category != null)
    {
        // Build a list with all the voicesInfo
        foreach (ObjectToken voiceToken in category.FindMatchingTokens(null, null))
        {
            if (voiceToken != null && voiceToken.Attributes != null)
            {
                voices.Add(new InstalledVoice(voiceSynthesizer, new VoiceInfo(voiceToken)));
            }
        }
    }
}
We just need to replace SAPICategories.Voices constant with another registry entry path and repeat the whole recipe.
The bad news is that all the needed classes, methods and fields used here are internal, so we'll have to use reflection extensively to instantiate classes, call methods and get/set fields.
Below is an example of my implementation: you call the InjectOneCoreVoices extension method on the synthesizer and it does the job. Note that it throws an exception if something goes wrong, so don't forget proper try/catch surroundings.
using System;
using System.Collections;
using System.Reflection;
using System.Speech.Synthesis;

public static class SpeechApiReflectionHelper
{
    private const string PROP_VOICE_SYNTHESIZER = "VoiceSynthesizer";
    private const string FIELD_INSTALLED_VOICES = "_installedVoices";

    private const string ONE_CORE_VOICES_REGISTRY = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech_OneCore\Voices";

    private static readonly Type ObjectTokenCategoryType = typeof(SpeechSynthesizer).Assembly
        .GetType("System.Speech.Internal.ObjectTokens.ObjectTokenCategory")!;

    private static readonly Type VoiceInfoType = typeof(SpeechSynthesizer).Assembly
        .GetType("System.Speech.Synthesis.VoiceInfo")!;

    private static readonly Type InstalledVoiceType = typeof(SpeechSynthesizer).Assembly
        .GetType("System.Speech.Synthesis.InstalledVoice")!;

    public static void InjectOneCoreVoices(this SpeechSynthesizer synthesizer)
    {
        var voiceSynthesizer = GetProperty(synthesizer, PROP_VOICE_SYNTHESIZER);
        if (voiceSynthesizer == null) throw new NotSupportedException($"Property not found: {PROP_VOICE_SYNTHESIZER}");

        var installedVoices = GetField(voiceSynthesizer, FIELD_INSTALLED_VOICES) as IList;
        if (installedVoices == null)
            throw new NotSupportedException($"Field not found or null: {FIELD_INSTALLED_VOICES}");

        if (ObjectTokenCategoryType
                .GetMethod("Create", BindingFlags.Static | BindingFlags.NonPublic)?
                .Invoke(null, new object?[] {ONE_CORE_VOICES_REGISTRY}) is not IDisposable otc)
            throw new NotSupportedException($"Failed to call Create on {ObjectTokenCategoryType} instance");

        using (otc)
        {
            if (ObjectTokenCategoryType
                    .GetMethod("FindMatchingTokens", BindingFlags.Instance | BindingFlags.NonPublic)?
                    .Invoke(otc, new object?[] {null, null}) is not IList tokens)
                throw new NotSupportedException($"Failed to list matching tokens");

            foreach (var token in tokens)
            {
                if (token == null || GetProperty(token, "Attributes") == null) continue;

                var voiceInfo =
                    typeof(SpeechSynthesizer).Assembly
                        .CreateInstance(VoiceInfoType.FullName!, true,
                            BindingFlags.Instance | BindingFlags.NonPublic, null,
                            new object[] {token}, null, null);

                if (voiceInfo == null)
                    throw new NotSupportedException($"Failed to instantiate {VoiceInfoType}");

                var installedVoice =
                    typeof(SpeechSynthesizer).Assembly
                        .CreateInstance(InstalledVoiceType.FullName!, true,
                            BindingFlags.Instance | BindingFlags.NonPublic, null,
                            new object[] {voiceSynthesizer, voiceInfo}, null, null);

                if (installedVoice == null)
                    throw new NotSupportedException($"Failed to instantiate {InstalledVoiceType}");

                installedVoices.Add(installedVoice);
            }
        }
    }

    private static object? GetProperty(object target, string propName)
    {
        return target.GetType().GetProperty(propName, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(target);
    }

    private static object? GetField(object target, string propName)
    {
        return target.GetType().GetField(propName, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(target);
    }
}
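For completeness, here is a minimal usage sketch (assuming the helper class above is compiled into your project); the call is wrapped in try/catch because the method throws if any of the reflection lookups fail:
using System;
using System.Speech.Synthesis;

class Program
{
    static void Main()
    {
        using (var synth = new SpeechSynthesizer())
        {
            try
            {
                // Merge the OneCore voices into the synthesizer's internal voice list.
                synth.InjectOneCoreVoices();
            }
            catch (NotSupportedException ex)
            {
                // Reflection against internal members can break between framework versions.
                Console.WriteLine("Could not inject OneCore voices: " + ex.Message);
            }

            foreach (var voice in synth.GetInstalledVoices())
                Console.WriteLine(voice.VoiceInfo.Name);
        }
    }
}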

After trying just about all the published solutions, I solved it by editing the registry:
copying Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Speech_OneCore\Voices\Tokens\MSTTS_V110_heIL_Asaf
(where MSTTS_V110_heIL_Asaf is the registry folder of the voice I want to use in .NET, but don't appear in GetInstalledVoices())
to a registry address that looks the same but instead of Speech_OneCore it is just Speech.
Technically, to copy the registry key, I exported the original key, edited the .reg file to change Speech_OneCore to Speech, and then applied that new .reg file.
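If you would rather script that copy than edit a .reg file by hand, a rough C# sketch along these lines should work (run it elevated; the voice token name below is just the example from above, and the exact path can differ between the 32-bit and 64-bit registry views):
using System;
using Microsoft.Win32;

class CopyOneCoreVoice
{
    // Recursively copies all values and subkeys from source to dest.
    static void CopyKey(RegistryKey source, RegistryKey dest)
    {
        foreach (var name in source.GetValueNames())
            dest.SetValue(name, source.GetValue(name), source.GetValueKind(name));

        foreach (var sub in source.GetSubKeyNames())
            using (var srcSub = source.OpenSubKey(sub))
            using (var dstSub = dest.CreateSubKey(sub))
                CopyKey(srcSub, dstSub);
    }

    static void Main()
    {
        const string token = "MSTTS_V110_heIL_Asaf"; // example voice token from above

        using (var src = Registry.LocalMachine.OpenSubKey(
                   @"SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens\" + token))
        using (var dst = Registry.LocalMachine.CreateSubKey(
                   @"SOFTWARE\Microsoft\Speech\Voices\Tokens\" + token))
        {
            if (src == null) throw new InvalidOperationException("Voice token not found.");
            CopyKey(src, dst);
        }
    }
}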

I solved it by installing voices from another source and getting Microsoft Speech Platform - Runtime (Version 11)
The available voices can be found on Microsoft's website (click on the red download button and the voices should be listed).

Sorry if my answer comes so late after the subject was posted, but I developed a small tool which lets you patch the installed voices to make them available for the .NET text-to-speech engine.
The tool copies selected items from the "HKLM\SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens" key to "HKLM\SOFTWARE\Microsoft\Speech\Voices\Tokens".
If you're interested : TTSVoicePatcher (it's freeware, FR/EN)
Due to the manipulation of keys in HKLM, the tool requires administrator rights to be launched.

The Microsoft Speech Platform - Runtime Languages (Version 11) on the Microsoft website seems to contain only languages that are already installed, not the ones that can be found under Speech_OneCore.

Related

How to set the WebBrowser object in the .NET Framework to use whatever highest version of IE is installed on the users system

So the title says it all, I would like C# code (so please, PLEASE make sure it isn't Visual Basic code). And that is all I want to ask. I have tried the web browser built into the .NET Framework, but it looks like some old version of IE (if I am right or not). And if you answered, well thanks I guess! I need this for a small project where a bot would just log on to a website (it's a base for future projects).
By default it's IE7. You can bang a registry entry in to make it use a later version:
public static void EnsureBrowserEmulationEnabled(string exename = "YourAppName.exe", bool uninstall = false)
{
    try
    {
        using (
            var rk = Registry.CurrentUser.OpenSubKey(
                @"SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION", true)
        )
        {
            if (!uninstall)
            {
                dynamic value = rk.GetValue(exename);
                if (value == null)
                    rk.SetValue(exename, (uint)11001, RegistryValueKind.DWord);
            }
            else
                rk.DeleteValue(exename);
        }
    }
    catch
    {
    }
}
Code courtesy of this blog
The values you can use in place of 11001 can be found in MSDN
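As a rough usage sketch (assuming the EnsureBrowserEmulationEnabled method above is in scope; the executable name and the form are placeholders, not from the original answer), you would call the helper once at startup, before the first WebBrowser control is created. The commonly documented emulation values are listed in the comments:
using System;
using System.Windows.Forms;

static class Program
{
    // Commonly documented FEATURE_BROWSER_EMULATION values:
    //  7000  - IE7 (the default for WebBrowser hosts)
    //  8000  - IE8, respecting !DOCTYPE         9000  - IE9, respecting !DOCTYPE
    // 10000  - IE10, respecting !DOCTYPE       11000  - IE11, respecting !DOCTYPE
    // 11001  - IE11 edge mode, ignoring !DOCTYPE
    [STAThread]
    static void Main()
    {
        // The name must match the running executable; "MyBot.exe" is a placeholder.
        EnsureBrowserEmulationEnabled("MyBot.exe");

        var form = new Form { Text = "Browser host" };
        var browser = new WebBrowser { Dock = DockStyle.Fill };
        form.Controls.Add(browser);
        browser.Navigate("https://example.com");
        Application.Run(form);
    }
}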
Alternatively, can you do what you want by using WebClient/HttpWebRequest rather than poking at a web browser control to navigate around? Or can you find some web service/API version of the site that will respond with JSON rather than trying to manipulate HTML?
I was mildly curious why you'd care what a page looks like if it's a bot that is using it, but perhaps you're hitting a "your IE is too old" message from the server.

EncryptedXml DecryptDocument method error after .Net framework update

I have an old function written in 2013 that decrypts XML that was encrypted by another program.
The code is really simple:
public static void Decrypt(XmlDocument Doc)
{
    // Check the arguments.
    if (Doc == null)
        throw new ArgumentNullException("Doc");

    // Create a new EncryptedXml object.
    EncryptedXml exml = new EncryptedXml(Doc);

    // Decrypt the XML document.
    exml.DecryptDocument();
}
It worked like a charm until recently, when some of our clients started to upgrade their framework to 4.6.2 and the DecryptDocument() method stopped working. Now it throws the exception "The algorithm group '' is invalid". If I remove .NET Framework 4.6.2 it works again.
The sample code in this link will reproduce the error, it will encrypt successfully then fail to decrypt.
I'm using A3 certificates on a pendrive token. Has anyone faced this problem? Is there any workaround in .NET 4.6.2?
Edit 1:
Stacktrace:
at System.Security.Cryptography.CngAlgorithmGroup..ctor(String algorithmGroup)
at System.Security.Cryptography.CngKey.get_AlgorithmGroup()
at System.Security.Cryptography.RSACng..ctor(CngKey key)
at System.Security.Cryptography.X509Certificates.RSACertificateExtensions.GetRSAPrivateKey(X509Certificate2 certificate)
at System.Security.Cryptography.CngLightup.GetRSAPrivateKey(X509Certificate2 cert)
at System.Security.Cryptography.Xml.EncryptedXml.DecryptEncryptedKey(EncryptedKey encryptedKey)
at System.Security.Cryptography.Xml.EncryptedXml.GetDecryptionKey(EncryptedData encryptedData, String symmetricAlgorithmUri)
at System.Security.Cryptography.Xml.EncryptedXml.DecryptDocument()
at Criptografar.Program.Decrypt(XmlDocument Doc) in C:\Users\leoka\Documents\Visual Studio 2017\Projects\ConsoleApp4\Criptografar\Program.cs:line 152
at Criptografar.Program.Main(String[] args) in C:\Users\leoka\Documents\Visual Studio 2017\Projects\ConsoleApp4\Criptografar\Program.cs:line 83
I cannot reproduce the problem myself - I don't have the "pendrive token" which I suspect is the problem - so this is guesswork.
There are two generations of cryptographic APIs in Windows - the "old" one and the "new generation" one, known as CNG.
Now, if you look at the source code for the CngLightup type that appears midway through your stack trace, specifically the DetectRsaCngSupport method, you'll see that .NET framework tries to use the new generation API if possible. My guess is that the "pendrive token" device does not support the new API. You can verify this by forcing the use of the old API. Unfortunately, there does not seem to be a public configuration flag that controls this, so you must resort to reflection-based hacks. For example, you can put something like this at the beginning of your program, so that it runs once, before you try the decrypting operation:
var cngLightupType = typeof(EncryptedXml).Assembly.GetType("System.Security.Cryptography.CngLightup");
var preferRsaCngField = cngLightupType.GetField("s_preferRsaCng", BindingFlags.Static | BindingFlags.NonPublic);
var getRsaPublicKeyField = cngLightupType.GetField("s_getRsaPublicKey", BindingFlags.Static | BindingFlags.NonPublic);
var getRsaPrivateKeyField = cngLightupType.GetField("s_getRsaPrivateKey", BindingFlags.Static | BindingFlags.NonPublic);
preferRsaCngField.SetValue(null, new Lazy<bool>(() => false));
getRsaPublicKeyField.SetValue(null, null);
getRsaPrivateKeyField.SetValue(null, null);
Do note that it is extremely hacky, not thread-safe, error handling is omitted etc. If you verify that the CNG usage is the problem, you can then ask the "pendrive token" supplier to provide drivers that work with CNG. Or you can live with the hack above, rewritten for more safety.
There are some runtime changes in .NET 4.6.2 that affect EncryptedXml - see https://msdn.microsoft.com/en-us/library/mt670901(v=vs.110).aspx#Anchor_5
I ran into something very similar today that turned out to be a bug in .NET 4.6.2:
https://github.com/Microsoft/dotnet/issues/341
According to this issue, there are two workarounds:
1) upgrading the OS to Windows Server 2012 R2 or newer, or
2) loading the user profile.

How do I use a lexicon with SpeechSynthesizer?

I'm performing some text-to-speech and I'd like to specify some special pronunciations in a lexicon file. I have run MSDN's AddLexicon example verbatim; it speaks the sentence but does not use the given lexicon, so something appears to be broken.
Here's the provided example:
using System;
using Microsoft.Speech.Synthesis;

namespace SampleSynthesis
{
    class Program
    {
        static void Main(string[] args)
        {
            // Initialize a new instance of the SpeechSynthesizer.
            using (SpeechSynthesizer synth = new SpeechSynthesizer())
            {
                // Configure the audio output.
                synth.SetOutputToDefaultAudioDevice();

                PromptBuilder builder = new PromptBuilder();
                builder.AppendText("Gimme the whatchamacallit.");

                // Append the lexicon file.
                synth.AddLexicon(new Uri("c:\\test\\whatchamacallit.pls"), "application/pls+xml");

                // Speak the prompt and play back the output file.
                synth.Speak(builder);
            }

            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
    }
}
and lexicon file:
<lexicon version="1.0"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
        http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
    alphabet="x-microsoft-ups" xml:lang="en-US">

  <lexeme>
    <grapheme> whatchamacallit </grapheme>
    <phoneme> W S1 AX T CH AX M AX K S2 AA L IH T </phoneme>
  </lexeme>

</lexicon>
The console opens, the text is spoken, but the new pronunciation isn't used. I have of course saved the file to c:\test\whatchamacallit.pls as specified.
I've tried variations of the Uri and file location (e.g. @"C:\Temp\whatchamacallit.pls", @"file:///c:\test\whatchamacallit.pls"), absolute and relative paths, copying it into the build folder, etc.
I ran Process Monitor and the file is not accessed. If it were a directory/file permission problem (which it isn't) I would still see the access denied messages, however I log no reference at all except the occasional one from my text editor. I do see the file accessed when I try File.OpenRead.
Unfortunately there are no error messages when using a garbage Uri.
On further investigation I realized this example is from Microsoft.Speech.Synthesis, whereas I'm using System.Speech.Synthesis over here. However from what I can tell they are identical except for some additional info and examples and both point to the same specification. Could this still be the problem?
I verified the project is set to use the proper .NET Framework 4.
I compared the example from MSDN to examples from the referenced spec, as well as trying those outright but it hasn't helped. Considering the file doesn't seem to be accessed I'm not surprised.
(I am able to use PromptBuilder.AppendTextWithPronunciation just fine but it's a poor alternative for my use case.)
Is the example on MSDN broken? How do I use a lexicon with SpeechSynthesizer?
After a lot of research and pitfalls I can assure you that your assumption is just plain wrong.
For some reason System.Speech.Synthesis.SpeechSynthesizer.AddLexicon() adds the lexicon to an internal list, but doesn't use it at all.
Seems like nobody tried using it before and this bug went unnoticed.
Microsoft.Speech.Synthesis.SpeechSynthesizer.AddLexicon() (which belongs to the Microsoft Speech SDK) on the other hand works as expected (it passes the lexicon on to the COM object which interprets it as advertised).
Please refer to this guide on how to install the SDK: http://msdn.microsoft.com/en-us/library/hh362873%28v=office.14%29.aspx
Notes:
people reported the 64-bit version to cause COM exceptions (because the library does not get installed correctly), I confirmed this on a 64bit Windows 7 machine
using the x86 version circumvents the problem
be sure to install the runtime before the SDK
be sure to also install a runtime language (as advised on the linked page), as the SDK does not use the default system speech engine
You can use System.Speech.Synthesis.SpeechSynthesizer.SpeakSsml() instead of a lexicon.
This code changes pronunciation of "blue" to "yellow" and "dog" to "fish".
SpeechSynthesizer synth = new SpeechSynthesizer();

string text = "This is a blue dog";

Dictionary<string, string> phonemeDictionary = new Dictionary<string, string> { { "blue", "jelow" }, { "dog", "fyʃ" } };

foreach (var element in phonemeDictionary)
{
    text = text.Replace(element.Key, "<phoneme ph=\"" + element.Value + "\">" + element.Key + "</phoneme>");
}

text = "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xml:lang=\"en-US\">" + text + "</speak>";

synth.SpeakSsml(text);
I've been looking into this a little recently on Windows 10.
There are two things I've discovered with System.Speech.Synthesis.
Any voice you use must match the language declared in the lexicon file.
Inside the lexicon you have the language:
<lexicon version="1.0"
xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
alphabet="x-microsoft-ups" xml:lang="en-US">
I find that I can name my Lexicon as "blue.en-US.pls" and make a copy with "blue.en-GB.pls". Inside it will have xml:lang="en-GB"
In the code you'd use:
string langFile = Path.Combine(_appPath, $"blue.{synth.Voice.Culture.IetfLanguageTag}.pls");
synth.AddLexicon(new Uri(langFile), "application/pls+xml");
The other thing I discovered is that it doesn't work with "Microsoft Zira Desktop - English (United States)" at all. I don't know why.
This appears to be the default voice on Windows 10.
Access and change your default voice here:
%windir%\system32\speech\SpeechUX\SAPI.cpl
Otherwise you should be able to set it via code:
var voices = synth.GetInstalledVoices();
// US: David, Zira. UK: Hazel.
var voice = voices.First(v => v.VoiceInfo.Name.Contains("David"));
synth.SelectVoice(voice.VoiceInfo.Name);
I have David (United States) and Hazel (United Kingdom), and it works fine with either of those.
This appears to be directly related to whether the voice token in the registry has the SpLexicon key value. The Microsoft Zira Desktop voice does not have this registry value.
While Microsoft David Desktop voice has the following:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_EN-US_DAVID_11.0\Attributes\SpLexicon = {0655E396-25D0-11D3-9C26-00C04F8EF87C}
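If you want to check this on your own machine, a small sketch that walks the voice tokens and prints whether each one carries the SpLexicon value (registry path as in the answer above) could look like this:
using System;
using Microsoft.Win32;

class SpLexiconCheck
{
    static void Main()
    {
        const string tokensPath = @"SOFTWARE\Microsoft\Speech\Voices\Tokens";
        using (var tokens = Registry.LocalMachine.OpenSubKey(tokensPath))
        {
            if (tokens == null) return;

            foreach (var name in tokens.GetSubKeyNames())
            {
                using (var attributes = tokens.OpenSubKey(name + @"\Attributes"))
                {
                    // Voices without this value appear to ignore AddLexicon.
                    var spLexicon = attributes?.GetValue("SpLexicon");
                    Console.WriteLine("{0}: SpLexicon = {1}", name, spLexicon ?? "(missing)");
                }
            }
        }
    }
}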

Multilanguage Support In C#

I've developed a sample piece of software as a C# Windows application. How do I make it multilingual?
For example: one of the message boxes displays "Welcome to sample application".
I installed the software on a Chinese OS, but it displays the message in English only.
I'm using a "string table" (resource file) for this.
In the string table I need to create an entry for each message, in English and in Chinese.
It's a time-consuming process. Is there any other way to do this?
Create resource files for each language you want to support. (The original answer included a screenshot of the per-language .resx files in a project.)
Based on the language/current culture of the user, read values from the respective language resource file and display them in a label or MessageBox. Here's some sample code:
public static class Translate
{
    public static string GetLanguage()
    {
        return HttpContext.Current.Request.UserLanguages[0];
    }

    public static string Message(string key)
    {
        ResourceManager resMan = null;

        if (HttpContext.Current.Cache["resMan" + Global.GetLanguage()] == null)
        {
            resMan = Language.GetResourceManager(Global.GetLanguage());
            if (resMan != null) HttpContext.Current.Cache["resMan" + Global.GetLanguage()] = resMan;
        }
        else
            resMan = (ResourceManager)HttpContext.Current.Cache["resMan" + Global.GetLanguage()];

        if (resMan == null) return key;

        string originalKey = key;
        key = Regex.Replace(key, "[ ./]", "_");

        try
        {
            string value = resMan.GetString(key);
            if (value != null) return value;
            return originalKey;
        }
        catch (MissingManifestResourceException)
        {
            try
            {
                return HttpContext.GetGlobalResourceObject("en_au", key).ToString();
            }
            catch (MissingManifestResourceException mmre)
            {
                throw new System.IO.FileNotFoundException("Could not locate the en_au.resx resource file. This is the default language pack, and needs to exist within the Resources project.", mmre);
            }
            catch (NullReferenceException)
            {
                return originalKey;
            }
        }
        catch (NullReferenceException)
        {
            return originalKey;
        }
    }
}
In an ASP.NET application, you'd use it as follows. Where you previously had:
<span class="label">User:</span>
you would now put:
<span class="label"><%=Translate.Message("User") %>:</span>
If you are going to use resource files as Ram suggested, there is a good blog post about localisation here: ASP.NET MVC 2 Localization complete guide. (I should have mentioned that this is for ASP.NET MVC 2, so it may or may not be useful.) You still have to spend time making tables for each language. I have not used any other approach for this before; hope you find something useful.
You can do it using resource files. You need to create a resource file for each language, and the application uses the appropriate one at run time.
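As a rough sketch of that approach for a WinForms app (the resource names are placeholders: a neutral Strings.resx plus a per-language satellite such as Strings.zh-CN.resx, each containing a "WelcomeMessage" entry; "MyApp" stands in for your project's root namespace):
using System.Globalization;
using System.Resources;
using System.Threading;
using System.Windows.Forms;

static class Localizer
{
    // Base name = root namespace + resource file name without the .resx extension.
    static readonly ResourceManager ResMan =
        new ResourceManager("MyApp.Strings", typeof(Localizer).Assembly);

    public static string Welcome()
    {
        // ResourceManager picks the satellite assembly matching the current UI culture
        // and falls back to the neutral (English) resources when no match exists.
        return ResMan.GetString("WelcomeMessage", Thread.CurrentThread.CurrentUICulture);
    }

    public static void ShowWelcome(CultureInfo culture)
    {
        Thread.CurrentThread.CurrentUICulture = culture;   // e.g. new CultureInfo("zh-CN")
        MessageBox.Show(Welcome());
    }
}
On a Chinese OS the UI culture is already zh-CN, so the Chinese satellite is picked up automatically once it exists.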
ReSharper 5.0 can greatly reduce the time you spend on localization. It has features that allow easily moving strings to resources, and it underlines (if chosen so) all localizable strings so it's harder to miss them.
Given that it has a 30-day trial (full version) you can simply install it, do your job and uninstall it if you can't afford it, but I would suggest keeping it :-) It's really worth its price.
Software localization and globalization have always been tough and at times unwanted tasks for developers. ReSharper 5 greatly simplifies working with resources by providing a full stack of features for resx files and resource usages in C# and VB.NET code, as well as in ASP.NET and XAML markup.
Dedicated features include Move string to resource, Find usages of resource and other navigation actions. Combined with refactoring support, inspections and fixes, you get a convenient localization environment.

Extracting keyboard layouts from windows

OK, this is a slightly weird question.
We have a touch-screen application (i.e., no keyboard). When users need to enter text, the application shows a virtual keyboard, hand-built in WinForms.
Making these things by hand for each new language is monkey work. I figure that Windows must have this keyboard layout information hiding somewhere in some DLL. Would there be any way to get this information out of Windows?
Other ideas welcome (I figure at least generating the thing from an XML file has got to be better than doing it by hand in VS).
(Note: having said all which, I note that there is a Japanese keyboard, state machine and all..., so XML might not be sufficient)
UPDATE: pretty good series on this subject (I believe) here
Microsoft Keyboard Layout Creator can load system keyboards and export them as .klc files. Since it’s written in .NET you can use Reflector to see how it does that, and use reflection to drive it. Here's a zip file of .klc files for the 187 keyboards in Windows 8 created using the below C# code. Note that I originally wrote this for Windows XP, and now with Windows 8 and the on-screen keyboard, it is really slow and seems to crash the taskbar :/ However, it does work :)
using System;
using System.Collections;
using System.IO;
using System.Reflection;

class KeyboardExtractor {

    static Object InvokeNonPublicStaticMethod(Type t, String name, Object[] args)
    {
        return t.GetMethod(name, BindingFlags.Static | BindingFlags.NonPublic)
            .Invoke(null, args);
    }

    static void InvokeNonPublicInstanceMethod(Object o, String name, Object[] args)
    {
        o.GetType().GetMethod(name, BindingFlags.Instance | BindingFlags.NonPublic)
            .Invoke(o, args);
    }

    static Object GetNonPublicProperty(Object o, String propertyName) {
        return o.GetType().GetField(propertyName,
                BindingFlags.Instance | BindingFlags.NonPublic)
            .GetValue(o);
    }

    static void SetNonPublicField(Object o, String propertyName, Object v) {
        o.GetType().GetField(propertyName,
                BindingFlags.Instance | BindingFlags.NonPublic)
            .SetValue(o, v);
    }

    [STAThread] public static void Main() {
        System.Console.WriteLine("Keyboard Extractor...");
        KeyboardExtractor ke = new KeyboardExtractor();
        ke.extractAll();
        System.Console.WriteLine("Done.");
    }

    Assembly msklcAssembly;
    Type utilitiesType;
    Type keyboardType;

    String baseDirectory;

    public KeyboardExtractor() {
        msklcAssembly = Assembly.LoadFile("C:\\Program Files\\Microsoft Keyboard Layout Creator 1.4\\MSKLC.exe");
        utilitiesType = msklcAssembly.GetType("Microsoft.Globalization.Tools.KeyboardLayoutCreator.Utilities");
        keyboardType = msklcAssembly.GetType("Microsoft.Globalization.Tools.KeyboardLayoutCreator.Keyboard");

        baseDirectory = Directory.GetCurrentDirectory();
    }

    public void extractAll() {
        DateTime startTime = DateTime.UtcNow;

        SortedList keyboards = (SortedList)InvokeNonPublicStaticMethod(
            utilitiesType, "KeyboardsOnMachine", new Object[] {false});

        DateTime loopStartTime = DateTime.UtcNow;
        int i = 0;
        foreach (DictionaryEntry e in keyboards) {
            i += 1;
            Object k = e.Value;
            String name = (String)GetNonPublicProperty(k, "m_stLayoutName");
            String layoutHexString = ((UInt32)GetNonPublicProperty(k, "m_hkl"))
                .ToString("X");

            TimeSpan elapsed = DateTime.UtcNow - loopStartTime;
            Double ticksRemaining = ((Double)elapsed.Ticks * keyboards.Count)
                / i - elapsed.Ticks;
            TimeSpan remaining = new TimeSpan((Int64)ticksRemaining);

            String msgTimeRemaining = "";
            if (i > 1) {
                // Trim milliseconds
                remaining = new TimeSpan(remaining.Hours, remaining.Minutes,
                    remaining.Seconds);
                msgTimeRemaining = String.Format(", about {0} remaining",
                    remaining);
            }

            System.Console.WriteLine(
                "Saving {0} {1}, keyboard {2} of {3}{4}",
                layoutHexString, name, i, keyboards.Count,
                msgTimeRemaining);

            SaveKeyboard(name, layoutHexString);
        }

        System.Console.WriteLine("{0} elapsed", DateTime.UtcNow - startTime);
    }

    private void SaveKeyboard(String name, String layoutHexString) {
        Object k = keyboardType.GetConstructors(
                BindingFlags.Instance | BindingFlags.NonPublic)[0]
            .Invoke(new Object[] {
                new String[] {"", layoutHexString},
                false});
        SetNonPublicField(k, "m_fSeenOrHeardAboutPropertiesDialog", true);
        SetNonPublicField(k, "m_stKeyboardTextFileName",
            String.Format("{0}\\{1} {2}.klc",
                baseDirectory, layoutHexString, name));
        InvokeNonPublicInstanceMethod(k, "mnuFileSave_Click",
            new Object[] {new Object(), new EventArgs()});
        ((IDisposable)k).Dispose();
    }
}
Basically, it gets a list of all the keyboards on the system, then for each one, loads it in MSKLC, sets the "Save As" filename, lies about whether it's already configured the custom keyboard properties, and then simulates a click on the File -> Save menu item.
It is a fairly well-known fact that MSKLC is unable to faithfully import & reproduce keyboard layouts for all of the .DLL files supplied by Windows–especially those in Windows 8 & above. And it doesn't do any good to know where those files are if you can't extract any meaningful or helpful information from them.
This is documented by Michael Kaplan on his blog (he was a developer of MSKLC) which I see you have linked to above.
When MSKLC encounters anything it does not understand, that portion is removed.
Extracting the layout using MSKLC will work for most keyboards, but there are a few–namely the Cherokee keyboard, and the Japanese & Korean keyboards (to name a few, I'm not sure how many more there are)–for which the extracted layout will NOT accurately or completely reflect the actual usage & features of the keyboard.
The Cherokee keyboard has chained dead keys which MSKLC doesn't support. And the far Eastern keyboards have modifier keys which MSKLC isn't aware of–that means entire layers/shift states which are missing!
Michael Kaplan supplies some code and unlocks some of the secrets of MSKLC and the accompanying software that can be used to get around some of these limitations, but it requires a fair amount of doing things by hand–exactly what you're trying to avoid! Plus, Michael's objectives are aimed at creating keyboards with features that MSKLC can not create or understand, but which DO work in Windows (which is the opposite of what the OP is trying to accomplish).
I am sure that my solution comes too late to be of use to the OP, but perhaps it will be helpful in the future to someone in a similar situation. That is my hope and reason for posting this.
So far all I've done is explain that the other answers are insufficient. Even the best one will not and can not fully & accurately reproduce all of Windows' native keyboards and render them into KLC source files. This is really unfortunate and it is certainly not the fault of its author because that is a very clever piece of code/script! Thankfully the script & the source files (whose link may or may not still work) is useful & effective for the majority of Windows' keyboards, as well as any custom keyboards created by MSKLC.
The keyboards that have the advanced features that MSKLC doesn't support were created by the Windows DDK, but those features are not officially documented. Although one can learn quite a bit about their potential by studying the source files provided with MSKLC.
Sadly the only solution I can offer is 3rd party, paid software called KbdEdit. I believe it is the only currently available solution which is actually able to faithfully decode & recreate any of the Windows supplied keyboards–although there are a few advanced features which even it can not reproduce, such as key combinations/hotkeys which perform special native-language functions (for example Ctrl+CapsLock to activate KanaLock, a Japanese modifier layer). KbdEdit DOES faithfully reproduce that modifier layer which MSKLC will strip away; it just doesn't support this alternate method of activating that shift state if you don't have a Japanese keyboard with a Kana lock key. Although, it will allow you to convert a key on your keyboard to a Kana key (perhaps Scroll Lock?).
Fortunately, none of those unsupported features are even applicable to an on-screen keyboard.
KbdEdit is a really powerful & amazing tool, and it has been worth every penny I paid for it! (And that's NOT something I would say about virtually any other paid software…)
Even though KbdEdit is 3rd party software, it is only needed to create the keyboards, not to use them. All of the keyboards it creates work natively on any Windows system without KbdEdit being installed.
It supports up to 15 modifier states and three additional modifier keys, one of which is togglable, like CapsLock. It also supports chained dead keys, and remapping any of the keys on most any keyboard.
Why don't you use the on-screen keyboard (osk.exe)? It looks like you're re-inventing the wheel, and not the easiest one!
I know where these DLL files are located.
In your registry, look at:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layouts
where each branch has a value like "Layout File"="KBDSP.dll". The root directories are
C:\Windows\System32
and
C:\Windows\SysWOW64
That is where all the keyboard layout files are located. For example, KBDUS.dll means "keyboard for US".
I tried to substitute the DLL file with my custom DLL made by MSKLC, and I found that it loads the layout mapping images automatically in the "Language" - "input method" - "preview".
So we know that the mapping is there in the DLL.
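To enumerate that registry branch programmatically, a small sketch like this lists each layout ID together with its DLL and display text (the "Layout Text" value may be absent on newer Windows versions, which use "Layout Display Name" instead):
using System;
using Microsoft.Win32;

class LayoutLister
{
    static void Main()
    {
        const string path = @"SYSTEM\CurrentControlSet\Control\Keyboard Layouts";
        using (var root = Registry.LocalMachine.OpenSubKey(path))
        {
            if (root == null) return;

            foreach (var klid in root.GetSubKeyNames())       // e.g. "00000409" for US English
            {
                using (var key = root.OpenSubKey(klid))
                {
                    var file = key?.GetValue("Layout File");  // e.g. "KBDUS.DLL"
                    var text = key?.GetValue("Layout Text");  // human-readable name, if present
                    Console.WriteLine("{0}: {1} ({2})", klid, file, text);
                }
            }
        }
    }
}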
Please check the following Windows API:
[DllImport("user32.dll")]
private static extern IntPtr LoadKeyboardLayout(string pwszKLID, uint Flags);
Check MSDN here
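Once you have a layout handle, you can (as a rough sketch, not something from the linked MSDN page) ask Windows which character a virtual key produces under that layout with ToUnicodeEx, which is essentially the lookup an on-screen keyboard needs; note that dead keys need extra handling because ToUnicodeEx mutates internal keyboard state for them:
using System;
using System.Runtime.InteropServices;
using System.Text;

static class KeyProbe
{
    [DllImport("user32.dll")]
    static extern IntPtr LoadKeyboardLayout(string pwszKLID, uint Flags);

    [DllImport("user32.dll")]
    static extern int ToUnicodeEx(uint wVirtKey, uint wScanCode, byte[] lpKeyState,
        StringBuilder pwszBuff, int cchBuff, uint wFlags, IntPtr dwhkl);

    const uint KLF_ACTIVATE = 0x00000001;

    static void Main()
    {
        // "00000407" is the German layout; any KLID from the registry branch above works.
        IntPtr hkl = LoadKeyboardLayout("00000407", KLF_ACTIVATE);

        var keyState = new byte[256];      // all modifiers released
        var buffer = new StringBuilder(8);

        // Ask which character virtual-key 0x5A produces under the loaded layout.
        int count = ToUnicodeEx(0x5A, 0, keyState, buffer, buffer.Capacity, 0, hkl);
        if (count > 0)
            Console.WriteLine("Key 0x5A produces: " + buffer.ToString(0, count));
    }
}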
