NullReferenceException while creating a SpeechRecognitionEngine object - C#

I'm currently trying to implement some speech recognition in one of my C# projects, and I found this library:
https://learn.microsoft.com/en-us/dotnet/api/system.speech.recognition.speechrecognitionengine?view=netframework-4.8
It provides this code as an example:
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create an in-process speech recognizer for the en-US locale.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                // Create and load a dictation grammar.
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
Which should do exactly what I need.
So I created a new project in Visual Studio, copy-pasted the code, and ran it.
There is no compilation error, but the SpeechRecognitionEngine constructor, even though it is given a non-null CultureInfo argument, throws a System.NullReferenceException in System.Speech.dll.
To try to debug this, I made sure that
new System.Globalization.CultureInfo("en-US")
returns a non-null object and that this culture is installed.
I also updated my framework to 4.8 and ran the project both as administrator and as a normal user.
I'm also building for the x64 platform, as the build fails with Any CPU.
It seems to me like I misconfigured something somewhere, as the code itself shouldn't be wrong.
Do you have any idea how to solve my problem?
Thank you for your help.
EDIT: this may be linked to this problem, though I don't know if it's of any help:
NullReferenceException when starting recognizer with RecognizeAsync

I had the same error, but found the issue on my end. If you have the same issue as me, it may be that your project is using System.Speech.dll from an older framework, which I believe was causing my error. Or it may have been an incompatibility between my target framework and the DLL?
I used NuGet to add System.Speech by Microsoft to my project. It added System.Speech (6.0.0) to the project references, and this removed the null reference exception I was getting.
For information only, I had initially added a reference to the v3.0 framework assembly, and that was what was causing my issue.
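If you want to check which System.Speech.dll your project actually loads, here is a minimal diagnostic sketch (plain reflection, nothing specific to this package); after switching to the NuGet reference it should point at the package rather than an old framework copy:
using System;
using System.Speech.Recognition;

class Program
{
    static void Main()
    {
        // Print the location and version of the System.Speech assembly that was loaded.
        var assembly = typeof(SpeechRecognitionEngine).Assembly;
        Console.WriteLine(assembly.Location);
        Console.WriteLine(assembly.GetName().Version);
    }
}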

Related

'Speech' in 'using System.Speech.Synthesis' is not recognized.

I'm trying to use the speech synthesis function for a universal app.
I looked at the Microsoft documentation and it says that the namespace is System.Speech.Synthesis.
However, when I type System.Speech.Synthesis, it says that Speech is not recognized.
What am I doing wrong?
Here is a working example:
using System;
using System.Speech.Synthesis;

class Program
{
    static void Main(string[] args)
    {
        // ...
        // Speech helper
        SpeechSynthesizer reader = new SpeechSynthesizer();
        const string msg = "Hello";
        Console.WriteLine(msg);
        reader.SpeakAsync(msg);

        // Keep the console open so the asynchronous speech can finish.
        Console.ReadLine();
    }
}
Also, make sure you are referencing 'System.Speech':
You need to add the reference to your project: right-click References, choose Add Reference, and find System.Speech there.
If that still doesn't work, right-click the variable in your .cs file and use Resolve to bring in System.Speech.

Registering functions without using them: "Object reference not set to an instance of an object."

I have an IRC bot in C# and I want to use Lua scripting for the moment. On bot startup I want to register functions, and detect when a new file has been added and then load it. I have already done the new-file / reload-scripts part, but when I hit run I get "Object reference not set to an instance of an object." on a custom function I want users to be able to use.
Here's the current code:
public Lua lua;

public void RegisterFunctions() {
    lua.RegisterFunction("print", this, typeof(DashLua).GetMethod("ConsoleOut"));
}

#region Custom Functions for Lua
public void ConsoleOut(String line) {
    if (line == null) {
        Console.WriteLine("Script error: print() can't be null.");
    } else {
        Console.WriteLine(line);
    }
}
In the Main() of the bot I have just 2 lines currently:
DashLua dash = new DashLua();
dash.RegisterFunctions();
The NLua package from NuGet only works with x64 builds, which is why none of this actually worked. So if you're building a project and really need Lua as a scripting language, you need to change your build to x64 in the project's properties. That said, you can get the Win32 version from their website; it's a shame it's not on NuGet.
EDIT: when you go to download the Win32 build, it's empty. I guess they have not built it and encourage you to build it yourself.
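Separately from the build platform, note that the lua field in the question's snippet is never assigned, which by itself produces the same "Object reference not set" message. A minimal sketch of one possible initialization, assuming the NLua Lua class used above (the startup code here is illustrative, not taken from the actual bot):
using NLua;

// Hypothetical startup: create the Lua state before registering functions.
DashLua dash = new DashLua();
dash.lua = new Lua();          // without this, lua is null and RegisterFunctions throws
dash.RegisterFunctions();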

System.InvalidOperationException "The language for the grammar does not match the language of the speech recognizer"

I am currently recreating a tutorial project that I saw recently, but I am facing certain problems. Firstly, I am using Windows 7 Home Premium and my OS is Turkish.
Because of this, System.Speech only works when I do text-to-speech; otherwise it throws an exception saying no speech recognizer is installed. I checked this in the Control Panel as well, and it says speech recognition is not available in this language. Since I do not have the Ultimate version of Windows, I can't use language packs to change the language entirely.
After a bit of research on Stack Overflow I found out that installing the Microsoft Speech Platform and changing System.Speech to Microsoft.Speech works. So I followed the instructions on this web site (http://msdn.microsoft.com/en-us/library/hh362873.aspx) and installed the components, including the en-US language pack.
I changed my code to reflect the changes and now I am getting different errors on different lines. Here is my code:
using System;
using System.Globalization;
using Microsoft.Speech.Recognition;
using Microsoft.Speech.Synthesis;
using System.Windows.Forms;

namespace SpeechRecognitionTest
{
    public partial class Form1 : Form
    {
        private SpeechRecognitionEngine _speechRecognitionEngine = new SpeechRecognitionEngine(new CultureInfo("en-US"));
        private SpeechSynthesizer _speechSynthesizer = new SpeechSynthesizer();
        private PromptBuilder _promptBuilder = new PromptBuilder();

        public Form1()
        {
            InitializeComponent();
        }

        private void btnSpeakText_Click(object sender, EventArgs e)
        {
            _promptBuilder.ClearContent();
            _promptBuilder.AppendText(txtSpeech.Text);
            _speechSynthesizer.SetOutputToDefaultAudioDevice();
            _speechSynthesizer.Speak(_promptBuilder);
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            btnStart.Enabled = false;
            btnEnd.Enabled = true;

            var choicesList = new Choices();
            choicesList.Add(new string[] { "hello", "yes" });
            var grammar = new Grammar(new GrammarBuilder(choicesList));

            _speechRecognitionEngine.RequestRecognizerUpdate();
            _speechRecognitionEngine.LoadGrammar(grammar);
            _speechRecognitionEngine.SpeechRecognized += _speechRecognitionEngine_SpeechRecognized;
            _speechRecognitionEngine.SetInputToDefaultAudioDevice();
            _speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
        }

        void _speechRecognitionEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            MessageBox.Show(e.Result.Text);
        }

        private void btnEnd_Click(object sender, EventArgs e)
        {
            btnStart.Enabled = true;
            btnEnd.Enabled = false;
        }
    }
}
Firstly, when I try text to speech I get the error "An unhandled exception of type 'System.IO.FileNotFoundException' occurred in Microsoft.Speech.dll" on this line:
_speechSynthesizer.Speak(_promptBuilder);
Secondly, when I try to do voice recognition I get the following exception:
An unhandled exception of type 'System.InvalidOperationException' occurred in Microsoft.Speech.dll
Additional information: The language for the grammar does not match the language of the speech recognizer.
On the line:
_speechRecognitionEngine.LoadGrammar(grammar);
I did search the internet and found mixed responses to this problem. Some people could use System.Speech without a problem since they had the English language installed, while some non-English OS owners solved the problem through Microsoft.Speech. But there is no definitive answer on this. I am currently out of options and would really appreciate it if someone could explain what's wrong, or whether I can run this code on my machine at all given the native OS language.
The exception from the synthesis engine is caused by it trying to find a default voice and failing. If you explicitly specify a voice (using SelectVoiceByHints or GetInstalledVoices(CultureInfo)), synthesis will succeed.
Second, GrammarBuilder objects have a Culture property (which defaults to the current UI culture). You will need to set it to the culture of the recognizer before recognition will work.
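A minimal sketch of how the two button handlers from the question might be adjusted, assuming an en-US Microsoft Speech Platform runtime, recognizer, and voice are installed:
private void btnSpeakText_Click(object sender, EventArgs e)
{
    _promptBuilder.ClearContent();
    _promptBuilder.AppendText(txtSpeech.Text);

    // Pick a voice explicitly so the synthesizer does not fail
    // while searching for a default voice.
    _speechSynthesizer.SelectVoiceByHints(
        VoiceGender.NotSet, VoiceAge.NotSet, 0, new CultureInfo("en-US"));
    _speechSynthesizer.SetOutputToDefaultAudioDevice();
    _speechSynthesizer.Speak(_promptBuilder);
}

private void btnStart_Click(object sender, EventArgs e)
{
    btnStart.Enabled = false;
    btnEnd.Enabled = true;

    var choicesList = new Choices();
    choicesList.Add(new string[] { "hello", "yes" });

    // Make the grammar's culture match the recognizer's culture
    // instead of the current (Turkish) UI culture.
    var grammarBuilder = new GrammarBuilder(choicesList);
    grammarBuilder.Culture = _speechRecognitionEngine.RecognizerInfo.Culture;
    var grammar = new Grammar(grammarBuilder);

    _speechRecognitionEngine.RequestRecognizerUpdate();
    _speechRecognitionEngine.LoadGrammar(grammar);
    _speechRecognitionEngine.SpeechRecognized += _speechRecognitionEngine_SpeechRecognized;
    _speechRecognitionEngine.SetInputToDefaultAudioDevice();
    _speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
}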

ApplicationView does not contain a definition for GetForCurrentView?

I'm currently following a tutorial from MSDN which isn't very clear on some things. The issue I am having is that the method they suggest I use is apparently not available on that class.
Here is the link to the tutorial : http://msdn.microsoft.com/en-us/library/windows/apps/dn495655.aspx
Here is the code that I am using.
In my App.xaml.cs (not my main page) I have an event handler:
public App()
{
    this.InitializeComponent();
    Window.Current.SizeChanged += Current_SizeChanged;
    this.Suspending += OnSuspending;
}
Underneath this is the stub method:
void Current_SizeChanged(object sender, Windows.UI.Core.WindowSizeChangedEventArgs e)
{
    // Get the new view state, but it's not letting me use GetForCurrentView,
    // almost like it doesn't exist.
    string CurrentViewState = ApplicationView.GetForCurrentView().Orientation.ToString();

    // Trigger the Visual State Manager.
    VisualStateManager.GoToState(this, CurrentViewState, true);
}
If anyone else has followed this tutorial, can they tell me what is going wrong?
Have I put the code in the wrong page?
Am I missing a library?
I am following this Microsoft tutorial word for word and it is giving me the error in the title of my post. I have done research and I am using the latest version of Visual Studio, and it is still not letting me use this method because it apparently does not exist.
Try adding using Windows.UI.ViewManagement; at the top of the file.
The minimum supported version of this API is Windows 8.1, so you can't use it with the Windows 8 API.
MSDN reference
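For reference, a minimal sketch of the using directives the handler above relies on at the top of App.xaml.cs, assuming a Windows 8.1 (or later) target:
using Windows.UI.ViewManagement;  // ApplicationView.GetForCurrentView
using Windows.UI.Xaml;            // Window, VisualStateManager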

How to use System.Speech on non-English Win Vista Business

I'd like to try speech recognition for controlling a program. I wrote a test program in C#, and when I'm debugging it, an error occurs every time:
System.Runtime.InteropServices.COMException (0x80004005): A call to a COM component returned error HRESULT E_FAIL.
at System.Speech.Recognition.RecognizerBase.Initialize(SapiRecognizer recognizer, Boolean inproc)
at System.Speech.Recognition.SpeechRecognitionEngine.get_RecoBase()
at System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar(Grammar grammar)
It looks like the error is caused by engine.LoadGrammar(new DictationGrammar());
My notebook has the Czech version of Vista installed, and maybe the problem is that the speech recognition language is not the same as the OS language.
Is there a way to develop with System.Speech on a non-English OS, or did I go wrong somewhere? Language itself is not the problem; I'd like to use English for speech recognition, but I cannot get an English Vista or an MUI language pack.
Full code is below.
Thanks a lot!
using System;
using System.Windows;
using System.Speech.Recognition;

namespace rozpoznani_reci_WPF
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            SpeechRecognitionEngine engine = new SpeechRecognitionEngine();
            try
            {
                engine.LoadGrammar(new DictationGrammar());
                engine.SetInputToDefaultAudioDevice();
                engine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
            }
            catch (Exception e)
            {
                //MessageBox.Show(e.ToString());
                textBox1.Text = e.ToString();
            }
        }

        void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result != null)
            {
                textBox1.Text = e.Result.Text + " ";
            }
        }
    }
}
According to the MSDN documentation on DictationGrammar, the argument-free constructor
Initializes a new instance of the DictationGrammar class for the default dictation grammar provided by Windows Desktop Speech Technology.
Is there a Czech-language DictationGrammar available on your machine? If not, you need to create one and use the other constructor, DictationGrammar(String), to load one from a URI. You can also use GrammarBuilder to construct your own grammar and load it instead using SpeechRecognizer.LoadGrammar().
You might also find this link useful; it's from 2008, but you did ask about Vista. :-)
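If you go the GrammarBuilder route, here is a minimal sketch, assuming a recognizer for the chosen culture (en-US here) is actually installed on the machine; the phrase list is only an illustration:
using System;
using System.Globalization;
using System.Speech.Recognition;

// Build a small command grammar instead of relying on the default dictation grammar.
CultureInfo culture = new CultureInfo("en-US");
SpeechRecognitionEngine engine = new SpeechRecognitionEngine(culture);

GrammarBuilder builder = new GrammarBuilder(new Choices("start", "stop", "exit"));
builder.Culture = culture;   // keep the grammar culture in sync with the recognizer

engine.LoadGrammar(new Grammar(builder));
engine.SpeechRecognized += (s, args) => Console.WriteLine(args.Result.Text);
engine.SetInputToDefaultAudioDevice();
engine.RecognizeAsync(RecognizeMode.Multiple);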
