I tried implementing a simple speech recognition WinForms program in C#, like the one described in Michael Levy's answer here:
good Speech recognition API
The problem I have is that whenever I run the program, Windows Speech Recognition also opens and acts on what I am saying. In addition, when the program starts I have to say "start listening" before speech recognition works.
My question is: how can I use speech recognition without Windows Speech Recognition also acting on what I am saying? I don't need the Windows Speech Recognition UI to open at all, and I need recognition to work without saying "start listening" first.
Thanks for your answers
Are you sure you are using an in-process recognizer for your application only? You do this by instantiating a SpeechRecognitionEngine in your application; see the SpeechRecognitionEngine class. I suspect you are instantiating a shared recognizer instead - see the SpeechRecognizer class.
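A minimal sketch of the in-process approach (dictation is used here just for illustration; load your own grammars as needed):

using System;
using System.Speech.Recognition;

class Program
{
    static void Main()
    {
        // In-process recognizer: private to this application, so the
        // Windows Speech Recognition UI never opens and no "start
        // listening" command is needed.
        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new DictationGrammar());
            engine.SetInputToDefaultAudioDevice();
            engine.SpeechRecognized += (s, e) =>
                Console.WriteLine("Recognized: " + e.Result.Text);

            engine.RecognizeAsync(RecognizeMode.Multiple);
            Console.WriteLine("Listening... press Enter to quit.");
            Console.ReadLine();
        }
    }
}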
Related
I have a slider and a list of available voices that I want to allow the user to change on the fly during speech synthesis using the SpeechSynthesizer class. I am able to do all of that before speech starts, using SpeechSynthesizer.Options.SpeakingRate and SpeechSynthesizer.Voice. However, I want the user to be able to change these settings while speech is occurring.
Changing the properties while speech synthesis is running asynchronously doesn't work. I've tried creating a new SpeechSynthesizer() and changing the voice, but that doesn't affect the synthesis already in progress either. I know this is possible, because it is done in Microsoft Edge. Any ideas?
I know this is possible, because it is done in Microsoft Edge.
First of all, I'm not sure how Microsoft Edge achieves this; it is very likely that Edge is not using this API.
I checked the documentation and tested the official sample, and I have not found any option to modify the Voice or SpeakingRate after speech has started. So this appears to be impossible with the UWP API.
The desktop System.Speech synthesizer, by contrast, exposes a SpeechSynthesizer.Rate property:
using System.Speech.Synthesis;

class Example
{
    static void Main()
    {
        // Initialize a new instance of the SpeechSynthesizer.
        SpeechSynthesizer synth = new SpeechSynthesizer();

        // Set a value for the speaking rate.
        synth.Rate = -2;

        // Configure the audio output.
        synth.SetOutputToDefaultAudioDevice();

        // Speak a text string synchronously.
        synth.Speak("This example speaks a string with the speaking rate set to -2.");
    }
}
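As a possible workaround (my own assumption, not something the documentation promises): speak the text one sentence at a time, so that a Rate or Voice the user changed is picked up at the next sentence boundary:

using System.Speech.Synthesis;

class RateChangeSketch
{
    static void Main()
    {
        var synth = new SpeechSynthesizer();
        synth.SetOutputToDefaultAudioDevice();

        string[] sentences =
        {
            "This is the first sentence.",
            "This is the second sentence.",
            "This is the third sentence."
        };

        foreach (var sentence in sentences)
        {
            // Rate applies to the next prompt, so a value changed by the
            // user (e.g. via the slider) takes effect between sentences.
            synth.Rate = CurrentRate();
            synth.Speak(sentence);
        }
    }

    // Hypothetical stand-in for reading the slider's current value.
    static int CurrentRate() => 0;
}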
Is it possible to create a grammar file for speech recognition and use SpeechRecognizer offline, like Cortana?
I have only found Speech.Recognizer examples... it seems the desktop recognizer can use SpeechRecognitionEngine.LoadGrammar(). I wonder if something similar is possible on a Windows Phone?
Is it possible to use SpeechRecognizer offline with a grammar file? Or is it possible to use Cortana's ability to recognize speech-to-text offline?
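As far as I know, on Windows Phone 8.1 / UWP the Windows.Media.SpeechRecognition.SpeechRecognizer can load an SRGS grammar file via SpeechRecognitionGrammarFileConstraint, and grammar-file constraints are handled by the on-device engine. A minimal sketch (grammar.xml is a hypothetical file packaged with the app):

using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;
using Windows.Storage;

public static class OfflineRecognition
{
    public static async Task<string> ListenAsync()
    {
        var recognizer = new SpeechRecognizer();

        // Load the SRGS grammar file from the app package.
        StorageFile grammarFile = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///grammar.xml"));
        recognizer.Constraints.Add(
            new SpeechRecognitionGrammarFileConstraint(grammarFile));

        // Grammar-file constraints run against the local recognizer,
        // so no network connection is required.
        await recognizer.CompileConstraintsAsync();
        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text;
    }
}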
I want to make a program using Windows speech recognition and the respective .NET API, and I was wondering: will the program automatically use the grammar and all the data collected from training the recognition engine, or will I have to force it? And if it has to be forced, how is that achieved? The language to be used is C#.
The training is used so Windows Speech Recognition better understands what you say and gives more accurate results. This training data is stored in profiles. When you use Windows Speech Recognition from C#, you don't have to select a profile; Windows does this automatically.
I want to know how I can add commands to Windows 7 shared speech recognition using SAPI 5.4 in C#. There is already a Microsoft application named WSRMacros, but I need to do it programmatically myself.
Any help and explanation would be extremely appreciated.
Just create an application that uses System.Speech.Recognition.SpeechRecognizer. This loads the shared recognizer, which starts Windows Speech Recognition.
Create your grammars, and off you go!
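A minimal sketch (the two command phrases are placeholders):

using System;
using System.Speech.Recognition;

class Program
{
    static void Main()
    {
        // Shared recognizer: plugs your commands into the running
        // Windows Speech Recognition session.
        var recognizer = new SpeechRecognizer();

        var commands = new Choices("open report", "close report");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

        recognizer.SpeechRecognized += (s, e) =>
            Console.WriteLine("Heard: " + e.Result.Text);

        Console.WriteLine("Say a command; press Enter to quit.");
        Console.ReadLine();
    }
}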
I need to perform actions in my Desktop app when a user says certain things, for example, "Save Document" or "Save As" or "Save changes" will raise its corresponding event.
But I don't want to rely on, or even implement, buttons (this is an app just for me), so setting the AccessibleName or the like is not good enough. I need more control.
Is there a way to "listen" for commands in a Windows WPF Desktop app? Then raise an event when that command has been spoken?
Since everyone is posting links to the Microsoft Speech API, you might still be lost as to how to use it. So here is a tutorial for using the Microsoft Speech API.
Have you seen the Microsoft Speech API, which supports speech recognition?
You are looking for the Microsoft Speech API. (This is a Get Started with Speech Recognition guide with a neat code example; though it is for WinForms, it should work for WPF too.) It allows you to create a grammar that can be recognized, and to handle the recognized input.
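A minimal sketch of what that looks like in a WPF window, using the System.Speech engine and the phrases from the question:

using System.Speech.Recognition;
using System.Windows;

public partial class MainWindow : Window
{
    private readonly SpeechRecognitionEngine _engine = new SpeechRecognitionEngine();

    public MainWindow()
    {
        InitializeComponent();

        var commands = new Choices("Save Document", "Save As", "Save changes");
        _engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        _engine.SetInputToDefaultAudioDevice();
        _engine.SpeechRecognized += OnSpeechRecognized;
        _engine.RecognizeAsync(RecognizeMode.Multiple);
    }

    private void OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        // Recognition events arrive on a background thread, so marshal
        // to the UI thread before touching any WPF state.
        Dispatcher.Invoke(() =>
        {
            switch (e.Result.Text)
            {
                case "Save Document": /* raise your Save event */ break;
                case "Save As":       /* raise your Save As event */ break;
                case "Save changes":  /* raise your Save Changes event */ break;
            }
        });
    }
}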
I'm looking into adding speech recognition to my fork of the Hotspotizer Kinect-based app (http://github.com/birbilis/hotspotizer). After some searching, I see that you can't mark up the actionable UI elements with related speech commands in order to simulate user actions on them, as one would expect if speech input were integrated into WPF. I'm thinking of making a XAML markup extension to do that, unless someone can point me to pre-existing work on this that I could reuse...
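To make the idea concrete, here is a rough sketch of what such markup could look like, using an attached property rather than a markup extension; every name in it is hypothetical, not pre-existing work:

using System.Collections.Generic;
using System.Linq;
using System.Speech.Recognition;
using System.Windows;
using System.Windows.Controls.Primitives;

// Hypothetical sketch: Speech.Command="Save Document" on a button-like
// element registers the phrase; Speech.Start() wires up recognition.
public static class Speech
{
    private static readonly Dictionary<string, ButtonBase> Targets =
        new Dictionary<string, ButtonBase>();

    public static readonly DependencyProperty CommandProperty =
        DependencyProperty.RegisterAttached(
            "Command", typeof(string), typeof(Speech),
            new PropertyMetadata(null, (d, e) =>
            {
                if (d is ButtonBase button && e.NewValue is string phrase)
                    Targets[phrase] = button;
            }));

    public static void SetCommand(DependencyObject d, string value) =>
        d.SetValue(CommandProperty, value);

    public static string GetCommand(DependencyObject d) =>
        (string)d.GetValue(CommandProperty);

    // Call once, e.g. from Window.Loaded, after the visual tree exists.
    public static void Start()
    {
        var engine = new SpeechRecognitionEngine();
        engine.LoadGrammar(new Grammar(new GrammarBuilder(
            new Choices(Targets.Keys.ToArray()))));
        engine.SetInputToDefaultAudioDevice();
        engine.SpeechRecognized += (s, e) =>
        {
            // Raise the Click routed event to simulate a user action.
            if (Targets.TryGetValue(e.Result.Text, out var target))
                target.Dispatcher.Invoke(() => target.RaiseEvent(
                    new RoutedEventArgs(ButtonBase.ClickEvent)));
        };
        engine.RecognizeAsync(RecognizeMode.Multiple);
    }
}

In XAML this would read something like <Button local:Speech.Command="Save Document" ... />, with Speech.Start() called from the window's Loaded handler. Note that raising ClickEvent fires Click handlers but does not execute a bound ICommand; simulating a full user click would need something like the button's automation peer instead.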
The following links should be useful:
http://www.wpf-tutorial.com/audio-video/speech-recognition-making-wpf-listen/
http://www.c-sharpcorner.com/uploadfile/mahesh/programming-speech-in-wpf-speech-recognition/
http://blogs.msdn.com/b/rlucero/archive/2012/01/17/speech-recognition-exploring-grammar-based-recognition.aspx
https://msdn.microsoft.com/en-us/library/hh855387.aspx (make use of Kinect mic array audio input)
http://kin-educate.blogspot.gr/2012/06/speech-recognition-for-kinect-easy-way.html
https://channel9.msdn.com/Series/KinectQuickstart/Audio-Fundamentals
https://msdn.microsoft.com/en-us/library/hh855359.aspx?f=255&MSPPError=-2147217396#Software_Requirements
https://www.microsoft.com/en-us/download/details.aspx?id=27225
https://www.microsoft.com/en-us/download/details.aspx?id=27226
http://www.redmondpie.com/speech-recognition-in-a-c-wpf-application/
http://www.codeproject.com/Articles/55383/A-WPF-Voice-Commanded-Database-Management-Applicat
http://www.codeproject.com/Articles/483347/Speech-recognition-speech-to-text-text-to-speech-a
http://www.c-sharpcorner.com/uploadfile/nipuntomar/speech-to-text-in-wpf/
http://www.w3.org/TR/speech-grammar/
https://msdn.microsoft.com/en-us/library/hh361625(v=office.14).aspx
https://msdn.microsoft.com/en-us/library/hh323806.aspx
https://msdn.microsoft.com/en-us/library/system.speech.recognition.speechrecognitionengine.requestrecognizerupdate.aspx
http://blogs.msdn.com/b/rlucero/archive/2012/02/03/speech-recognition-using-multiple-grammars-to-improve-recognition.aspx