Windows Speech Recognition - Using native commands/dialogs in API - c#

I have just started using Windows Speech Recognition (in Windows 8.1) and the System.Speech API (in C#), which seems to tie into its underlying engine.
However, the API doesn't seem to expose the semantic commands and dialogs such as "Correct [word]", "Correct that", or "Spell [word]", which pull up the Alternates Dialog and the Spelling Dialog (attractive, easy-to-use screens that add a lot of power and value).
Is there any way either to access these commands/dialogs through the API, or to tie my application into the external Windows Speech Recognition application to support these advanced functions?

Related

Windows Speech Recognition in C#

I want to make a program using Windows Speech Recognition and the corresponding .NET API, and I was wondering: will the program automatically use the grammar and all the data collected from training the recognition engine? Or will I have to force it? And if it has to be forced, how is that achieved? The language to be used is C#.
The training is used so Windows Speech Recognition better understands what you say and gives more accurate results. This training data is stored in profiles. When you use Windows Speech Recognition in C#, you don't have to select a profile, Windows does this automatically.
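To illustrate the point about profiles, here is a minimal sketch (assuming a Windows desktop app referencing System.Speech.dll). The shared recognizer picks up the active user's speech profile, including training data, without any explicit profile selection in code:

```csharp
using System;
using System.Speech.Recognition;

class ProfileDemo
{
    static void Main()
    {
        // SpeechRecognizer is the *shared* recognizer: it starts/uses
        // Windows Speech Recognition with the current user's trained profile.
        using (var recognizer = new SpeechRecognizer())
        {
            // Dictation benefits most from the profile's training data.
            recognizer.LoadGrammar(new DictationGrammar());
            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Heard: " + e.Result.Text);
            Console.ReadLine(); // keep the app alive while listening
        }
    }
}
```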

Adding commands to windows speech recognition using SAPI 5.4 with C#

I want to know how I can add commands to the Windows 7 shared speech recognition using SAPI 5.4 in C#. There is already a Microsoft application named WSRMacros, but I need to do it programmatically myself.
Any help and explanation would be extremely appreciated.
Just create an application that uses System.Speech.Recognition.SpeechRecognizer. This loads the shared recognizer, which starts Windows Speech Recognition.
Create your grammars, and off you go!
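A hedged sketch of that answer: loading a custom command grammar into the shared recognizer via System.Speech (the managed wrapper over SAPI). The command phrases here are placeholders:

```csharp
using System;
using System.Speech.Recognition;

class SharedCommands
{
    static void Main()
    {
        // Hypothetical commands; replace with your own phrases.
        var commands = new Choices("open notepad", "close window", "show help");
        var builder = new GrammarBuilder(commands);

        using (var recognizer = new SpeechRecognizer()) // shared recognizer
        {
            recognizer.LoadGrammar(new Grammar(builder) { Name = "MyCommands" });
            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Command: " + e.Result.Text);
            Console.ReadLine(); // keep listening until Enter is pressed
        }
    }
}
```

Because this uses the shared recognizer, Windows Speech Recognition must be running (it will be started if it isn't), and your commands coexist with the system-wide ones.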

How to implement Speech Recognition in C# ASP.NET?

I want to use speech recognition in C# ASP.NET where
1) I want to allow the user to speak a hotword.
2) Validate the hotword and, based on it, direct the user to the appropriate portal.
I have learned that the System.Speech namespace is not available for web applications and that this could be done using Silverlight. I'm new to Silverlight, so can anybody help me out or suggest some alternative ways to achieve this goal?
You can check this link out to start out with:
MSDN - Get Started with Speech Recognition
And this, too, for Silverlight speech API:
Having fun with Silverlight 4 Beta and the Speech APIs

Voice recognition in Windows

I need to perform actions in my desktop app when a user says certain things; for example, "Save Document", "Save As", or "Save changes" should raise its corresponding event.
But I don't want to rely on, or even implement, buttons (this is an app for my own use). So setting the AccessibleName or similar is not good enough; I need more control.
Is there a way to "listen" for commands in a Windows WPF Desktop app? Then raise an event when that command has been spoken?
Since everyone is posting links to the Microsoft Speech API, you might still be lost as to how to use it.
So here is a tutorial for using the Microsoft Speech API.
Have you seen the Microsoft Speech API, which supports speech recognition?
You are looking for the Microsoft Speech API. (The linked Get Started with Speech Recognition article includes a neat code example; though it targets WinForms, it should work for WPF too.) It allows you to create a grammar that can be recognized, with the input handled by your code.
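A minimal sketch of the grammar-based approach for the "Save Document" scenario, assuming a standard WPF window and the in-process SpeechRecognitionEngine (which keeps recognition private to the app rather than going through the shared Windows Speech Recognition UI). The event-handling bodies are placeholders:

```csharp
using System;
using System.Speech.Recognition;
using System.Windows;

public partial class MainWindow : Window
{
    private SpeechRecognitionEngine _engine;

    public MainWindow()
    {
        InitializeComponent();

        _engine = new SpeechRecognitionEngine();
        var commands = new Choices("Save Document", "Save As", "Save changes");
        _engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        _engine.SetInputToDefaultAudioDevice();
        _engine.SpeechRecognized += OnSpeechRecognized;
        _engine.RecognizeAsync(RecognizeMode.Multiple); // keep listening
    }

    private void OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        switch (e.Result.Text)
        {
            case "Save Document": /* raise your save event here */ break;
            case "Save As":       /* raise your save-as event here */ break;
            case "Save changes":  /* raise your save-changes event here */ break;
        }
    }
}
```

Because the grammar is restricted to your command phrases, recognition accuracy for them is much higher than with free dictation.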
I'm looking into adding speech recognition to my fork of the Hotspotizer Kinect-based app (http://github.com/birbilis/hotspotizer). After some searching, I see you can't mark up the actionable UI elements with related speech commands in order to simulate user actions on them, as one would expect if speech input were integrated into WPF. I'm thinking of making a XAML markup extension to do that, unless someone can point to pre-existing work on this that I could reuse.
The following links should be useful:
http://www.wpf-tutorial.com/audio-video/speech-recognition-making-wpf-listen/
http://www.c-sharpcorner.com/uploadfile/mahesh/programming-speech-in-wpf-speech-recognition/
http://blogs.msdn.com/b/rlucero/archive/2012/01/17/speech-recognition-exploring-grammar-based-recognition.aspx
https://msdn.microsoft.com/en-us/library/hh855387.aspx (make use of Kinect mic array audio input)
http://kin-educate.blogspot.gr/2012/06/speech-recognition-for-kinect-easy-way.html
https://channel9.msdn.com/Series/KinectQuickstart/Audio-Fundamentals
https://msdn.microsoft.com/en-us/library/hh855359.aspx?f=255&MSPPError=-2147217396#Software_Requirements
https://www.microsoft.com/en-us/download/details.aspx?id=27225
https://www.microsoft.com/en-us/download/details.aspx?id=27226
http://www.redmondpie.com/speech-recognition-in-a-c-wpf-application/
http://www.codeproject.com/Articles/55383/A-WPF-Voice-Commanded-Database-Management-Applicat
http://www.codeproject.com/Articles/483347/Speech-recognition-speech-to-text-text-to-speech-a
http://www.c-sharpcorner.com/uploadfile/nipuntomar/speech-to-text-in-wpf/
http://www.w3.org/TR/speech-grammar/
https://msdn.microsoft.com/en-us/library/hh361625(v=office.14).aspx
https://msdn.microsoft.com/en-us/library/hh323806.aspx
https://msdn.microsoft.com/en-us/library/system.speech.recognition.speechrecognitionengine.requestrecognizerupdate.aspx
http://blogs.msdn.com/b/rlucero/archive/2012/02/03/speech-recognition-using-multiple-grammars-to-improve-recognition.aspx

Text 2 Speech Application to Read User Activities in Windows OS

I am working on a C# application that will implement normal text-to-speech functionality using the SpeechSynthesizer. Another feature I want to implement is something similar to the Narrator accessibility tool that comes with Windows: the application should be able to run in the background and read out information on whatever the mouse points at in any application.
Does anyone know of any library I can call or implement that will make this possible? I am using Visual Studio 2010 Professional.
Thanks
Take a look at the System.Speech namespace - it should contain everything you will need.
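A sketch of the text-to-speech half using System.Speech.Synthesis. (Reading whatever the mouse hovers over would additionally require UI Automation, e.g. the System.Windows.Automation namespace, which is not shown here.)

```csharp
using System.Speech.Synthesis;

class NarratorDemo
{
    static void Main()
    {
        using (var synth = new SpeechSynthesizer())
        {
            // Speak through the default audio device; Speak() blocks
            // until finished (use SpeakAsync for background reading).
            synth.SetOutputToDefaultAudioDevice();
            synth.Speak("Hello, this text is spoken aloud.");
        }
    }
}
```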
