Voice recognition in Windows - C#

I need to perform actions in my desktop app when a user says certain things; for example, saying "Save Document", "Save As", or "Save changes" should raise the corresponding event.
But I don't want to rely on, or even implement, buttons (this is an app just for my own use), so setting the AccessibleName or similar is not good enough. I need more control.
Is there a way to "listen" for commands in a Windows WPF desktop app and then raise an event when a command has been spoken?

Since everyone is posting links to the Microsoft Speech API, you might still be lost as to how to use it, so here is a tutorial for using the Microsoft Speech API.

Have you seen the Microsoft Speech API, which supports speech recognition?

You are looking for the Microsoft Speech API (this link is a "Get Started with Speech Recognition" article with a neat code example; although it targets WinForms, it should work for WPF too). It allows you to create a grammar which can be recognized and the input handled.
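To make the grammar approach concrete, here is a minimal sketch (not the linked article's code) using the in-process SpeechRecognitionEngine from System.Speech.Recognition. The class name, event, phrases, and confidence threshold are illustrative, and it assumes a reference to the System.Speech assembly plus an installed en-US recognizer:

```csharp
// Minimal sketch: grammar-based command recognition with System.Speech.
using System;
using System.Globalization;
using System.Speech.Recognition;

public class VoiceCommands : IDisposable
{
    private readonly SpeechRecognitionEngine _engine;

    // Raised with the recognized command text, e.g. "Save Document".
    public event Action<string> CommandSpoken;

    public VoiceCommands()
    {
        _engine = new SpeechRecognitionEngine(new CultureInfo("en-US"));

        // Restrict recognition to the commands we care about.
        var commands = new Choices("Save Document", "Save As", "Save changes");
        _engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)) { Name = "FileCommands" });

        _engine.SpeechRecognized += (s, e) =>
        {
            if (e.Result.Confidence > 0.6) // illustrative threshold
                CommandSpoken?.Invoke(e.Result.Text);
        };

        _engine.SetInputToDefaultAudioDevice();
        _engine.RecognizeAsync(RecognizeMode.Multiple); // keep listening
    }

    public void Dispose() => _engine.Dispose();
}

// Usage, e.g. in a WPF window's constructor:
// var voice = new VoiceCommands();
// voice.CommandSpoken += cmd => { if (cmd == "Save Document") SaveDocument(); };
```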

I'm looking into adding speech recognition to my fork of the Hotspotizer Kinect-based app (http://github.com/birbilis/hotspotizer). After some searching, I see that you can't mark up the actionable UI elements with related speech commands in order to simulate user actions on them, as one would expect if speech input were integrated into WPF. I'm thinking of making a XAML markup extension to do that, unless someone can point to pre-existing work on this that I could reuse (a rough sketch of an attached-property variant of the idea follows the links below).
The following links should be useful:
http://www.wpf-tutorial.com/audio-video/speech-recognition-making-wpf-listen/
http://www.c-sharpcorner.com/uploadfile/mahesh/programming-speech-in-wpf-speech-recognition/
http://blogs.msdn.com/b/rlucero/archive/2012/01/17/speech-recognition-exploring-grammar-based-recognition.aspx
https://msdn.microsoft.com/en-us/library/hh855387.aspx (make use of Kinect mic array audio input)
http://kin-educate.blogspot.gr/2012/06/speech-recognition-for-kinect-easy-way.html
https://channel9.msdn.com/Series/KinectQuickstart/Audio-Fundamentals
https://msdn.microsoft.com/en-us/library/hh855359.aspx?f=255&MSPPError=-2147217396#Software_Requirements
https://www.microsoft.com/en-us/download/details.aspx?id=27225
https://www.microsoft.com/en-us/download/details.aspx?id=27226
http://www.redmondpie.com/speech-recognition-in-a-c-wpf-application/
http://www.codeproject.com/Articles/55383/A-WPF-Voice-Commanded-Database-Management-Applicat
http://www.codeproject.com/Articles/483347/Speech-recognition-speech-to-text-text-to-speech-a
http://www.c-sharpcorner.com/uploadfile/nipuntomar/speech-to-text-in-wpf/
http://www.w3.org/TR/speech-grammar/
https://msdn.microsoft.com/en-us/library/hh361625(v=office.14).aspx
https://msdn.microsoft.com/en-us/library/hh323806.aspx
https://msdn.microsoft.com/en-us/library/system.speech.recognition.speechrecognitionengine.requestrecognizerupdate.aspx
http://blogs.msdn.com/b/rlucero/archive/2012/02/03/speech-recognition-using-multiple-grammars-to-improve-recognition.aspx
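As mentioned above, here is a rough, untested sketch of the "mark up UI elements with speech commands" idea as a WPF attached property. SpeechCommands.Phrase and SpeechCommands.Start() are hypothetical names, not an existing library; one shared SpeechRecognitionEngine (System.Speech must be referenced) raises the matching button's Click event when its phrase is heard:

```csharp
// Rough sketch: <Button Content="Save" local:SpeechCommands.Phrase="Save Document"/>
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Speech.Recognition;
using System.Windows;
using System.Windows.Controls.Primitives;

public static class SpeechCommands
{
    private static readonly SpeechRecognitionEngine Engine =
        new SpeechRecognitionEngine(new CultureInfo("en-US"));
    private static readonly Dictionary<string, WeakReference<ButtonBase>> Targets =
        new Dictionary<string, WeakReference<ButtonBase>>(StringComparer.OrdinalIgnoreCase);

    public static readonly DependencyProperty PhraseProperty =
        DependencyProperty.RegisterAttached("Phrase", typeof(string), typeof(SpeechCommands),
            new PropertyMetadata(null, OnPhraseChanged));

    public static void SetPhrase(DependencyObject d, string value) => d.SetValue(PhraseProperty, value);
    public static string GetPhrase(DependencyObject d) => (string)d.GetValue(PhraseProperty);

    private static void OnPhraseChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Just remember the phrase -> button mapping; the grammar is built in Start().
        if (d is ButtonBase button && e.NewValue is string phrase)
            Targets[phrase] = new WeakReference<ButtonBase>(button);
    }

    // Call once (e.g. from the window's Loaded event) after XAML has registered all phrases.
    public static void Start()
    {
        if (Targets.Count == 0) return;

        Engine.LoadGrammar(new Grammar(new GrammarBuilder(new Choices(Targets.Keys.ToArray()))));
        Engine.SpeechRecognized += (s, e) =>
        {
            if (Targets.TryGetValue(e.Result.Text, out var weak) && weak.TryGetTarget(out var button))
            {
                // Raise the Click routed event on the UI thread (handlers attached to Click
                // will run; use the button's automation peer for a full automation-style invoke).
                button.Dispatcher.Invoke(() =>
                    button.RaiseEvent(new RoutedEventArgs(ButtonBase.ClickEvent)));
            }
        };
        Engine.SetInputToDefaultAudioDevice();
        Engine.RecognizeAsync(RecognizeMode.Multiple);
    }
}
```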

Related

Call Functions on Voice Commands In Android Unity

I am making a flashlight application in Unity (C#). The application is almost complete; I just want to add a voice-command feature so that when I say "ON" the flashlight turns on, and when I say "OFF" it turns off. The application is for Android devices. I have seen several tutorials about calling functions on voice commands, but they were all for the Windows platform only. Please help me if you know something about doing this on Android. Thanks.
I have not used any speech recognition tools, but it's not very difficult to implement if you can create a Java plugin and use it to call the native functions. Anyway, here are a few SDKs I have found:
You can check out the PocketSphinx demos for speech recognition.
https://github.com/cmusphinx/pocketsphinx
https://github.com/cmusphinx/pocketsphinx-android-demo
Here is a repo I found which uses AndroidSpeechRecognition.
https://github.com/gsssrao/UnityAndroidSpeechRecognition
Programmer has given a nice explanation of implementing voice recognition natively:
How to add Speech Recognition to Unity project?
Then there is the Watson SDK for Unity; it seems to be cloud-based, but you can check this one out:
https://github.com/watson-developer-cloud/unity-sdk
And if you don't mind paying, there is a plugin called Android SpeakNow that you can grab from the Asset Store:
https://assetstore.unity.com/packages/tools/integration/android-speaknow-16781
These are some cloud-based packages from the Asset Store. I doubt you will need them for this, but they are listed here for anyone who may require them at some point:
https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625
https://assetstore.unity.com/packages/tools/integration/yandex-cloud-speech-recognition-vr-ar-mobile-desktop-75155
And finally there is DictationRecognizer, but as of Unity 2018.2 it is available only on Windows 10 by default, so it is out of the question here. My best bet would be CMU Sphinx or implementing it natively, which I believe would be more suitable for your needs. Check them out, try to implement one or two, and let us know whether you were successful.
If anyone can add more links to voice recognition SDKs, feel free to do so. That would be really great.
If you only need ON and OFF voice inputs, you can use the code from the following answer:
Speech to text in Unity
If you need exact speech recognition, then refer to the code here:
Speech recognition in Unity
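For completeness, here is a rough, untested sketch of the "call the native recognizer without a separate Java plugin" route: driving android.speech.SpeechRecognizer from Unity C# through AndroidJavaObject/AndroidJavaProxy. The Android class and method names come from the Android framework, but the Unity-side wiring and the SetTorch hook are illustrative assumptions; it needs the RECORD_AUDIO permission and a device with Google's speech service:

```csharp
// Rough sketch (untested): native Android speech recognition from Unity C#.
using UnityEngine;

public class VoiceFlashlight : MonoBehaviour
{
    private AndroidJavaObject activity;
    private AndroidJavaObject recognizer;

    // Proxy implementing the android.speech.RecognitionListener interface.
    private class Listener : AndroidJavaProxy
    {
        private readonly VoiceFlashlight owner;
        public Listener(VoiceFlashlight owner) : base("android.speech.RecognitionListener")
        {
            this.owner = owner;
        }

        // Called by Android with the recognition results.
        public void onResults(AndroidJavaObject results)
        {
            var words = results.Call<AndroidJavaObject>(
                "getStringArrayList", "results_recognition"); // SpeechRecognizer.RESULTS_RECOGNITION
            int count = words.Call<int>("size");
            for (int i = 0; i < count; i++)
            {
                string word = words.Call<string>("get", i).ToUpperInvariant();
                if (word.Contains("OFF")) owner.SetTorch(false);
                else if (word.Contains("ON")) owner.SetTorch(true);
            }
        }

        // The interface has more callbacks; empty bodies keep the proxy happy.
        public void onReadyForSpeech(AndroidJavaObject bundle) { }
        public void onBeginningOfSpeech() { }
        public void onRmsChanged(float rmsdB) { }
        public void onBufferReceived(byte[] buffer) { }
        public void onEndOfSpeech() { }
        public void onError(int error) { Debug.Log("Speech error: " + error); }
        public void onPartialResults(AndroidJavaObject partialResults) { }
        public void onEvent(int eventType, AndroidJavaObject bundle) { }
    }

    public void StartListening()
    {
        activity = new AndroidJavaClass("com.unity3d.player.UnityPlayer")
            .GetStatic<AndroidJavaObject>("currentActivity");

        // SpeechRecognizer must be created and used on the Android UI thread.
        activity.Call("runOnUiThread", new AndroidJavaRunnable(() =>
        {
            recognizer = new AndroidJavaClass("android.speech.SpeechRecognizer")
                .CallStatic<AndroidJavaObject>("createSpeechRecognizer", activity);
            recognizer.Call("setRecognitionListener", new Listener(this));

            var intent = new AndroidJavaObject("android.content.Intent",
                "android.speech.action.RECOGNIZE_SPEECH");
            intent.Call<AndroidJavaObject>("putExtra",
                "android.speech.extra.LANGUAGE_MODEL", "free_form");
            recognizer.Call("startListening", intent);
        }));
    }

    private void SetTorch(bool on)
    {
        // Hook your existing flashlight toggle in here.
        Debug.Log("Flashlight " + (on ? "ON" : "OFF"));
    }
}
```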

Windows Speech Recognition - Using native commands/dialogs in API

I have just started using Windows Speech Recognition (in Windows 8.1) and the System.Speech API (in C#), which seems to tie into its underlying engine.
However, the API doesn't seem to have access to the semantic commands and dialogs, such as "Correct [word]", "Correct that", or "Spell [word]", which pull up the Alternates dialog and the Spelling dialog (attractive, easy-to-use screens that add a lot of power/value).
Is there any way to either get access to these commands/dialogs in the API OR to tie my application into the external Windows Speech Recognition application to support these advanced functions?

WPF UI in WinForms

Is there some way that I can customize WinForms just like I can in WPF? Some code that would let me customize it through some sort of grid?
Buttons, tree views, and similar things that would make customization easier, unlike in WinForms where everything is so dull.
WPF doesn't support the speech lib, which is why I'm asking this; otherwise I'd go for WPF.
On this website, it is generally a good idea to ask for what you actually want, instead of asking about what you think the solution is. It seems to me as if what you really want is to use some sort of text-to-speech functionality with WPF... so why didn't you just ask 'can I use text to speech with WPF?' or something similar?
To answer that question, yes, you can do that with WPF. Please take a look at the following links:
WPF Text To Speech UI
How to use the Speech Synthesizer in WPF
Using Speech Synthesis in a WPF Application
Speech Basics-WPF C# Sample
Plenty more are available online.
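If all you need is to speak text from a WPF window, a minimal sketch looks like the following; it assumes a reference to the System.Speech assembly and a hypothetical Speak_Click button handler wired up in the XAML:

```csharp
// Minimal sketch: text-to-speech in a WPF code-behind using System.Speech.Synthesis.
using System.Speech.Synthesis;
using System.Windows;

public partial class MainWindow : Window
{
    private readonly SpeechSynthesizer _synth = new SpeechSynthesizer();

    public MainWindow()
    {
        InitializeComponent();
        _synth.SetOutputToDefaultAudioDevice();
    }

    // Wired to a hypothetical <Button Click="Speak_Click"> in the XAML.
    private void Speak_Click(object sender, RoutedEventArgs e)
    {
        _synth.SpeakAsync("Hello from WPF"); // non-blocking; use Speak(...) to block
    }
}
```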

Text 2 Speech Application to Read User Activities in Windows OS

I am working on an application in C# that will implement normal text-to-speech functionality using the Speech Synthesizer. Another piece of functionality I want to implement is something similar to the Narrator accessibility tool that comes with Windows: the application should be able to run in the background and read out information on whatever the mouse points at in any application.
Does anyone know of a library I can call or implement that would make this possible? I am using Visual Studio 2010 Professional.
Thanks
Take a look at the System.Speech namespace - it should contain everything you will need.
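System.Speech covers the speaking part; for the "read whatever the mouse points at" part, one option (not mentioned in the answer above) is UI Automation, with references to UIAutomationClient, UIAutomationTypes, and WindowsBase. Here is a rough sketch that polls the cursor position and speaks the name of the element under it; the polling interval and console-program shape are illustrative:

```csharp
// Rough sketch: speak the UI Automation name of the element under the mouse.
using System;
using System.Runtime.InteropServices;
using System.Speech.Synthesis;
using System.Timers;
using System.Windows;
using System.Windows.Automation;

class HoverReader
{
    [DllImport("user32.dll")]
    static extern bool GetCursorPos(out POINT pt);

    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X; public int Y; }

    static readonly SpeechSynthesizer Synth = new SpeechSynthesizer();
    static string lastSpoken = "";

    static void Main()
    {
        var timer = new Timer(500);          // poll twice a second
        timer.Elapsed += (s, e) => ReadUnderCursor();
        timer.Start();
        Console.ReadLine();                  // keep the background app alive
    }

    static void ReadUnderCursor()
    {
        try
        {
            GetCursorPos(out POINT pt);
            AutomationElement element = AutomationElement.FromPoint(new Point(pt.X, pt.Y));
            string name = element?.Current.Name;
            if (!string.IsNullOrEmpty(name) && name != lastSpoken)
            {
                lastSpoken = name;
                Synth.SpeakAsync(name);      // read it out without blocking the timer
            }
        }
        catch (ElementNotAvailableException) { /* element vanished between calls */ }
    }
}
```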

How to control Microsoft Speech Recognition app?

I want to know if it's possible to control "Microsoft Speech Recognition" using c#.
Is it possible, for instance, to simulate the click on "On: Listen to everything I say" programmatically using C# or Python?
JRobert had the right idea.
If you were using C++, then you would call ISpRecognizer::SetRecoState(SPRST_ACTIVE), and then, if you're running on Windows 7, QI the ISpRecognizer for ISpRecognizer3 and call ISpRecognizer3::SetActiveCategory(NULL) to force the recognizer into the ON state.
But, since you're using C#, you should use System.Speech.Recognition.SpeechRecognizer and set the State property to Listening. (Note that this will not, as far as I know, switch from Sleep to On.)
Here's Microsoft's Speech API documentation, and an example in Python.
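For reference, here is a minimal C# sketch of hooking the shared recognizer from a WPF window, assuming a reference to System.Speech. Creating a SpeechRecognizer starts the Windows Speech Recognition UI if it is not already running; note that, as far as I can tell, SpeechRecognizer.State is read-only, so the sketch only reads it and toggles Enabled instead, and per the caveat above none of this switches the recognizer from Sleep to On:

```csharp
// Minimal sketch: using the shared Windows Speech Recognition engine from C#.
using System.Diagnostics;
using System.Speech.Recognition;
using System.Windows;

public partial class MainWindow : Window
{
    private readonly SpeechRecognizer _shared = new SpeechRecognizer();

    public MainWindow()
    {
        InitializeComponent();

        _shared.Enabled = true;                      // process this app's grammars
        _shared.LoadGrammar(new DictationGrammar()); // accept free-form speech

        _shared.StateChanged += (s, e) =>
            Debug.WriteLine("Recognizer state: " + e.RecognizerState);
        _shared.SpeechRecognized += (s, e) =>
            Debug.WriteLine("Heard: " + e.Result.Text);

        Debug.WriteLine("Current state: " + _shared.State);
    }
}
```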
