I am working on a C# application that uses the SAPI COM component.
In the following code snippet, how can I tell the recognizer to start recognition based on the grammar and the wav file? Thanks.
ISpRecognizer sre = new SpInprocRecognizerClass();
ISpRecoContext context = null;
sre.CreateRecoContext(out context);
ISpRecoGrammar grammar = null;
context.CreateGrammar(1, out grammar);
grammar.LoadCmdFromFile(@"c:\grammar", SPLOADOPTIONS.SPLO_STATIC);
grammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
SpFileStreamClass fs = new SpFileStreamClass();
fs.Open(@"c:\1.wav", SpeechStreamFileMode.SSFMOpenForRead, false);
((SpInprocRecognizerClass)sre).AudioInputStream = fs;
You're almost there.
sre.SetRecoState(SPRECOSTATE.SPRST_ACTIVE);
should do the trick.
I am using C# System.Speech, and I have a limited number of sentences that I want to recognize. Here is the code:
SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
String[] Sentences = File.ReadAllLines(samplePath);
Choices sentences = new Choices();
sentences.Add(Sentences);
GrammarBuilder gBuilder = new GrammarBuilder(sentences);
Grammar g = new Grammar(gBuilder);
g.Enabled = true;
recognizer.LoadGrammar(g);
try
{
recognizer.SetInputToWaveFile(filePath);
RecognitionResult result = recognizer.Recognize();
String ret = result.Text;
recognizer.Dispose();
return ret;
}
catch (InvalidOperationException exception) { }
return "";
This code throws an exception when I give it some wav file, and the reason for the exception is that it can't find a match among the sample sentences. Can I force it so it must select one sentence?
You are getting a NullReferenceException because your .wav file's format differs from what System.Speech.Recognition.SpeechRecognitionEngine assumes by default when you use the SetInputToWaveFile method.
In order to change the read format you should use the SetInputToAudioStream method instead:
using (FileStream stream = new FileStream("C:\\3.wav", FileMode.Open))
{
recognizer.SetInputToAudioStream(stream, new SpeechAudioFormatInfo(5000, AudioBitsPerSample.Sixteen, AudioChannel.Stereo));
RecognitionResult result = recognizer.Recognize();
string ret = result.Text;
}
This way it reads your .wav file as it is really encoded: stereo, 16 bits per sample, with 5000 samples per second.
Note: this solved the problem for me FOR YOUR FILE; other files may need different format parameters.
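If you are not sure how a given .wav file is actually encoded, you can read the numbers straight out of its RIFF header instead of guessing. A minimal sketch (plain C#, no speech APIs involved; it assumes a canonical uncompressed RIFF/WAVE layout and simply walks chunks until it finds "fmt "):

```csharp
using System;
using System.IO;
using System.Text;

static class WavHeader
{
    // Walks the RIFF chunks of a .wav stream and returns the channel count,
    // sample rate and bits-per-sample from the "fmt " chunk -- the three
    // values a SpeechAudioFormatInfo needs to match.
    public static (short Channels, int SampleRate, short BitsPerSample) Read(Stream s)
    {
        var r = new BinaryReader(s);
        r.ReadBytes(12);                       // "RIFF", riff size, "WAVE"
        while (true)
        {
            string id = Encoding.ASCII.GetString(r.ReadBytes(4));
            int size = r.ReadInt32();
            if (id == "fmt ")
            {
                r.ReadInt16();                 // wFormatTag (1 = PCM)
                short channels = r.ReadInt16();
                int sampleRate = r.ReadInt32();
                r.ReadInt32();                 // average bytes per second
                r.ReadInt16();                 // block align
                short bits = r.ReadInt16();
                return (channels, sampleRate, bits);
            }
            r.ReadBytes(size);                 // skip unrelated chunks
        }
    }
}
```

For the file in the answer above, this should report 2 channels, 5000 samples per second and 16 bits, i.e. exactly the values passed to SpeechAudioFormatInfo.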
I'm trying to write code to read aloud an incoming Toast (this was trivial in WP8.1).
I have this so far:
Using MediaElement doesn't seem to work (code runs but no audio), either on the phone or in the emulator.
Using BackgroundMediaPlayer works in the emulator but not on the phone
I've tried both from the UI thread (MediaElement only works on the UI thread) and BackgroundMediaPlayer from the thread that handles the incoming toast
var mediaElement = new MediaElement();
using (var tts = new SpeechSynthesizer())
{
using (var ttsStream = await tts.SynthesizeSsmlToStreamAsync(ssml))
{
//BackgroundMediaPlayer.Current.SetStreamSource(ttsStream);
mediaElement.SetSource(ttsStream, ttsStream.ContentType);
mediaElement.Play();
}
}
I'm obviously missing something simple here, but I'm out of ideas how to make this work.
The SSML is correct; I think it's probably something to do with scoping and threads.
var synth = new SpeechSynthesizer();
var voice = SpeechSynthesizer.DefaultVoice;
var newuserText = TheMessage;
var stream = await synth.SynthesizeTextToStreamAsync(newuserText);
var mediaElement = new MediaElement();
mediaElement.SetSource(stream, stream.ContentType);
mediaElement.Play();
As the XNA SoundEffect is no longer available in the Windows Runtime API (for developing Universal Apps), I need something similar to play multiple audio streams at the same time.
Requirements:
Play the same audio file multiple times, simultaneously.
Previous Silverlight implementation with SoundEffect:
// Play sound 10 times, sound can be played together.
// i.e. First sound continues playing while second sound starts playing.
for (int i = 0; i < 10; i++)
{
Stream stream = TitleContainer.OpenStream("sounds/Ding.wav");
SoundEffect effect = SoundEffect.FromStream(stream);
FrameworkDispatcher.Update();
effect.Play();
// Wait a while before playing again.
Thread.Sleep(500);
}
SoundEffect supports multiple SoundEffectInstance objects (up to 16, I think) being played simultaneously.
The standard MediaElement API only supports 1 audio stream for Windows Phone 8.1.
I bumped into this: https://github.com/rajenki/audiohelper which uses the XAudio2 API but it doesn't seem to support simultaneous audio either.
Solved. I used SharpDX. Huge thanks to the author here: http://www.hoekstraonline.net/2013/01/13/how-to-play-a-wav-sound-file-with-directx-in-c-for-windows-8/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-play-a-wav-sound-file-with-directx-in-c-for-windows-8
Here is the code to the solution:
Initialization:
xAudio = new XAudio2();
var masteringVoice = new MasteringVoice(xAudio);
var nativeFileStream = new NativeFileStream("Assets/Ding.wav", NativeFileMode.Open, NativeFileAccess.Read, NativeFileShare.Read);
stream = new SoundStream(nativeFileStream);
waveFormat = stream.Format;
buffer = new AudioBuffer
{
Stream = stream.ToDataStream(),
AudioBytes = (int)stream.Length,
Flags = BufferFlags.EndOfStream
};
Event handler:
var sourceVoice = new SourceVoice(xAudio, waveFormat, true);
sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
sourceVoice.Start();
Note that SharpDX's officially provided sample code does not use NativeFileStream, but it is required to make this work.
I am using the MediaCapture class to record voice in AAC-LC/M4A format. I am trying to play the same audio data in a MediaElement, but it does not play the M4A audio. However, if I record the voice in WAV format, the MediaElement plays the audio without any issues. I have tried all the MIME types I could find for AAC-LC/M4A audio.
Here is the player code:
var inMemoryRas = new InMemoryRandomAccessStream();
var writeStream = inMemoryRas.AsStreamForWrite();
await writeStream.WriteAsync(audioData, 0, audioData.Length);
await writeStream.FlushAsync();
inMemoryRas.Seek(0);
mMediaPlayer_.AudioCategory = AudioCategory.ForegroundOnlyMedia;
String mimeType_ = "";
mMediaPlayer_.SetSource(inMemoryRas, mimeType_);
mMediaPlayer_.AutoPlay = true;
mMediaPlayer_.Play();
I tried the following set of MIME types, still no help:
audio/mpeg, audio/mp4, audio/aac, video/mp4, audio/m4a
Again, if I record the audio in WAV format it plays without any issues.
Here is the recorder code:
MediaEncodingProfile recordProfile = null;
recordProfile = MediaEncodingProfile.CreateM4a(Windows.Media.MediaProperties.AudioEncodingQuality.Low);
mRecordingStream_ = new InMemoryRandomAccessStream();
await m_mediaCaptureMgr.StartRecordToStreamAsync(recordProfile, mRecordingStream_);
I'd appreciate any help on this. Thanks in advance.
I need to turn a text into speech and then save it as wav file.
The following C# code uses the System.Speech namespace in the .NET Framework.
You must add a reference to the System.Speech assembly before using it, because Visual Studio does not reference it automatically.
SpeechSynthesizer ss = new SpeechSynthesizer();
ss.Volume = 100;
ss.SelectVoiceByHints(VoiceGender.Female, VoiceAge.Adult);
ss.SetOutputToWaveFile(@"C:\MyAudioFile.wav");
ss.Speak("Hello World");
I hope this is relevant and helpful.
This is from a few moments' play, so caveat emptor. Worked well for me. I did notice that SpFileStream (which doesn't implement IDisposable, thus the try/finally) prefers absolute paths to relative. C#.
SpFileStream fs = null;
try
{
SpVoice voice = new SpVoice();
fs = new SpFileStream();
fs.Open(@"c:\hello.wav", SpeechStreamFileMode.SSFMCreateForWrite, false);
voice.AudioOutputStream = fs;
voice.Speak("Hello world.", SpeechVoiceSpeakFlags.SVSFDefault);
}
finally
{
if (fs != null)
{
fs.Close();
}
}
And as I've found, to change the output format you write something like this:
SpeechAudioFormatInfo info = new SpeechAudioFormatInfo(6, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
// ...same synthesizer setup as above goes here...
ss.SetOutputToWaveFile(@"C:\MyAudioFile.wav", info);
That's pretty easy and comprehensible.
Cool .net
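As a sanity check on whatever format you pick, the size of an uncompressed PCM wav stream is easy to predict: sample rate × channels × (bits per sample ÷ 8) bytes per second of audio. A small sketch (the numbers below are illustrative, not taken from the answers above):

```csharp
using System;

static class PcmSize
{
    // Bytes of raw PCM data produced per second of audio for a given format.
    public static int BytesPerSecond(int sampleRate, int channels, int bitsPerSample)
        => sampleRate * channels * (bitsPerSample / 8);
}
```

For example, 16 kHz mono 16-bit audio works out to PcmSize.BytesPerSecond(16000, 1, 16) = 32000 bytes per second, so a minute of synthesized speech in that format is roughly 1.9 MB plus the 44-byte header.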