I'm trying to create a simple audio looper to record my own songs. For those of you who don't know what an audio looper is: you first record one layer of a song (say, a piece of piano), then you loop that piece while adding a piece of guitar on top of the existing layer. I already know how to mix the two together, so that's not the issue. But I would like to save the layer I add each time on top of the loop, which means I need to detect whether the audio file I'm looping has restarted, so I can save the new layer to a second file, mix the two together afterwards (since you can't write two different streams into one file), and loop the result.
For the looping I am using the SoundPlayer class from System.Media:
Player = new SoundPlayer(path);
Player.PlayLooping();
For recording the input I use NAudio:
int deviceNumber = sourceList.SelectedItems[0].Index;

sourceStream = new NAudio.Wave.WaveIn();
sourceStream.DeviceNumber = deviceNumber;
// Record at 44.1 kHz with however many channels the device supports
sourceStream.WaveFormat = new NAudio.Wave.WaveFormat(44100, NAudio.Wave.WaveIn.GetCapabilities(deviceNumber).Channels);

// The DataAvailable handler writes each incoming buffer to waveWriter
sourceStream.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(sourceStream_DataAvailable);
waveWriter = new NAudio.Wave.WaveFileWriter(open.FileName, sourceStream.WaveFormat);

sourceStream.StartRecording();
The recording should stop every time the audio file reaches its end.
Thanks in advance!
You need to take a look at the NAudio documentation.
One safety feature I often add when recording WAV is a limit on the file size. WAV files grow quickly and can't be over 4 GB in any case. Here I'll request that recording stops after 30 seconds:
waveIn.DataAvailable += (s, a) =>
{
    writer.Write(a.Buffer, 0, a.BytesRecorded);
    // AverageBytesPerSecond * 30 is the byte position reached after 30 seconds
    if (writer.Position > waveIn.WaveFormat.AverageBytesPerSecond * 30)
    {
        waveIn.StopRecording();
    }
};
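One thing this snippet doesn't show is cleanup: WaveFileWriter only finalizes the size fields in the RIFF header when it is disposed, so the usual pattern (a sketch, assuming waveIn and writer are accessible fields) is to do that in the RecordingStopped handler:
waveIn.RecordingStopped += (s, a) =>
{
    writer?.Dispose(); // finalizes the WAV header
    writer = null;
    waveIn.Dispose();
};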
I'm trying to use NAudio to record some sound in C# using WasapiLoopbackCapture and WaveFileWriter.
The problem is that after the recording is finished, the "size" field in the WAV/RIFF header is set to 0, rendering the file unplayable.
I'm using the following code:
WasapiLoopbackCapture CaptureInstance = null;
WaveFileWriter RecordedAudioWriter = null;

void StartSoundRecord()
{
    string outputFilePath = @"C:\RecordedSound.wav";
    // Redefine the capturer instance with a new instance of the LoopbackCapture class
    CaptureInstance = new WasapiLoopbackCapture();
    // Redefine the audio writer instance with the given configuration
    RecordedAudioWriter = new WaveFileWriter(outputFilePath, CaptureInstance.WaveFormat);
    // When the capturer receives audio, start writing the buffer into the mentioned file
    CaptureInstance.DataAvailable += (s, a) =>
    {
        // Write buffer into the file of the writer instance
        RecordedAudioWriter.Write(a.Buffer, 0, a.BytesRecorded);
    };
    // When the capturer stops, dispose of the capturer and writer instances
    CaptureInstance.RecordingStopped += (s, a) =>
    {
        RecordedAudioWriter.Dispose();
        RecordedAudioWriter = null;
        CaptureInstance.Dispose();
    };
    // Start audio recording!
    CaptureInstance.StartRecording();
}

void StopSoundRecord()
{
    if (CaptureInstance != null)
    {
        CaptureInstance.StopRecording();
    }
}
(Borrowed from: https://ourcodeworld.com/articles/read/702/how-to-record-the-audio-from-the-sound-card-system-audio-with-c-using-naudio-in-winforms )
Which I'm testing with simply:
StartSoundRecord();
Thread.Sleep(10000);
StopSoundRecord();
What am I missing? Why isn't WaveFileWriter writing the size field? I have also tried calling the Flush() and Close() methods before disposing, but it makes no difference.
Sure, I could write a method that finds out the size of the file and manually writes it into the final file, but that seems unnecessary.
Found the solution.
Calling RecordedAudioWriter.Flush() after every Write made it work just fine.
I don't know whether doing so might be inefficient (I assume Flush blocks until the data is written to disk), but for my application that is not an issue.
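In code, that's one extra line in the DataAvailable handler:
CaptureInstance.DataAvailable += (s, a) =>
{
    RecordedAudioWriter.Write(a.Buffer, 0, a.BytesRecorded);
    // Flush rewrites the size fields in the RIFF header after each write,
    // so the file stays playable even if disposal never gets to run.
    RecordedAudioWriter.Flush();
};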
My code creates a PowerPoint presentation and three audio files.
I want to know the length of those audio files after the presentation is created so I use:
double duration = 0;
WindowsMediaPlayer wmp = new WindowsMediaPlayer();
IWMPMedia mediainfo = wmp.newMedia(file);
duration = mediainfo.duration;
wmp.close();
return duration;
To create the audio files I use:
public void CreateAudio(string text)
{
    y++;
    synth = new SpeechSynthesizer();
    AudioStream = new FileStream(folder + @"\audio\a" + y.ToString() + ".wav", FileMode.OpenOrCreate);
    //synth.SpeakCompleted += new EventHandler<SpeakCompletedEventArgs>(synth_SpeakCompleted);
    synth.SetOutputToWaveStream(AudioStream);
    synth.SpeakAsync(text);
}

private void synth_SpeakCompleted(object sender, SpeakCompletedEventArgs e)
{
    synth.Dispose();
}
The problem is that after the presentation is created, only the shortest audio file returns a length; the rest return 0.
If I check manually, I see three audio files, each with a valid length property, but the program doesn't read it for some reason.
My attempt at disposing the synth didn't change anything, nor did using synth.Speak, so I must be doing something terribly wrong in how I use and manage my objects and memory, but I don't know what or where.
If I use the same length-checking code on audio files created elsewhere, it works perfectly; it only goes wrong when I create the files and then want to check their length in the same run.
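For what it's worth, a minimal sketch of one possible fix, assuming the root cause is that AudioStream is never closed (SpeakAsync returns immediately, so the file is still open, with an incomplete header, when its duration is read):
public void CreateAudio(string text)
{
    y++;
    var synth = new SpeechSynthesizer();
    var audioStream = new FileStream(folder + @"\audio\a" + y.ToString() + ".wav", FileMode.Create);
    synth.SetOutputToWaveStream(audioStream);
    synth.SpeakCompleted += (s, e) =>
    {
        synth.SetOutputToNull(); // lets the synthesizer finalize the WAV header
        audioStream.Dispose();   // flushes and closes the file
        synth.Dispose();
    };
    synth.SpeakAsync(text);
}
Capturing locals rather than the shared fields matters here, because three files are created back to back and the completed handlers fire later.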
What I'm trying to do is convert a WaveIn microphone input to a different WaveFormat, and add that to a MixingSampleProvider.
WaveIn waveIn = new WaveIn(this.Handle);
waveIn.BufferMilliseconds = 25;
waveIn.DataAvailable += waveIn_DataAvailable;

// create wave provider (waveIn_DataAvailable feeds the recorded buffers into it)
BufferedWaveProvider waveProvider = new BufferedWaveProvider(waveIn.WaveFormat);

WaveFormat commonWaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
MixingSampleProvider msp = new MixingSampleProvider(commonWaveFormat);

// convert the microphone format to the mixer format
WaveFormatConversionStream wfcs = new WaveFormatConversionStream(commonWaveFormat, new WaveProviderToWaveStream(waveProvider));
msp.AddMixerInput(wfcs);

// create wave output to speakers
waveOut = new WaveOut();
waveOut.DesiredLatency = 100;
waveOut.Init(msp);
where WaveProviderToWaveStream is a class from this answer.
However, that gives me the following exception at WaveFormatConversionStream.
NAudio.MmException: AcmNotPossible calling acmStreamOpen
I tried
msp.AddMixerInput(new MediaFoundationResampler(waveProvider, commonWaveFormat).ToSampleProvider());
which worked, but produced too large a delay between talking into the microphone and hearing the output.
It will be much easier for you to simply specify the format you want to record in on the WaveIn object itself. (Just set the WaveFormat property before starting to record. It should still be 16-bit PCM, but specify the sample rate and channel count you want.)
Then you can turn your BufferedWaveProvider into an ISampleProvider with the ToSampleProvider extension method, which will let you add it to the MixingSampleProvider.
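A minimal sketch of that suggestion, reusing the names from the question (the inline DataAvailable handler and the ReadFully flag are assumptions on my part):
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Record straight into the mixer's sample rate and channel count, as 16-bit PCM
var waveIn = new WaveIn();
waveIn.BufferMilliseconds = 25;
waveIn.WaveFormat = new WaveFormat(44100, 16, 2);

var waveProvider = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, a) => waveProvider.AddSamples(a.Buffer, 0, a.BytesRecorded);

var msp = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
msp.ReadFully = true; // keep playing silence when the buffer runs dry
msp.AddMixerInput(waveProvider.ToSampleProvider()); // 16-bit PCM to float, no ACM conversion needed

var waveOut = new WaveOut { DesiredLatency = 100 };
waveOut.Init(msp);
waveOut.Play();
waveIn.StartRecording();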
As the XNA SoundEffect class is no longer available in the Windows Runtime API (for developing Universal Apps), I need something similar to play multiple audio streams at the same time.
Requirements:
Play the same audio file multiple times, simultaneously.
Previous Silverlight implementation with SoundEffect:
// Play sound 10 times; sounds can be played together,
// i.e. the first sound continues playing while the second sound starts playing.
for (int i = 0; i < 10; i++)
{
    Stream stream = TitleContainer.OpenStream("sounds/Ding.wav");
    SoundEffect effect = SoundEffect.FromStream(stream);
    FrameworkDispatcher.Update();
    effect.Play();
    // Wait a while before playing again.
    Thread.Sleep(500);
}
SoundEffect supports multiple SoundEffectInstances (up to 16, I think) being played simultaneously.
The standard MediaElement API only supports 1 audio stream for Windows Phone 8.1.
I bumped into this: https://github.com/rajenki/audiohelper which uses the XAudio2 API but it doesn't seem to support simultaneous audio either.
Solved. I used SharpDX. Huge thanks to the author here: http://www.hoekstraonline.net/2013/01/13/how-to-play-a-wav-sound-file-with-directx-in-c-for-windows-8/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-play-a-wav-sound-file-with-directx-in-c-for-windows-8
Here is the code to the solution:
Initialization:
xAudio = new XAudio2();
var masteringVoice = new MasteringVoice(xAudio);
var nativeFileStream = new NativeFileStream("Assets/Ding.wav", NativeFileMode.Open, NativeFileAccess.Read, NativeFileShare.Read);
stream = new SoundStream(nativeFileStream);
waveFormat = stream.Format;
buffer = new AudioBuffer
{
    Stream = stream.ToDataStream(),
    AudioBytes = (int)stream.Length,
    Flags = BufferFlags.EndOfStream
};
Event handler:
var sourceVoice = new SourceVoice(xAudio, waveFormat, true);
sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
sourceVoice.Start();
The code officially provided in SharpDX's sample does not use NativeFileStream, but it is required to make this work.
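For completeness, here is how the same shared buffer can be triggered repeatedly (mirroring the 10-iteration Silverlight loop above); each SourceVoice plays independently, which is what makes the sounds overlap:
for (int i = 0; i < 10; i++)
{
    var sourceVoice = new SourceVoice(xAudio, waveFormat, true);
    sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
    sourceVoice.Start();
    // Wait a while before playing again.
    Thread.Sleep(500);
}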
I might be wrong on this, but it appears that my WAV file in NAudio is playing asynchronously, and if I call it from a library, I can hear the WAV start, but it exits (and stops playing) almost immediately.
If I run it from a Form (which stays open) it is fine. But if I try to run it from code as a background thread, I only hear a small fragment in the beginning. It SEEMS like the method is exiting before the sound is finished playing.
I am using the text-to-speech SpeechSynthesizer to create the WAV in a MemoryStream:
SpeechSynthesizer m_speechSynth = new SpeechSynthesizer();
m_speechSynth.Volume = 100;
m_speechSynth.Rate = 2;
MemoryStream waveStream = new MemoryStream();
m_speechSynth.SetOutputToWaveStream(waveStream);
m_speechSynth.Speak(txtToRead);
m_speechSynth.SetOutputToNull();
waveStream.Position = 0; // reset counter to start
That works fine. I then create an instance of a class I wrote that wraps NAudio:
VoiceThroughNetAudio netAudio = new VoiceThroughNetAudio(waveStream, "WAV");
I'm trying to keep the code sample here easy to follow, but what this class does is simply initialize things and give me a higher-level handle for controlling an NAudio player. Inside the VoiceThroughNetAudio constructor, I first create a reader stream from the MemoryStream:
readerStream = new WaveFileReader(memStream);
readerStream = WaveFormatConversionStream.CreatePcmStream(readerStream);
readerStream = new BlockAlignReductionStream(readerStream);
And attach a Volume Control (so I can amplify it):
_volumeStream = new WaveChannel32(Reader);
This all works fine when I call it from my WinForm Code:
MemoryStream waveStream = new MemoryStream();
ReadStringAmplified(readTextField.Text, waveStream); // TTS to waveStream
VoiceThroughNetAudio PlaySound = new VoiceThroughNetAudio(waveStream, "WAV");
PlaySound.Volume = 9;
PlaySound.Play();
But when I move this code into the Voice class instead, the sound starts to play and then stops almost immediately, as if an event is started and the method exits before the sound finishes. If so, is there a good way of making the method wait until the sound has finished? (I tried putting in a while loop to check the state, but that just spins forever; it's commented out below.)
Here is how the code looks when I move it to the Voice class. If I instantiate it from the form and run it all there, I get the truncated sound:
SpeechSynthesizer m_speechSynth = new SpeechSynthesizer();
m_speechSynth.Volume = 100;
m_speechSynth.Rate = 2;
MemoryStream waveStream = new MemoryStream();
m_speechSynth.SetOutputToWaveStream(waveStream);
m_speechSynth.Speak(txtToRead);
m_speechSynth.SetOutputToNull();
waveStream.Position = 0; // reset counter to start
VoiceThroughNetAudio netAudio = new VoiceThroughNetAudio(waveStream, "WAV");
netAudio.Volume = 9;
TimeSpan ts = netAudio.TotalDuration;
string len = ts.Milliseconds.ToString();
netAudio.Play();
//VoiceThroughNetAudio.PlayBackState chkState = netAudio.State;
//while (netAudio.State != VoiceThroughNetAudio.PlayBackState.Stopped)
//{
// chkState = netAudio.State;
//}
It seems like I am missing something simple, but I can't seem to get this to work in the Voice Class, only from the Form.
The most likely explanations for your problem are:
- You are trying to play from a MemoryStream that is still being written to
- Your output device has gone out of scope and been garbage collected while it is still playing
Try subscribing to the PlaybackStopped event of your IWavePlayer to see when and why it is firing (it can contain an exception if there is a problem).
Also, I very much doubt you need both a WaveFormatConversionStream and a BlockAlignReductionStream, so simplify your code by removing those.
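A minimal sketch of both suggestions combined (assuming a WaveOutEvent player over the memory stream; VoiceThroughNetAudio's internals aren't shown in the question, so this bypasses that class):
using System.IO;
using System.Threading;
using NAudio.Wave;

static void PlayToEnd(Stream waveStream)
{
    using (var reader = new WaveFileReader(waveStream))
    using (var output = new WaveOutEvent()) // stays referenced for the whole playback
    {
        var done = new ManualResetEventSlim(false);
        output.PlaybackStopped += (s, e) =>
        {
            // e.Exception is non-null if playback stopped because of an error
            done.Set();
        };
        output.Init(reader);
        output.Play();
        done.Wait(); // block until the file has played to the end
    }
}
WaveOutEvent uses its own event-driven playback thread rather than window-handle callbacks, which is also why it behaves better when called from a library or background thread.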