C# Frequency Retrieval

What I need to do is calculate the frequency of the microphone input. I'm using IWaveProvider for this, with its implemented Read(). The buffer always has a size of 8820 elements, and something also seems to be going wrong with the conversion from byte array to float array (the FloatBuffer property part).
Here are some of the important bits...
This is where I start my recording:
private void InitializeSoundRecording()
{
WaveIn waveIn = new WaveIn();
waveIn.DeviceNumber = 0;
waveIn.DataAvailable += (s, e) => this.waveIn_DataAvailable(s, e);
waveIn.RecordingStopped += (s, e) => this.waveIn_RecordingStopped(s, e);
waveIn.WaveFormat = new WaveFormat(44100, 1);
waveIn.StartRecording();
}
When the DataAvailable event handler is called, the following is executed:
private void waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
WaveBuffer wb = new WaveBuffer(e.Buffer.Length);
IWaveProvider iWaveProvider = new PitchDetector(new WaveInProvider(sender as WaveIn), new WaveBuffer(e.Buffer));
iWaveProvider.Read(wb, 0, e.Buffer.Length);
PitchDetector pd = iWaveProvider as PitchDetector;
this.ShowPitch(pd.Pitch);
}
And lastly, this is the "actual" important bit:
private const int FLOAT_BUFFER_SIZE = 8820;
private IWaveProvider source;
private WaveBuffer waveBuffer;
private int sampleRate;
private float[] fftBuffer;
private float[] prevBuffer;
public float Pitch { get; private set; }
public WaveFormat WaveFormat { get { return this.source.WaveFormat; } }
internal PitchDetector(IWaveProvider waveProvider, WaveBuffer waveBuffer = null)
{
this.source = waveProvider;
this.sampleRate = waveProvider.WaveFormat.SampleRate;
this.waveBuffer = waveBuffer;
}
/// <summary>
/// UNSAFE METHOD!
/// </summary>
/// <param name="input"></param>
/// <returns></returns>
private unsafe float[] ByteArrayToFloatArray(byte[] input)
{
float[] fb = new float[FLOAT_BUFFER_SIZE];
unsafe
{
fixed (byte* ptrBuffer = input)
{
float* ptrFloatBuffer = (float*)ptrBuffer;
for (int i = 0; i < FLOAT_BUFFER_SIZE; i++)
{
fb[i] = *ptrFloatBuffer;
ptrFloatBuffer++;
}
}
}
return fb;
}
public int Read(byte[] buffer, int offset = 0, int count = 0)
{
if (this.waveBuffer == null || this.waveBuffer.MaxSize < count)
this.waveBuffer = new WaveBuffer(count);
int readBytes = this.source.Read(this.waveBuffer, 0, count);
if (readBytes > 0) readBytes = count;
int frames = readBytes / sizeof(float);
this.Pitch = this.DeterminePitch(this.waveBuffer.FloatBuffer, frames);
return frames * 4;
}
Strangely enough, when it enters the constructor, waveBuffer contains some data (255, 1, 0, etc.), but when I check the "buffer" parameter of Read(), it's entirely 0. Every element.
Also, out of curiosity: why does Read() have a buffer parameter that isn't actually used anywhere in the method (I got that piece of code from one of your articles)?
Any help to resolve this issue would be greatly appreciated! I've been at this for quite a while already, but I can make no sense of it.
Thanks,
Alain

It is not clear which article you are referring to, and I am not familiar with this library. However, the Read method is clearly reading in your time-series (or other) data. From this, the buffer parameter you speak of is likely the padding length that you want to place on either end of your data set.
This padding is known as 'zero padding': it pads your recorded signal with zeros (placing n zeros on either end of the signal, where n is set according to the radix used). This allows one to use a longer FFT, which will produce a longer FFT result vector.
A longer FFT result has more frequency bins that are more closely spaced in frequency. But it will essentially provide the same result as a high-quality sinc interpolation of a shorter non-zero-padded FFT of the original data.
This might result in a smoother-looking spectrum when plotted without further interpolation.
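As a rough illustration, here is a minimal sketch of padding a block before the FFT (it pads at the end, which is the common convention, and assumes a radix-2 FFT, hence the power-of-two target length):
// Zero-pad a recorded block up to the next power of two so that a
// longer radix-2 FFT can be taken over it.
static float[] ZeroPad(float[] signal)
{
int fftLength = 1;
while (fftLength < signal.Length)
fftLength *= 2; // next power of two >= signal.Length
var padded = new float[fftLength]; // the extra samples stay zero
Array.Copy(signal, padded, signal.Length);
return padded;
}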
For further information, see
https://dsp.stackexchange.com/questions/741/why-should-i-zero-pad-a-signal-before-taking-the-fourier-transform
I hope this helps.

This is not really an answer to your question, but I wrote a safe, generic alternative to your array conversion function.
using System;
using System.Runtime.InteropServices;
public static class Extensions
{
public static TDestination[] Transform<TSource, TDestination>(
this TSource[] source)
where TSource : struct
where TDestination : struct
{
if (source.Length == 0)
{
return new TDestination[0];
}
var sourceSize = Marshal.SizeOf(typeof(TSource));
var destinationSize = Marshal.SizeOf(typeof(TDestination));
var byteLength = source.Length * sourceSize;
int remainder;
var destinationLength = Math.DivRem(
byteLength,
destinationSize,
out remainder);
if (remainder > 0)
{
destinationLength++;
}
var destination = new TDestination[destinationLength];
Buffer.BlockCopy(source, 0, destination, 0, byteLength);
return destination;
}
}
Which, obviously, you could use like:
var bytes = new byte[] { 1, 1, 2, 3, 5, 8, 13, 21 };
var floats = bytes.Transform<byte, float>();
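One caveat: Buffer.BlockCopy only accepts arrays of primitive types, so despite the generic struct constraints this will throw an ArgumentException for non-primitive structs. For byte-to-float conversions like the one above, it works fine.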


NAudio: Keeping the last (e.g.) 5 s of recorded audio and saving it at any time

I'm coding a soundboard using NAudio as the library to manage my audio. One of the features I want to implement is the ability to continuously record one (or many) audio inputs and to be able to save them at any time.
The way I see this being possible is by keeping a circular buffer of the last, e.g., 5 s of samples picked up by the audio input.
I also want to avoid keeping all the data from the point recording started, as I don't want to overuse memory.
I've tried many approaches to this problem:
Using a circular buffer fed with data from the "DataAvailable" event;
Using "Queue<ISampleProvider>" and "BufferedWaveProvider" to add the buffer data and transform it into a sample;
Using two "BufferedWaveProvider"s and alternating which one was being filled depending on which was full;
Using two wave inputs and timers to alternate which one was recording.
I tried using an array of bytes as a circular buffer, filling it from the "DataAvailable" event of "WaveInEvent": the "WaveInEventArgs" has a buffer, so I added the data from it to the circular buffer.
private int _start = 0, _end = 0;
private bool _filled = false;
private byte[] _buffer; // the size was set in the constructor
// its an equation to figure out how many samples
// a certain time needs.
private void _dataAvailable(object sender, WaveInEventArgs e)
{
for (int i = 0; i < e.BytesRecorded; i++)
{
if (_filled)
{
_start = _end + 1 > _buffer.Length - 1 ? _end + 1 : 0;
}
if (_end > _buffer.Length - 1 && !_filled) _filled = true;
_end = _end > _buffer.Length - 1 ? _end + 1 : 0;
_buffer[_end] = e.Buffer[i];
}
}
Some of the attempts I made kind of worked, but most of the time they would only work for the first 5 seconds (I am aware that using a "BufferedWaveProvider" can cause that issue). I think that part of the problem is that a certain amount of data is required at the beginning of the buffer, and as soon as the buffer starts overwriting that data, the audio player doesn't understand it anymore.
Another very possible cause of the problem is that I'm just starting to use NAudio and don't quite understand it fully yet.
I've been stuck with this issue for a while now and I appreciate all the help anyone can give me.
I have some more code that I could add, but I thought this question was already getting long.
Thank you in advance!
If anyone else wants to do something similar, I'm leaving the whole class. Use it as you want.
using System;
using NAudio.Wave;
using System.Diagnostics;
public class AudioRecorder
{
public WaveInEvent MyWaveIn;
public readonly double RecordTime;
private WaveOutEvent _wav = new WaveOutEvent();
private bool _isFull = false;
private int _pos = 0;
private byte[] _buffer;
private bool _isRecording = false;
/// <summary>
/// Creates a new recorder with a buffer
/// </summary>
/// <param name="recordTime">Time to keep in buffer (in seconds)</param>
public AudioRecorder(double recordTime)
{
RecordTime = recordTime;
MyWaveIn = new WaveInEvent();
MyWaveIn.DataAvailable += DataAvailable;
MyWaveIn.RecordingStopped += Stopped; // without this, the Stopped handler below never fires
_buffer = new byte[(int)(MyWaveIn.WaveFormat.AverageBytesPerSecond * RecordTime)];
}
/// <summary>
/// Starts recording
/// </summary>
public void StartRecording()
{
if (!_isRecording)
{
try
{
MyWaveIn.StartRecording();
}
catch (InvalidOperationException)
{
Debug.WriteLine("Already recording!");
}
}
_isRecording = true;
}
/// <summary>
/// Stops recording
/// </summary>
public void StopRecording()
{
MyWaveIn.StopRecording();
_isRecording = false;
}
/// <summary>
/// Play currently recorded data
/// </summary>
public void PlayRecorded()
{
if (_wav.PlaybackState == PlaybackState.Stopped)
{
var buff = new BufferedWaveProvider(MyWaveIn.WaveFormat);
var bytes = GetBytesToSave();
buff.AddSamples(bytes, 0, bytes.Length);
_wav.Init(buff);
_wav.Play();
}
}
/// <summary>
/// Stops replay
/// </summary>
public void StopReplay()
{
if (_wav != null) _wav.Stop();
}
/// <summary>
/// Save to disk
/// </summary>
/// <param name="fileName"></param>
public void Save(string fileName)
{
// dispose the writer so the WAV header is finalized and the file handle released
using (var writer = new WaveFileWriter(fileName, MyWaveIn.WaveFormat))
{
var buff = GetBytesToSave();
writer.Write(buff, 0, buff.Length);
writer.Flush();
}
}
private void DataAvailable(object sender, WaveInEventArgs e)
{
for (int i = 0; i < e.BytesRecorded; ++i)
{
// save the data
_buffer[_pos] = e.Buffer[i];
// move the current position (advances by 1 OR resets to zero if the length of the buffer was reached)
_pos = (_pos + 1) % _buffer.Length;
// flag if the buffer is full (will only set it from false to true the first time that it reaches the full length of the buffer)
_isFull |= (_pos == 0);
}
}
public byte[] GetBytesToSave()
{
int length = _isFull ? _buffer.Length : _pos;
var bytesToSave = new byte[length];
int byteCountToEnd = _isFull ? (_buffer.Length - _pos) : 0;
if (byteCountToEnd > 0)
{
// bytes from the current position to the end
Array.Copy(_buffer, _pos, bytesToSave, 0, byteCountToEnd);
}
if (_pos > 0)
{
// bytes from the start to the current position
Array.Copy(_buffer, 0, bytesToSave, byteCountToEnd, _pos);
}
return bytesToSave;
}
/// <summary>
/// Starts recording if WaveIn stopped
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void Stopped(object sender, StoppedEventArgs e)
{
Debug.WriteLine("Recording stopped!");
if (e.Exception != null) Debug.WriteLine(e.Exception.Message);
if (_isRecording)
{
MyWaveIn.StartRecording();
}
}
}
The code inside your _dataAvailable method is strange to me. I would simply write bytes from start to end and then again from the start, and so forth. Then, when you want to get the actual bytes in order to save them, create a new array that goes from the current position to the end, plus from the start back to the current position. Check my code below.
private int _pos = 0;
private bool _isFull = false;
private byte[] _buffer; // intialized in the constructor with the correct length
private void _dataAvailable(object sender, WaveInEventArgs e)
{
for(int i = 0; i < e.BytesRecorded; ++i) {
// save the data
_buffer[_pos] = e.Buffer[i];
// move the current position (advances by 1 OR resets to zero if the length of the buffer was reached)
_pos = (_pos + 1) % _buffer.Length;
// flag if the buffer is full (will only set it from false to true the first time that it reaches the full length of the buffer)
_isFull |= (_pos == 0);
}
}
public byte[] GetBytesToSave()
{
int length = _isFull ? _buffer.Length : _pos;
var bytesToSave = new byte[length];
int byteCountToEnd = _isFull ? (_buffer.Length - _pos) : 0;
if(byteCountToEnd > 0) {
// bytes from the current position to the end
Array.Copy(_buffer, _pos, bytesToSave, 0, byteCountToEnd);
}
if(_pos > 0) {
// bytes from the start to the current position
Array.Copy(_buffer, 0, bytesToSave, byteCountToEnd, _pos);
}
return bytesToSave;
}
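For context, a minimal wiring sketch (hypothetical; it assumes these two members live in a recorder class next to a WaveInEvent, with the buffer sized for 5 seconds as in the AudioRecorder above):
var waveIn = new WaveInEvent();
// hold the last 5 seconds of audio at the device's byte rate
_buffer = new byte[waveIn.WaveFormat.AverageBytesPerSecond * 5];
waveIn.DataAvailable += _dataAvailable;
waveIn.StartRecording();
// ... later, at any point, grab the most recent audio:
byte[] lastFiveSeconds = GetBytesToSave();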

Recording with AudioQueue and MonoTouch: static sound

I have written a small program in MonoTouch to record sound from the mic of my iPhone 4s using an InputAudioQueue.
I save the recorded data in an array and feed this buffer to my audio player for playback (using OutputAudioQueue).
When playing it back, it's just some stuttering garbage / static sound. I have tried filling the buffer with sine waves before playback and then it sounds good, so I guess the problem is in the recording, not the playback. Can anyone help me see what is wrong? (Code below)
public class AQRecorder
{
private const int CountAudioBuffers = 3;
private const int AudioBufferLength = 22050;
private const int SampleRate = 44100;
private const int BitsPerChannel = 16;
private const int Channels = 1;
private const int MaxRecordingTime = 5;
private AudioStreamBasicDescription audioStreamDescription;
private InputAudioQueue inputQueue;
private short[] rawData;
private int indexNextRawData;
public AQRecorder ()
{
this.audioStreamDescription.Format = AudioFormatType.LinearPCM;
this.audioStreamDescription.FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger |
AudioFormatFlags.LinearPCMIsPacked;
this.audioStreamDescription.SampleRate = AQRecorder.SampleRate;
this.audioStreamDescription.BitsPerChannel = AQRecorder.BitsPerChannel;
this.audioStreamDescription.ChannelsPerFrame = AQRecorder.Channels;
this.audioStreamDescription.BytesPerFrame = (AQRecorder.BitsPerChannel / 8) * AQRecorder.Channels;
this.audioStreamDescription.FramesPerPacket = 1;
this.audioStreamDescription.BytesPerPacket = audioStreamDescription.BytesPerFrame * audioStreamDescription.FramesPerPacket;
this.audioStreamDescription.Reserved = 0;
}
public void Start ()
{
int totalBytesToRecord = this.audioStreamDescription.BytesPerFrame * AQRecorder.SampleRate * AQRecorder.MaxRecordingTime;
this.rawData = new short[totalBytesToRecord / sizeof(short)];
this.indexNextRawData = 0;
this.inputQueue = SetupInputQueue (this.audioStreamDescription);
this.inputQueue.Start ();
}
public void Stop ()
{
if (this.inputQueue.IsRunning)
{
this.inputQueue.Stop (true);
}
}
public short[] GetData ()
{
return this.rawData;
}
private InputAudioQueue SetupInputQueue (AudioStreamBasicDescription audioStreamDescription)
{
InputAudioQueue inputQueue = new InputAudioQueue (audioStreamDescription);
for (int count = 0; count < AQRecorder.CountAudioBuffers; count++)
{
IntPtr bufferPointer;
inputQueue.AllocateBuffer(AQRecorder.AudioBufferLength, out bufferPointer);
inputQueue.EnqueueBuffer(bufferPointer, AQRecorder.AudioBufferLength, null);
}
inputQueue.InputCompleted += HandleInputCompleted;
return inputQueue;
}
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
unsafe
{
short* shortPtr = (short*)e.IntPtrBuffer;
for (int count = 0; count < AQRecorder.AudioBufferLength; count += sizeof(short))
{
if (indexNextRawData >= this.rawData.Length)
{
this.inputQueue.Stop (true);
return;
}
this.rawData [indexNextRawData] = *shortPtr;
indexNextRawData++;
shortPtr++;
}
}
this.inputQueue.EnqueueBuffer(e.IntPtrBuffer, AQRecorder.AudioBufferLength, null);
}
}
OK, this might be too late, but I had the same problem of hearing only garbage sound, and I found the solution.
You cannot read the audio data directly from e.IntPtrBuffer. That pointer points to an AudioQueueBuffer object, not to the audio data itself. To read the audio data, make use of e.UnsafeBuffer, which gives you access to this object, and use its AudioData pointer. This is an IntPtr which you can cast (in an unsafe context) to a byte* or short*, and you have your audio data.
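Applied to the HandleInputCompleted method above, the fix might look roughly like this (a sketch; it assumes the MonoTouch binding's AudioQueueBuffer struct exposes AudioData and AudioDataByteSize, as the Xamarin/MonoTouch version does):
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
unsafe
{
// e.UnsafeBuffer points at the AudioQueueBuffer struct; AudioData is the
// pointer to the actual samples, AudioDataByteSize their length in bytes.
AudioQueueBuffer* aqBuffer = e.UnsafeBuffer;
short* samples = (short*)aqBuffer->AudioData;
int sampleCount = (int)aqBuffer->AudioDataByteSize / sizeof(short);
for (int i = 0; i < sampleCount; i++)
{
if (this.indexNextRawData >= this.rawData.Length)
{
this.inputQueue.Stop (true);
return;
}
this.rawData [this.indexNextRawData] = samples [i];
this.indexNextRawData++;
}
}
this.inputQueue.EnqueueBuffer (e.IntPtrBuffer, AQRecorder.AudioBufferLength, null);
}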
Best regards
Alex

Playing a sine wave through XAudio2

I'm making an audio player using XAudio2. We are streaming data in packets of 640 bytes, at a sample rate of 8000 Hz and a sample depth of 16 bits. We are using SlimDX to access XAudio2.
But when playing the sound, we are noticing that the quality is bad. This, for example, is a 3 kHz sine curve, captured with Audacity.
I have condensed the audio player to the bare basics, but the audio quality is still bad. Is this a bug in XAudio2, SlimDX, or my code, or is it simply an artifact that occurs when one goes from 8 kHz to 44.1 kHz? The last seems unreasonable, as we also generate PCM WAV files which are played back perfectly by Windows Media Player.
The following is the basic implementation, which generates the broken Sine.
public partial class MainWindow : Window
{
private XAudio2 device = new XAudio2();
private WaveFormatExtensible format = new WaveFormatExtensible();
private SourceVoice sourceVoice = null;
private MasteringVoice masteringVoice = null;
private Guid KSDATAFORMAT_SUBTYPE_PCM = new Guid("00000001-0000-0010-8000-00aa00389b71");
private AutoResetEvent BufferReady = new AutoResetEvent(false);
private PlayBufferPool PlayBuffers = new PlayBufferPool();
public MainWindow()
{
InitializeComponent();
Closing += OnClosing;
format.Channels = 1;
format.BitsPerSample = 16;
format.FormatTag = WaveFormatTag.Extensible;
format.BlockAlignment = (short)(format.Channels * (format.BitsPerSample / 8));
format.SamplesPerSecond = 8000;
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlignment;
format.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
}
private void OnClosing(object sender, CancelEventArgs cancelEventArgs)
{
sourceVoice.Stop();
sourceVoice.Dispose();
masteringVoice.Dispose();
PlayBuffers.Dispose();
}
private void button_Click(object sender, RoutedEventArgs e)
{
masteringVoice = new MasteringVoice(device);
PlayBuffer buffer = PlayBuffers.NextBuffer();
GenerateSine(buffer.Buffer);
buffer.AudioBuffer.AudioBytes = 640;
sourceVoice = new SourceVoice(device, format, VoiceFlags.None, 8);
sourceVoice.BufferStart += new EventHandler<ContextEventArgs>(sourceVoice_BufferStart);
sourceVoice.BufferEnd += new EventHandler<ContextEventArgs>(sourceVoice_BufferEnd);
sourceVoice.SubmitSourceBuffer(buffer.AudioBuffer);
sourceVoice.Start();
}
private void sourceVoice_BufferEnd(object sender, ContextEventArgs e)
{
BufferReady.Set();
}
private void sourceVoice_BufferStart(object sender, ContextEventArgs e)
{
BufferReady.WaitOne(1000);
PlayBuffer nextBuffer = PlayBuffers.NextBuffer();
nextBuffer.DataStream.Position = 0;
nextBuffer.AudioBuffer.AudioBytes = 640;
GenerateSine(nextBuffer.Buffer);
Result r = sourceVoice.SubmitSourceBuffer(nextBuffer.AudioBuffer);
}
private void GenerateSine(byte[] buffer)
{
double sampleRate = 8000.0;
double amplitude = 0.25 * short.MaxValue;
double frequency = 3000.0;
for (int n = 0; n < buffer.Length / 2; n++)
{
short[] s = { (short)(amplitude * Math.Sin((2 * Math.PI * n * frequency) / sampleRate)) };
Buffer.BlockCopy(s, 0, buffer, n * 2, 2);
}
}
}
public class PlayBuffer : IDisposable
{
#region Private variables
private IntPtr BufferPtr;
private GCHandle BufferHandle;
#endregion
#region Constructors
public PlayBuffer()
{
Index = 0;
Buffer = new byte[640 * 4]; // 640 bytes = 40 ms at 8 kHz, 16-bit mono
BufferHandle = GCHandle.Alloc(this.Buffer, GCHandleType.Pinned);
BufferPtr = BufferHandle.AddrOfPinnedObject(); // ToInt32() here would break on 64-bit
DataStream = new DataStream(BufferPtr, 640 * 4, true, false);
AudioBuffer = new AudioBuffer();
AudioBuffer.AudioData = DataStream;
}
public PlayBuffer(int index)
: this()
{
Index = index;
}
#endregion
#region Destructor
~PlayBuffer()
{
Dispose();
}
#endregion
#region Properties
protected int Index { get; private set; }
public byte[] Buffer { get; private set; }
public DataStream DataStream { get; private set; }
public AudioBuffer AudioBuffer { get; private set; }
#endregion
#region Public functions
public void Dispose()
{
if (AudioBuffer != null)
{
AudioBuffer.Dispose();
AudioBuffer = null;
}
if (DataStream != null)
{
DataStream.Dispose();
DataStream = null;
}
}
#endregion
}
public class PlayBufferPool : IDisposable
{
#region Private variables
private int _currentIndex = -1;
private PlayBuffer[] _buffers = new PlayBuffer[2];
#endregion
#region Constructors
public PlayBufferPool()
{
for (int i = 0; i < 2; i++)
Buffers[i] = new PlayBuffer(i);
}
#endregion
#region Desctructor
~PlayBufferPool()
{
Dispose();
}
#endregion
#region Properties
protected int CurrentIndex
{
get { return _currentIndex; }
set { _currentIndex = value; }
}
protected PlayBuffer[] Buffers
{
get { return _buffers; }
set { _buffers = value; }
}
#endregion
#region Public functions
public void Dispose()
{
for (int i = 0; i < Buffers.Length; i++)
{
if (Buffers[i] == null)
continue;
Buffers[i].Dispose();
Buffers[i] = null;
}
}
public PlayBuffer NextBuffer()
{
CurrentIndex = (CurrentIndex + 1) % Buffers.Length;
return Buffers[CurrentIndex];
}
#endregion
}
Some extra details:
This is used to replay recorded voice with various compression formats such as A-law, µ-law or TrueSpeech. The data is sent in small packets, decoded and sent to this player. This is the reason why we're using such a low sampling rate and such small buffers.
There are no problems with our data, however, as generating a WAV file with the data results in perfect replay by WMP or VLC.
edit: We have now "solved" this by rewriting the player in NAudio.
I'd still be interested in any input as to what is happening here. Is it our approach in the PlayBuffers, or is it simply a bug/limitation in DirectX or the wrappers? I tried using SharpDX instead of SlimDX, but that did not change the result.
It looks as if the upsampling is done without a proper anti-aliasing (reconstruction) filter. The cutoff frequency is far too high (above the original Nyquist frequency) and therefore a lot of the aliases are being preserved, resulting in output resembling piecewise-linear interpolation between the samples taken at 8000 Hz.
Although all your different options are doing an upconversion from 8kHz to 44.1kHz, the way in which they do that is important, and the fact that one library does it well is no proof that the upconversion is not the source of error in the other.
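To make that concrete (an illustrative calculation, not from the original answer): the spectral images of a 3 kHz tone sampled at 8 kHz sit at k * 8000 ± 3000 Hz, and a naive upsampler that does not filter them out leaves all of these inside the new 22.05 kHz Nyquist band.
// Spectral images of a 3 kHz tone sampled at 8 kHz that survive a naive
// (unfiltered) upsampling to 44.1 kHz:
double f0 = 3000, fs = 8000, newNyquist = 44100 / 2.0;
for (int k = 0; k * fs - f0 <= newNyquist; k++)
{
if (k > 0)
Console.WriteLine(k * fs - f0); // 5000, 13000, 21000 Hz
if (k * fs + f0 <= newNyquist)
Console.WriteLine(k * fs + f0); // 3000 (the tone itself), 11000, 19000 Hz
}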
It's been a while since I worked with sound and frequencies, but here is what I remember: you have a sample rate of 8000 Hz and want a sine frequency of 3000 Hz. So for 1 second you have 8000 samples, and in that second you want your sine to oscillate 3000 times. That is below the Nyquist frequency (half your sample rate), but only barely (see the Nyquist–Shannon sampling theorem), so I would not expect good quality here.
In fact: step through the GenerateSine method and you'll see that s[0] will contain the values 0, 5792, -8191, 5792, 0, -5792, 8191, -5792, 0, 5792...
Nonetheless, this doesn't explain the odd sine you recorded back, and I'm not sure how many samples the human ear needs to hear a "good" sine wave.
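A quick standalone check of those values:
// Reproduce the first samples GenerateSine produces
// (8000 Hz sample rate, 3000 Hz sine, amplitude 0.25 * short.MaxValue):
double sampleRate = 8000.0, frequency = 3000.0;
double amplitude = 0.25 * short.MaxValue;
for (int n = 0; n < 9; n++)
Console.Write((short)(amplitude * Math.Sin((2 * Math.PI * n * frequency) / sampleRate)) + " ");
// Prints: 0 5792 -8191 5792 0 -5792 8191 -5792 0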

Playing sine wave for unknown time

I've spent the whole day looking for some tutorial or piece of code, "just" to play a simple sine wave for an "infinite" time. I know it sounds a little crazy.
But I want to be able to change the frequency of the tone over time, for instance to increase it.
Imagine that I want to play tone A and increase it to C in "+5" frequency steps every 3 ms (it's really just an example), without gaps, and then stop the tone.
Is that possible? Or can you help me?
Use the NAudio library for audio output.
Make a notes wave provider:
class NotesWaveProvider : WaveProvider32
{
public NotesWaveProvider(Queue<Note> notes)
{
this.Notes = notes;
}
public readonly Queue<Note> Notes;
int sample = 0;
Note NextNote()
{
for (; ; )
{
if (Notes.Count == 0)
return null;
var note = Notes.Peek();
if (sample < note.Duration.TotalSeconds * WaveFormat.SampleRate)
return note;
Notes.Dequeue();
sample = 0;
}
}
public override int Read(float[] buffer, int offset, int sampleCount)
{
int sampleRate = WaveFormat.SampleRate;
for (int n = 0; n < sampleCount; n++)
{
var note = NextNote();
if (note == null)
buffer[n + offset] = 0;
else
buffer[n + offset] = (float)(note.Amplitude * Math.Sin((2 * Math.PI * sample * note.Frequency) / sampleRate));
sample++;
}
return sampleCount;
}
}
class Note
{
public float Frequency;
public float Amplitude = 1.0f;
public TimeSpan Duration = TimeSpan.FromMilliseconds(50);
}
Start playback:
WaveOut waveOut;
this.Notes = new Queue<Note>(new[] { new Note { Frequency = 1000 }, new Note { Frequency = 1100 } });
var waveProvider = new NotesWaveProvider(Notes);
waveProvider.SetWaveFormat(16000, 1); // 16kHz mono
waveOut = new WaveOut();
waveOut.Init(waveProvider);
waveOut.Play();
Add new notes:
void Timer_Tick(...)
{
if (Notes.Count < 10)
Notes.Enqueue(new Note { Frequency = 900 }); // Queue<T> uses Enqueue, not Add
}
PS: this code is an idea only; for real use, add multi-threaded locking etc.
Use NAudio and SineWaveProvider32: http://mark-dot-net.blogspot.com/2009/10/playback-of-sine-wave-in-naudio.html
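For reference, SineWaveProvider32 from that post looks roughly like this (reproduced from the linked article; treat it as a sketch):
public class SineWaveProvider32 : WaveProvider32
{
int sample;
public float Frequency { get; set; } // in Hz
public float Amplitude { get; set; } // 0.0 to 1.0
public override int Read(float[] buffer, int offset, int sampleCount)
{
int sampleRate = WaveFormat.SampleRate;
for (int n = 0; n < sampleCount; n++)
{
buffer[n + offset] = (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
sample++;
if (sample >= sampleRate) sample = 0;
}
return sampleCount;
}
}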
private WaveOut waveOut;
private void button1_Click(object sender, EventArgs e)
{
StartStopSineWave();
}
private void StartStopSineWave()
{
if (waveOut == null)
{
var sineWaveProvider = new SineWaveProvider32();
sineWaveProvider.SetWaveFormat(16000, 1); // 16kHz mono
sineWaveProvider.Frequency = 1000;
sineWaveProvider.Amplitude = 0.25f;
waveOut = new WaveOut();
waveOut.Init(sineWaveProvider);
waveOut.Play();
}
else
{
waveOut.Stop();
waveOut.Dispose();
waveOut = null;
}
}

Playing WAVE file in C# using DirectX and threading?

At the moment I'm trying to figure out how I can play a wave file in C# by filling up the secondary buffer with data from the wave file through threading, and then playing the wave file.
Any help or sample code I can use?
Thanks.
Sample code being used:
public delegate void PullAudio(short[] buffer, int length);
public class SoundPlayer : IDisposable
{
private Device soundDevice;
private SecondaryBuffer soundBuffer;
private int samplesPerUpdate;
private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
private Thread thread;
private PullAudio pullAudio;
private short channels;
private bool halted;
private bool running;
public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
{
this.channels = channels;
this.pullAudio = pullAudio;
this.soundDevice = new Device();
this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);
// Set up our wave format to 44,100Hz, with 16 bit resolution
WaveFormat wf = new WaveFormat();
wf.FormatTag = WaveFormatTag.Pcm;
wf.SamplesPerSecond = 44100;
wf.BitsPerSample = 16;
wf.Channels = channels;
wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;
this.samplesPerUpdate = 512;
// Create a buffer with 2 seconds of sample data
BufferDescription bufferDesc = new BufferDescription(wf);
bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
bufferDesc.ControlPositionNotify = true;
bufferDesc.GlobalFocus = true;
this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);
Notify notify = new Notify(this.soundBuffer);
fillEvent[0] = new AutoResetEvent(false);
fillEvent[1] = new AutoResetEvent(false);
// Set up two notification events, one at halfway, and one at the end of the buffer
BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
posNotify[0] = new BufferPositionNotify();
posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
posNotify[1] = new BufferPositionNotify();
posNotify[1].Offset = bufferDesc.BufferBytes - 1;
posNotify[1].EventNotifyHandle = fillEvent[1].Handle;
notify.SetNotificationPositions(posNotify);
this.thread = new Thread(new ThreadStart(SoundPlayback));
this.thread.Priority = ThreadPriority.Highest;
this.Pause();
this.running = true;
this.thread.Start();
}
public void Pause()
{
if (this.halted) return;
this.halted = true;
Monitor.Enter(this.thread);
}
public void Resume()
{
if (!this.halted) return;
this.halted = false;
Monitor.Pulse(this.thread);
Monitor.Exit(this.thread);
}
private void SoundPlayback()
{
lock (this.thread)
{
if (!this.running) return;
// Set up the initial sound buffer to be the full length
int bufferLength = this.samplesPerUpdate * 2 * this.channels;
short[] soundData = new short[bufferLength];
// Prime it with the first x seconds of data
this.pullAudio(soundData, soundData.Length);
this.soundBuffer.Write(0, soundData, LockFlag.None);
// Start it playing
this.soundBuffer.Play(0, BufferPlayFlags.Looping);
int lastWritten = 0;
while (this.running)
{
if (this.halted)
{
Monitor.Pulse(this.thread);
Monitor.Wait(this.thread);
}
// Wait on one of the notification events
WaitHandle.WaitAny(this.fillEvent, 3, true);
// Get the current play position (divide by two because we are using 16 bit samples)
int tmp = this.soundBuffer.PlayPosition / 2;
// Generate new sounds from lastWritten to tmp in the sound buffer
if (tmp == lastWritten)
{
continue;
}
else
{
soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
}
this.pullAudio(soundData, soundData.Length);
// Write in the generated data
soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);
// Save the position we were at
lastWritten = tmp;
}
}
}
public void Dispose()
{
this.running = false;
this.Resume();
if (this.soundBuffer != null)
{
this.soundBuffer.Dispose();
}
if (this.soundDevice != null)
{
this.soundDevice.Dispose();
}
}
}
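For what it's worth, a minimal sketch of a PullAudio callback that feeds 16-bit mono samples from a WAV file into the SoundPlayer above (the 44-byte canonical-header offset and file name are assumptions; real code should parse the RIFF chunks):
// Load the sample data of a canonical 16-bit PCM WAV file (44-byte header).
byte[] raw = File.ReadAllBytes("test.wav");
short[] waveData = new short[(raw.Length - 44) / 2];
Buffer.BlockCopy(raw, 44, waveData, 0, waveData.Length * 2);
int position = 0;
void FillBuffer(short[] buffer, int length)
{
for (int i = 0; i < length; i++)
{
buffer[i] = waveData[position];
position = (position + 1) % waveData.Length; // loop when the file ends
}
}
// new SoundPlayer(this, FillBuffer, 1);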
The concept is the same as the one I'm using, but I can't manage to get a set of wave byte[] data to play.
I have not done this.
But the first place I would look is XNA.
I know that the C# Managed DirectX project was ditched in favor of XNA, and I have found it to be good for graphics; I prefer using it to DirectX.
What is the reason you decided not to just use SoundPlayer, as per this MSDN entry below?
private SoundPlayer Player = new SoundPlayer();
private void loadSoundAsync()
{
// Note: You may need to change the location specified based on
// the location of the sound to be played.
this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
this.Player.LoadCompleted += Player_LoadCompleted; // wire up the handler below
this.Player.LoadAsync();
}
private void Player_LoadCompleted (
object sender,
System.ComponentModel.AsyncCompletedEventArgs e)
{
if (this.Player.IsLoadCompleted)
{
this.Player.PlaySync();
}
}
Usually I just load them all up in a thread or an async delegate, then Play or PlaySync them when needed.
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav.
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
