Playing live audio stream from byte[] packets - C#

I'm trying to play sound provided by an Ethernet microphone.
The device streams live audio via UDP packets, which I read in a network receiver thread:
MemoryStream msAudio = new MemoryStream();

private void process_stream(byte[] buffer)
{
    msAudio.Write(buffer, 0, buffer.Length);
}
process_stream is called from a task.
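For context, here is a minimal sketch of the kind of receiver loop that might call process_stream; the UdpClient usage and the port number are assumptions, since the question doesn't show the network code:

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Hypothetical receiver task; port 5000 stands in for the device's real port.
Task.Run(() =>
{
    using (var udp = new UdpClient(5000))
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (IsConnected)
        {
            byte[] packet = udp.Receive(ref remote); // blocks until a datagram arrives
            process_stream(packet);
        }
    }
});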
Then I have another task that plays the stream with NAudio (NAudio isn't mandatory):
while (IsConnected)
{
    msAudio.Position = 0;
    var waveFormat = new WaveFormat(8000, 16, 1); // Same format as the device
    using (WaveStream blockAlignedStream = new BlockAlignReductionStream(
        WaveFormatConversionStream.CreatePcmStream(
            new RawSourceWaveStream(msAudio, waveFormat))))
    {
        using (WaveOut waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()))
        {
            waveOut.Init(blockAlignedStream);
            waveOut.Play();
            while (waveOut.PlaybackState == PlaybackState.Playing)
            {
                System.Threading.Thread.Sleep(100);
            }
        }
    }
}
My problems are:
I hear a "toc toc toc" noise (about 4 times per second)
The audio is slowed down; the voice is distorted as if the bitrate were too low (but 8 kHz is correct)
The audio loops; I think I have to flush my stream, but I don't see where...
Thanks a lot for any advice.
P.S.: For reference, the original code works on Android using AudioTrack; the code is here.
P.S. 2: Here is the "image" of the audio noise that I get:
[screenshot of the noisy waveform not reproduced]
I don't like answering my own question... but I think my question was too specific.
So I resolved my problems by:
Using a BufferedWaveProvider in place of a MemoryStream
Doubling the sample rate (16 kHz)
Removing the first 4 bytes of each buffer
Now my code looks like:
public ctor() // constructor (as posted)
{
    var waveFormat = new WaveFormat(16000, 16, 1);
    buffer = new BufferedWaveProvider(waveFormat)
    {
        BufferDuration = TimeSpan.FromSeconds(10),
        DiscardOnBufferOverflow = true
    };
}

internal void OnDataReceived(byte[] currentFrame)
{
    if (mPlaying && mAudioTrack != null)
    {
        // Skip the 4-byte header; the count must exclude those bytes as well,
        // otherwise AddSamples reads past the end of the array
        buffer.AddSamples(currentFrame, 4, currentFrame.Length - 4);
    }
}

internal void ConfigureCodec()
{
    mAudioTrack = new WaveOut(WaveCallbackInfo.FunctionCallback());
    mAudioTrack.Init(buffer);
    if (mPlaying)
    {
        mAudioTrack.Play();
    }
}
No more playback thread needed.

Related

C#+VideoLan.LibVLC Sample Audio Stream and save it to disk

I am using VideoLan.VLC to get 20 seconds of an audio stream every 30 seconds.
I have a loop, something like:
private LibVLC libvlc = new LibVLC();
private MediaPlayer mediaPlayer = null;
private Media Media = null;

private string AudioSampleFileName
{
    get
    {
        return "audio_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".ts";
    }
}

public void Start()
{
    mediaPlayer = new MediaPlayer(libvlc);
    while (...)
    {
        Get_20_seconds_audio_sample();
        Wait_30_Seconds();
    }
}

public void Get_sample(Uri playPathUri, string FileName)
{
    var currentDirectory = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
    var destination = Path.Combine(currentDirectory, FileName);
    var mediaOptions = new string[]
    {
        ":sout=#file{dst=" + destination + ",channels=1,samplerate=16000}",
        ":sout-keep"
    };
    if (Media == null)
    {
        Media = new Media(libvlc, playPathUri, mediaOptions);
        mediaPlayer.Media = Media;
    }
    else
    {
        Media.AddOption(":sout=#file{dst=" + destination + ",channels=1,samplerate=16000}");
    }
    mediaPlayer.Play();
}

public void Get_20_seconds_audio_sample()
{
    Get_sample(RadioURI, AudioSampleFileName);
    Wait_20_seconds();
    Stop();
}

public void Stop()
{
    mediaPlayer.Stop();
}
The problem is that the radio streaming usually starts with a commercial that lasts about 25 seconds, and every sample plays the commercial. It seems that Stop() closes the stream until Play() is called again, which restarts it from the beginning. I tried to pause the audio but, well... it makes little sense to pause and play.
I can accept getting the commercial in the first sample only, but then I want the regular radio audio. Is there a way not to close the stream every time? I am not tied to the VideoLan dll, so I can start from scratch if you have a better way to do it.
Use Audio callbacks and save the audio stream yourself. Full sample:
class Program
{
    // This sample shows you how you can use SetAudioFormatCallback and SetAudioCallbacks. It does two things:
    // 1) Play the sound from the specified video using NAudio
    // 2) Extract the sound into a file using NAudio
    static void Main(string[] args)
    {
        Core.Initialize();

        using var libVLC = new LibVLC(enableDebugLogs: true);
        using var media = new Media(libVLC,
            new Uri("http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4"),
            ":no-video");
        using var mediaPlayer = new MediaPlayer(media);
        using var outputDevice = new WaveOutEvent();

        var waveFormat = new WaveFormat(8000, 16, 1);
        var writer = new WaveFileWriter("sound.wav", waveFormat);
        var waveProvider = new BufferedWaveProvider(waveFormat);
        outputDevice.Init(waveProvider);

        mediaPlayer.SetAudioFormatCallback(AudioSetup, AudioCleanup);
        mediaPlayer.SetAudioCallbacks(PlayAudio, PauseAudio, ResumeAudio, FlushAudio, DrainAudio);

        mediaPlayer.Play();
        mediaPlayer.Time = 20_000; // Seek the video 20 seconds in
        outputDevice.Play();

        Console.WriteLine("Press 'q' to quit. Press any other key to pause/play.");
        while (true)
        {
            if (Console.ReadKey().KeyChar == 'q')
                break;

            if (mediaPlayer.IsPlaying)
                mediaPlayer.Pause();
            else
                mediaPlayer.Play();
        }

        void PlayAudio(IntPtr data, IntPtr samples, uint count, long pts)
        {
            int bytes = (int)count * 2; // (16 bit, 1 channel)
            var buffer = new byte[bytes];
            Marshal.Copy(samples, buffer, 0, bytes);

            waveProvider.AddSamples(buffer, 0, bytes);
            writer.Write(buffer, 0, bytes);
        }

        int AudioSetup(ref IntPtr opaque, ref IntPtr format, ref uint rate, ref uint channels)
        {
            channels = (uint)waveFormat.Channels;
            rate = (uint)waveFormat.SampleRate;
            return 0;
        }

        void DrainAudio(IntPtr data)
        {
            writer.Flush();
        }

        void FlushAudio(IntPtr data, long pts)
        {
            writer.Flush();
            waveProvider.ClearBuffer();
        }

        void ResumeAudio(IntPtr data, long pts)
        {
            outputDevice.Play();
        }

        void PauseAudio(IntPtr data, long pts)
        {
            outputDevice.Pause();
        }

        void AudioCleanup(IntPtr opaque) { }
    }
}
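To adapt this sample to the original requirement (20 seconds of audio every 30 seconds) without stopping the stream, one option is to keep the player running and only open and close the WaveFileWriter around each sampling window. A minimal sketch, assuming the PlayAudio callback above is redirected to it; the SampleRecorder name and the file naming are mine, not part of the sample:

using System;
using NAudio.Wave;

// Hypothetical helper: gates file writing so the stream itself is never stopped.
class SampleRecorder
{
    private readonly object _lock = new object();
    private WaveFileWriter _writer; // null while not sampling

    // Call from the PlayAudio callback with every decoded buffer.
    public void Write(byte[] buffer, int bytes)
    {
        lock (_lock)
        {
            _writer?.Write(buffer, 0, bytes);
        }
    }

    public void StartSample(WaveFormat format)
    {
        lock (_lock)
        {
            _writer = new WaveFileWriter(
                "audio_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".wav", format);
        }
    }

    public void StopSample()
    {
        lock (_lock)
        {
            _writer?.Dispose();
            _writer = null;
        }
    }
}

Because the media player keeps decoding continuously, Stop()/Play() never run, so the pre-roll commercial is only heard once at the very start.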

Playback of FFMPEG Audio Using NAudio Has Lots of Noise

I have a video that I'm trying to play back using the FFMPEG framework, and NAudio to play back the audio frames. The video plays back just fine, but when I play back the audio using a BufferedWaveProvider, the underlying audio sounds correct yet it has a ton of static/noise. What's important to note in the attached code is that when the events fire from the managed C++ class, a C# class receives the event and then grabs the latest relevant AVFrame data. This works flawlessly for video, but like I said, when I try to play back the audio I get a bunch of noise. For now I've hardcoded the NAudio WaveFormat settings, but they should match the video's audio.
///////////////////////// C# /////////////////////////////
// Gets the size of the data from the audio AVFrame (audioFrame->linesize[0])
int size = _Decoder.GetAVAudioFrameSize();
if (!once)
{
    managedArray = new byte[size];
    once = true;
}
// Gets the data array from the audio AVFrame
IntPtr data = _Decoder.GetAVAudioFrameData();
Marshal.Copy(data, managedArray, 0, size);
if (_waveProvider == null)
{
    _waveProvider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));
    _waveOut.NumberOfBuffers = 2;
    _waveOut.DesiredLatency = 300;
    _waveOut.Init(_waveProvider);
    _waveOut.Play();
}
_waveProvider.AddSamples(managedArray, 0, size);
//////////////////////// Managed C++ /////////////////////////
while (_shouldRead)
{
    // if we're not initialized, sleep
    if (!_initialized || !_FFMpegObject->AVFormatContextReady())
    {
        Thread::Sleep(READ_INIT_WAIT_TIME);
    }
    else if (_sequentialFailedReadCount > MAX_SEQUENTIAL_READ_FAILS)
    {
        // we've failed a lot and probably lost the stream, try to reconnect.
        _sequentialFailedReadCount = 0;
        _initialized = false;
        StartInitThread();
        Thread::Sleep(READ_INIT_WAIT_TIME << 1);
    }
    else // otherwise, try reading one AV packet
    {
        if (_FFMpegObject->AVReadFrame())
        {
            if (_FFMpegObject->GetAVPacketStreamIndex() == _videoStreamIndex)
            {
                _sequentialFailedReadCount = 0;
                // decode the video packet
                _frameLock->WaitOne();
                frameFinished = _FFMpegObject->AVCodecDecodeVideo2();
                _frameLock->ReleaseMutex();
                if (frameFinished)
                {
                    _framesDecoded++;
                    if (!_isPaused) // If paused and AFAP playback, just don't call the callback
                    {
                        FrameFinished(this, EventArgs::Empty);
                        Thread::Sleep((int)(1000 / (GetFrameRate() * _speed)));
                    }
                }
            }
            else if (_FFMpegObject->GetAVPacketStreamIndex() == _audioStreamIndex)
            {
                // decode the audio packet.
                _frameLock->WaitOne();
                audioFrameFinished = _FFMpegObject->AVCodecDecodeAudio4();
                _frameLock->ReleaseMutex();
                if (audioFrameFinished)
                {
                    if (!_isPaused) // If paused and AFAP playback, just don't call the callback
                    {
                        // Fire the event - leaving it up to video to do the sleeps -- not sure if that will work or not
                        AudioFrameFinished(this, EventArgs::Empty);
                    }
                }
            }
            _FFMpegObject->AVFreeFramePacket();
        }
        else // failed to read an AV packet
        {
            _sequentialFailedReadCount++;
            _failedReadCount++;
            Thread::Sleep(READ_FAILED_WAIT_TIME);
        }
    }
}
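No answer is shown above, but one common cause of exactly this symptom (intelligible audio buried in static) is a sample-format mismatch: many FFmpeg decoders output planar float (AV_SAMPLE_FMT_FLTP), while the BufferedWaveProvider above is fed as if the frame held packed 16-bit PCM. As a hedged illustration only, this is the kind of conversion that would be needed; the two float arrays stand in for the frame's channel planes, and in real code FFmpeg's swr_convert does this job:

using System;

// Hypothetical conversion: planar float (FLTP) -> interleaved 16-bit PCM.
// 'left' and 'right' stand in for audioFrame->data[0] and data[1].
static byte[] PlanarFloatToPcm16(float[] left, float[] right)
{
    var pcm = new byte[left.Length * 2 * sizeof(short)];
    int o = 0;
    for (int i = 0; i < left.Length; i++)
    {
        // Interleave L/R and clamp each float sample to the 16-bit range
        short l = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, left[i] * 32767f));
        short r = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, right[i] * 32767f));
        pcm[o++] = (byte)(l & 0xFF); pcm[o++] = (byte)((l >> 8) & 0xFF);
        pcm[o++] = (byte)(r & 0xFF); pcm[o++] = (byte)((r >> 8) & 0xFF);
    }
    return pcm;
}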

How to get float array of samples from audio file

I'm developing a UWP application (for Windows 10) which works with audio data. At startup it receives a sample buffer in the form of a float array, whose items range from -1f to 1f.
Earlier I used NAudio.dll 1.8.0, which gave all the necessary functionality.
I worked with the WaveFileReader, waveBuffer.FloatBuffer, and WaveFileWriter classes.
However, when I finished this app and tried to build the Release version, I got this error:
ILT0042: Arrays of pointer types are not currently supported: 'System.Int32*[]'.
I've tried to solve it:
https://forums.xamarin.com/discussion/73169/uwp-10-build-fail-arrays-of-pointer-types-error
There is advice to remove the reference to the .dll, but I need it.
I've tried to install the same version of NAudio using Manage NuGet Packages, but WaveFileReader and WaveFileWriter are not available there.
In the NAudio developer's answer (How to store a .wav file in Windows 10 with NAudio) I read about using AudioGraph, but that only builds the float array of samples during realtime playback, while I need the full set of samples right after the audio file is loaded. Example of getting samples during recording or playback:
https://learn.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs
That's why I need help: how do I get a FloatBuffer for working with samples right after the audio file is loaded? For example, for building audio waveforms or for calculations before applying audio effects.
Thank you in advance.
I've tried to use FileStream and BitConverter.ToSingle(); however, the result differed from NAudio's.
In other words, I'm still looking for a solution.
private float[] GetBufferArray()
{
    string _path = ApplicationData.Current.LocalFolder.Path.ToString() + "/track_1.mp3";
    FileStream _stream = new FileStream(_path, FileMode.Open);
    BinaryReader _binaryReader = new BinaryReader(_stream);
    int _dataSize = _binaryReader.ReadInt32();
    byte[] _byteBuffer = _binaryReader.ReadBytes(_dataSize);
    int _sizeFloat = sizeof(float);
    float[] _floatBuffer = new float[_byteBuffer.Length / _sizeFloat];
    // Note: this reinterprets the raw (compressed) file bytes as floats,
    // which is why the values don't match NAudio's decoded samples
    for (int i = 0, j = 0; i < _byteBuffer.Length - _sizeFloat; i += _sizeFloat, j++)
    {
        _floatBuffer[j] = BitConverter.ToSingle(_byteBuffer, i);
    }
    return _floatBuffer;
}
Another way to read samples from an audio file in UWP is the AudioGraph API. It works for all audio formats that Windows 10 supports.
Here is a sample:
namespace AudioGraphAPI_read_samples_from_file
{
    // App opens a file using FileOpenPicker and reads samples into an array of
    // floats using the AudioGraph API

    // Declare COM interface to access AudioBuffer
    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }

    public sealed partial class MainPage : Page
    {
        StorageFile mediaFile;
        AudioGraph audioGraph;
        AudioFileInputNode fileInputNode;
        AudioFrameOutputNode frameOutputNode;

        /// <summary>
        /// We are going to fill this array with audio samples.
        /// This app loads only one channel.
        /// </summary>
        float[] audioData;

        /// <summary>
        /// Current position in the audioData array for loading audio samples
        /// </summary>
        int audioDataCurrentPosition = 0;

        public MainPage()
        {
            this.InitializeComponent();
        }

        private async void Open_Button_Click(object sender, RoutedEventArgs e)
        {
            // We ask the user to pick an audio file
            FileOpenPicker filePicker = new FileOpenPicker();
            filePicker.SuggestedStartLocation = PickerLocationId.MusicLibrary;
            filePicker.FileTypeFilter.Add(".mp3");
            filePicker.FileTypeFilter.Add(".wav");
            filePicker.FileTypeFilter.Add(".wma");
            filePicker.FileTypeFilter.Add(".m4a");
            filePicker.ViewMode = PickerViewMode.Thumbnail;
            mediaFile = await filePicker.PickSingleFileAsync();

            if (mediaFile == null)
            {
                return;
            }

            // We load samples from the file
            await LoadAudioFromFile(mediaFile);

            // We wait 5 sec
            await Task.Delay(5000);

            if (audioData == null)
            {
                ShowMessage("Error loading samples");
                return;
            }

            // After LoadAudioFromFile has finished we can use audioData.
            // For example, we can find the maximum amplitude
            float max = audioData[0];
            for (int i = 1; i < audioData.Length; i++)
                if (Math.Abs(audioData[i]) > Math.Abs(max))
                    max = audioData[i];
            ShowMessage("Maximum is " + max.ToString());
        }

        private async void ShowMessage(string Message)
        {
            var dialog = new MessageDialog(Message);
            await dialog.ShowAsync();
        }

        private async Task LoadAudioFromFile(StorageFile file)
        {
            // We initialize an instance of AudioGraph
            AudioGraphSettings settings =
                new AudioGraphSettings(
                    Windows.Media.Render.AudioRenderCategory.Media
                );
            CreateAudioGraphResult result1 = await AudioGraph.CreateAsync(settings);
            if (result1.Status != AudioGraphCreationStatus.Success)
            {
                ShowMessage("AudioGraph creation error: " + result1.Status.ToString());
            }
            audioGraph = result1.Graph;
            if (audioGraph == null)
                return;

            // We initialize the FileInputNode
            CreateAudioFileInputNodeResult result2 =
                await audioGraph.CreateFileInputNodeAsync(file);
            if (result2.Status != AudioFileNodeCreationStatus.Success)
            {
                ShowMessage("FileInputNode creation error: " + result2.Status.ToString());
            }
            fileInputNode = result2.FileInputNode;
            if (fileInputNode == null)
                return;

            // We read the audio file encoding properties to pass them to the FrameOutputNode creator
            AudioEncodingProperties audioEncodingProperties = fileInputNode.EncodingProperties;

            // We initialize the FrameOutputNode and connect it to fileInputNode
            frameOutputNode = audioGraph.CreateFrameOutputNode(audioEncodingProperties);
            fileInputNode.AddOutgoingConnection(frameOutputNode);

            // We add a handler for reaching the end of the file
            fileInputNode.FileCompleted += FileInput_FileCompleted;
            // We add a handler which will transfer every audio frame into audioData
            audioGraph.QuantumStarted += AudioGraph_QuantumStarted;

            // We initialize audioData
            int numOfSamples = (int)Math.Ceiling(
                (decimal)0.0000001
                * fileInputNode.Duration.Ticks
                * fileInputNode.EncodingProperties.SampleRate
            );
            audioData = new float[numOfSamples];
            audioDataCurrentPosition = 0;

            // We start the process which will read the audio file frame by frame
            // and will raise the QuantumStarted event when a frame is in memory
            audioGraph.Start();
        }

        private void FileInput_FileCompleted(AudioFileInputNode sender, object args)
        {
            audioGraph.Stop();
        }

        private void AudioGraph_QuantumStarted(AudioGraph sender, object args)
        {
            AudioFrame frame = frameOutputNode.GetFrame();
            ProcessInputFrame(frame);
        }

        unsafe private void ProcessInputFrame(AudioFrame frame)
        {
            using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
            using (IMemoryBufferReference reference = buffer.CreateReference())
            {
                // We get the data from the current buffer
                ((IMemoryBufferByteAccess)reference).GetBuffer(
                    out byte* dataInBytes,
                    out uint capacityInBytes
                );

                // We discard the first frame; it's full of zeros because of latency
                if (audioGraph.CompletedQuantumCount == 1) return;

                float* dataInFloat = (float*)dataInBytes;
                uint capacityInFloat = capacityInBytes / sizeof(float);
                // The number of channels defines the step between samples in the buffer
                uint step = fileInputNode.EncodingProperties.ChannelCount;
                // We transfer audio samples from the buffer into audioData
                for (uint i = 0; i < capacityInFloat; i += step)
                {
                    if (audioDataCurrentPosition < audioData.Length)
                    {
                        audioData[audioDataCurrentPosition] = dataInFloat[i];
                        audioDataCurrentPosition++;
                    }
                }
            }
        }
    }
}
Edited: it solves the problem because it reads samples from a file into a float array.
First popular way of getting AudioData from a wav file.
Thanks to PI user's answer (How to read the data in a wav file to an array), I've solved the problem of reading a wav file into a float array in a UWP project.
But the file's structure can differ from the standard one (maybe it's only my project that has this problem) when it is recorded to a wav file using AudioGraph, which leads to unpredictable results. Reading the format chunk id, we receive the value 1263424842 instead of the expected 544501094 ("fmt "), and all following values are then read incorrectly. 1263424842 is the ASCII fourcc "JUNK": AudioGraph adds this extra padding chunk to the recorded wav file, but the record format is still PCM. The extra chunk looks like data about the file format, but it also contains empty values, empty bytes. I found the correct id by searching through the bytes sequentially, and I adapted PI's solution for my needs. That's what I've got:
using (FileStream fs = File.Open(filename, FileMode.Open))
{
    BinaryReader reader = new BinaryReader(fs);
    int chunkID = reader.ReadInt32();
    int fileSize = reader.ReadInt32();
    int riffType = reader.ReadInt32();
    int fmtID;
    long _position = reader.BaseStream.Position;
    // Scan forward byte by byte until the "fmt " chunk id (544501094) is found,
    // skipping any extra chunk AudioGraph may have written
    while (_position != reader.BaseStream.Length - 1)
    {
        reader.BaseStream.Position = _position;
        int _fmtId = reader.ReadInt32();
        if (_fmtId == 544501094)
        {
            fmtID = _fmtId;
            break;
        }
        _position++;
    }
    int fmtSize = reader.ReadInt32();
    int fmtCode = reader.ReadInt16();
    int channels = reader.ReadInt16();
    int sampleRate = reader.ReadInt32();
    int byteRate = reader.ReadInt32();
    int fmtBlockAlign = reader.ReadInt16();
    int bitDepth = reader.ReadInt16();
    int fmtExtraSize;
    if (fmtSize == 18)
    {
        fmtExtraSize = reader.ReadInt16();
        reader.ReadBytes(fmtExtraSize);
    }
    int dataID = reader.ReadInt32();
    int dataSize = reader.ReadInt32();
    byte[] byteArray = reader.ReadBytes(dataSize);
    int bytesForSamp = bitDepth / 8;
    int samps = dataSize / bytesForSamp;
    float[] asFloat = null;
    switch (bitDepth)
    {
        case 16:
            Int16[] asInt16 = new Int16[samps];
            Buffer.BlockCopy(byteArray, 0, asInt16, 0, dataSize);
            IEnumerable<float> tempInt16 =
                from i in asInt16
                select i / (float)Int16.MaxValue;
            asFloat = tempInt16.ToArray();
            break;
        default:
            return false;
    }
    // For one-channel wav audio
    floatLeftBuffer.AddRange(asFloat);
}
Recording from a buffer back to a file uses the inverse algorithm. At the moment this is the only correct algorithm I have for working with wav files that allows getting the audio data.
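A minimal sketch of that inverse direction (a float array back to a 16-bit PCM wav file), assuming a plain 44-byte header with no extra chunks; the WriteWav name is illustrative, not from the original answer:

using System;
using System.IO;

// Hypothetical inverse of the reader above: float samples -> 16-bit PCM wav.
static void WriteWav(string filename, float[] samples, int sampleRate, short channels)
{
    using (var writer = new BinaryWriter(File.Create(filename)))
    {
        int dataSize = samples.Length * sizeof(short);
        writer.Write(0x46464952);            // "RIFF"
        writer.Write(36 + dataSize);         // file size minus 8
        writer.Write(0x45564157);            // "WAVE"
        writer.Write(0x20746d66);            // "fmt " (the 544501094 above)
        writer.Write(16);                    // fmt chunk size for plain PCM
        writer.Write((short)1);              // format code 1 = PCM
        writer.Write(channels);
        writer.Write(sampleRate);
        writer.Write(sampleRate * channels * sizeof(short)); // byte rate
        writer.Write((short)(channels * sizeof(short)));     // block align
        writer.Write((short)16);             // bits per sample
        writer.Write(0x61746164);            // "data"
        writer.Write(dataSize);
        foreach (float s in samples)
            writer.Write((short)(Math.Max(-1f, Math.Min(1f, s)) * Int16.MaxValue));
    }
}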
I used this article on working with AudioGraph: https://learn.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs. Note that you can set up the necessary record format with AudioEncodingQuality when recording from the mic to a file.
Second way of getting AudioData: using NAudio from NuGet Packages.
I used the MediaFoundationReader class.
float[] floatBuffer;
using (MediaFoundationReader media = new MediaFoundationReader(path))
{
    int _byteBuffer32_length = (int)media.Length * 2;
    int _floatBuffer_length = _byteBuffer32_length / sizeof(float);
    IWaveProvider stream32 = new Wave16ToFloatProvider(media);
    WaveBuffer _waveBuffer = new WaveBuffer(_byteBuffer32_length);
    stream32.Read(_waveBuffer, 0, (int)_byteBuffer32_length);
    floatBuffer = new float[_floatBuffer_length];
    for (int i = 0; i < _floatBuffer_length; i++)
    {
        floatBuffer[i] = _waveBuffer.FloatBuffer[i];
    }
}
Comparing the two ways, I noticed:
Received sample values differ by about 1/1,000,000. I can't say which way is more precise (if you know, I will be glad to hear);
The second way of getting AudioData works for MP3 files, too.
If you've found any mistakes or have comments about this, you're welcome.
Import statements:
using NAudio.Wave;
using NAudio.Wave.SampleProviders;
Inside a function:
AudioFileReader reader = new AudioFileReader(filename);
ISampleProvider isp = reader.ToSampleProvider();
float[] buffer = new float[reader.Length / 2];
isp.Read(buffer, 0, buffer.Length);
The buffer array will hold 32-bit IEEE float samples.
This uses the NAudio NuGet package in Visual Studio.

How to properly play an audio file to SKYPE4COM

I'm developing a Skype plugin. It should play an audio file during the call.
The audio player uses the NAudio library:
public class AudioPlayback : IDisposable, IPlayer
{
    WaveStream _outStream;
    IWavePlayer _player;
    IWaveIn _recorder;

    public event Action<byte[], int> DataAvailable;

    public AudioPlayback()
    {
        _recorder = new WaveInEvent();
        _recorder.WaveFormat = new WaveFormat(16000, 16, 1);
        _recorder.DataAvailable += OnRecorderDataAvailable;
    }

    private void OnRecorderDataAvailable(object sender, WaveInEventArgs e)
    {
        if (DataAvailable != null)
        {
            DataAvailable(e.Buffer, e.BytesRecorded);
        }
    }

    public void LoadFile(string fileName)
    {
        _outStream = new Mp3FileReader(fileName);
        if (_outStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
        {
            _outStream = WaveFormatConversionStream.CreatePcmStream(_outStream);
        }
    }

    private void CreatePlayer()
    {
        if (_player == null)
        {
            var waveOut = new WaveOut();
            waveOut.DesiredLatency = 200;
            waveOut.NumberOfBuffers = 2;
            waveOut.DeviceNumber = 0;
            _player = waveOut;
        }
    }

    public void Play()
    {
        CreatePlayer();
        if (_player.PlaybackState != PlaybackState.Playing)
        {
            if (_player.PlaybackState == PlaybackState.Stopped)
                _recorder.StartRecording();

            _player.Init(_outStream);
            _player.Play();
        }
    }
}
In the Skype plugin class I create a NetworkStream and a TcpListener, use the event from the player to get the buffer data, and write it to the network stream:
WriteToStream(buffer, 0, num);
When the call starts, I change the input for Skype:
call.InputDevice[TCallIoDeviceType.callIoDeviceTypeSoundcard] = "";
call.InputDevice[TCallIoDeviceType.callIoDeviceTypePort] = _inputPort.ToString();
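For illustration, a minimal sketch of the listener side that could feed such a port; the wiring to AudioPlayback.DataAvailable is an assumption based on the description above:

using System.Net;
using System.Net.Sockets;

// Hypothetical: accept Skype's connection on _inputPort and forward player data.
var listener = new TcpListener(IPAddress.Loopback, _inputPort);
listener.Start();
var client = listener.AcceptTcpClient(); // Skype connects after InputDevice is set
var stream = client.GetStream();

var player = new AudioPlayback();
player.DataAvailable += (buffer, num) => stream.Write(buffer, 0, num);
player.LoadFile("message.mp3");
player.Play();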
I was fighting with this for several hours. I finally got the sound on the other Skype end only after turning on Stereo Mixer (Recording devices).
The question: is this the right way? I don't much like having to play the sound and then capture it, but there is a positive side: I capture with exactly the parameters appropriate for Skype.

Playing WAVE file in C# using DirectX and threading?

At the moment I'm trying to figure out how to play a wave file in C# by filling the secondary buffer with data from the wave file through threading, and then playing it.
Any help or sample code I can use?
Thanks.
Sample code being used:
public delegate void PullAudio(short[] buffer, int length);

public class SoundPlayer : IDisposable
{
    private Device soundDevice;
    private SecondaryBuffer soundBuffer;
    private int samplesPerUpdate;
    private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
    private Thread thread;
    private PullAudio pullAudio;
    private short channels;
    private bool halted;
    private bool running;

    public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
    {
        this.channels = channels;
        this.pullAudio = pullAudio;

        this.soundDevice = new Device();
        this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);

        // Set up our wave format to 44,100Hz, with 16 bit resolution
        WaveFormat wf = new WaveFormat();
        wf.FormatTag = WaveFormatTag.Pcm;
        wf.SamplesPerSecond = 44100;
        wf.BitsPerSample = 16;
        wf.Channels = channels;
        wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
        wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;

        this.samplesPerUpdate = 512;

        // Create a buffer with 2 seconds of sample data
        BufferDescription bufferDesc = new BufferDescription(wf);
        bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
        bufferDesc.ControlPositionNotify = true;
        bufferDesc.GlobalFocus = true;

        this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);

        Notify notify = new Notify(this.soundBuffer);

        fillEvent[0] = new AutoResetEvent(false);
        fillEvent[1] = new AutoResetEvent(false);

        // Set up two notification events, one at halfway, and one at the end of the buffer
        BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
        posNotify[0] = new BufferPositionNotify();
        posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
        posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
        posNotify[1] = new BufferPositionNotify();
        posNotify[1].Offset = bufferDesc.BufferBytes - 1;
        posNotify[1].EventNotifyHandle = fillEvent[1].Handle;

        notify.SetNotificationPositions(posNotify);

        this.thread = new Thread(new ThreadStart(SoundPlayback));
        this.thread.Priority = ThreadPriority.Highest;
        this.Pause();
        this.running = true;

        this.thread.Start();
    }

    public void Pause()
    {
        if (this.halted) return;
        this.halted = true;
        Monitor.Enter(this.thread);
    }

    public void Resume()
    {
        if (!this.halted) return;
        this.halted = false;
        Monitor.Pulse(this.thread);
        Monitor.Exit(this.thread);
    }

    private void SoundPlayback()
    {
        lock (this.thread)
        {
            if (!this.running) return;

            // Set up the initial sound buffer to be the full length
            int bufferLength = this.samplesPerUpdate * 2 * this.channels;
            short[] soundData = new short[bufferLength];

            // Prime it with the first x seconds of data
            this.pullAudio(soundData, soundData.Length);
            this.soundBuffer.Write(0, soundData, LockFlag.None);

            // Start it playing
            this.soundBuffer.Play(0, BufferPlayFlags.Looping);

            int lastWritten = 0;
            while (this.running)
            {
                if (this.halted)
                {
                    Monitor.Pulse(this.thread);
                    Monitor.Wait(this.thread);
                }

                // Wait on one of the notification events
                WaitHandle.WaitAny(this.fillEvent, 3, true);

                // Get the current play position (divide by two because we are using 16 bit samples)
                int tmp = this.soundBuffer.PlayPosition / 2;

                // Generate new sounds from lastWritten to tmp in the sound buffer
                if (tmp == lastWritten)
                {
                    continue;
                }
                else
                {
                    soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
                }

                this.pullAudio(soundData, soundData.Length);

                // Write in the generated data
                soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);

                // Save the position we were at
                lastWritten = tmp;
            }
        }
    }

    public void Dispose()
    {
        this.running = false;
        this.Resume();

        if (this.soundBuffer != null)
        {
            this.soundBuffer.Dispose();
        }
        if (this.soundDevice != null)
        {
            this.soundDevice.Dispose();
        }
    }
}
The concept is the same as what I'm using, but I can't manage to get a set of wave byte[] data to play.
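As a hedged sketch of that missing piece: a PullAudio callback that drains a PCM byte[] (for example, the data chunk of a wave file) into the short[] the player requests. The WaveDataSource name and the assumption that the byte[] holds 16-bit PCM matching the buffer format are mine, not from the question:

using System;

// Hypothetical adapter: feeds a 16-bit PCM byte[] through the PullAudio delegate.
class WaveDataSource
{
    private readonly byte[] _pcm; // raw PCM data (e.g. the wave file's data chunk)
    private int _offset;

    public WaveDataSource(byte[] pcm) { _pcm = pcm; }

    // Matches: public delegate void PullAudio(short[] buffer, int length);
    public void Pull(short[] buffer, int length)
    {
        for (int i = 0; i < length; i++)
        {
            if (_offset + 1 < _pcm.Length)
            {
                buffer[i] = BitConverter.ToInt16(_pcm, _offset); // little-endian sample
                _offset += 2;
            }
            else
            {
                buffer[i] = 0; // past the end: play silence
            }
        }
    }
}

// Usage: var player = new SoundPlayer(owner, new WaveDataSource(waveBytes).Pull, 1);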
I have not done this, but the first place I would look is XNA.
I know that the C# Managed DirectX project was ditched in favor of XNA, and I have found XNA good for graphics; I prefer using it to DirectX.
What is the reason you decided not to just use SoundPlayer, as per this MSDN entry below?
private SoundPlayer Player = new SoundPlayer();

private void loadSoundAsync()
{
    // Note: You may need to change the location specified based on
    // the location of the sound to be played.
    this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
    this.Player.LoadCompleted += Player_LoadCompleted; // wire up the handler below
    this.Player.LoadAsync();
}

private void Player_LoadCompleted(
    object sender,
    System.ComponentModel.AsyncCompletedEventArgs e)
{
    if (this.Player.IsLoadCompleted)
    {
        this.Player.PlaySync();
    }
}
Usually I just load them all up in a thread or an async delegate, then Play or PlaySync them when needed.
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
