I have 6 audio sources which I need to play on 6 separate channels using ASIO.
I managed to get this working using WaveOutEvent as the output, but when I switch to AsioOut I get a null reference error, and I can't figure out what I'm doing wrong.
I need to use ASIO since I require 6 output channels and because I have to broadcast the audio over the network using Dante protocol.
The output device is a Dante Virtual Soundcard.
The error is:
NullReferenceException: Object reference not set to an instance of an object
NAudio.Wave.AsioOut.driver_BufferUpdate (System.IntPtr[] inputChannels, System.IntPtr[] outputChannels) (at <7b1c1a8badc0497bac142a81b5ef5bcf>:0)
NAudio.Wave.Asio.AsioDriverExt.BufferSwitchCallBack (System.Int32 doubleBufferIndex, System.Boolean directProcess) (at <7b1c1a8badc0497bac142a81b5ef5bcf>:0)
UnityEngine.<>c:<RegisterUECatcher>b__0_0(Object, UnhandledExceptionEventArgs)
This is the (simplified) code that plays the audio. The buffers are filled by other methods in external classes.
using NAudio.Wave;
using System;
using System.Collections.Generic;
public class AudioMultiplexer
{
MultiplexingWaveProvider multiplexer;
AsioOut asioOut;
public List<BufferedWaveProvider> buffers;
public int outputChannels = 6;
public int waveFormatSampleRate = 48000;
public int waveFormatBitDepth = 24;
public int waveFormatChannels = 2;
public void Start()
{
buffers = new List<BufferedWaveProvider>();
var outputFormat = new WaveFormat(waveFormatSampleRate, waveFormatBitDepth, waveFormatChannels);
for (int i = 0; i < outputChannels; i++)
{
var buffer = new BufferedWaveProvider(outputFormat);
buffer.DiscardOnBufferOverflow = true;
// Make sure the buffers are big enough, just in case
buffer.BufferDuration = TimeSpan.FromMinutes(5);
buffers.Add(buffer);
}
multiplexer = new MultiplexingWaveProvider(buffers, outputChannels);
for (int i = 0; i < outputChannels; i++)
{
// Each input has 2 channels, left & right, take only one channel from each input source
multiplexer.ConnectInputToOutput(i * 2, i);
}
var driverName = GetDanteDriverName();
if (string.IsNullOrEmpty(driverName))
{
return;
}
asioOut = new AsioOut(driverName);
asioOut.AudioAvailable += AsioOut_AudioAvailable;
asioOut.Init(multiplexer);
asioOut.Play();
}
private void AsioOut_AudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
// Do nothing for now
Console.WriteLine("Audio available");
}
private string GetDanteDriverName()
{
foreach (var driverName in AsioOut.GetDriverNames())
{
if (driverName.Contains("Dante Virtual Soundcard"))
{
return driverName;
}
}
return null;
}
private void OnDestroy()
{
asioOut.Stop();
asioOut.Dispose();
asioOut = null;
}
}
I may have a misunderstanding of how AsioOut works but I'm not sure where to start on this or how to debug the error.
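The only sanity check I can think of so far is querying the driver before calling Init, to confirm it actually reports the six outputs I expect (a minimal sketch using standard NAudio calls; whether the Dante Virtual Soundcard reports them this way is an assumption):
using NAudio.Wave;
using System;

public static class AsioDriverCheck
{
    public static void Dump(string driverName)
    {
        // Load the driver without starting playback and print what it reports.
        using (var asio = new AsioOut(driverName))
        {
            Console.WriteLine("Driver: " + driverName);
            Console.WriteLine("Output channels: " + asio.DriverOutputChannelCount);
            Console.WriteLine("Input channels: " + asio.DriverInputChannelCount);
            // asio.ShowControlPanel(); // vendor panel: confirm buffer size / sample rate
        }
    }
}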
You can use the Low-latency Multichannel Audio asset for Unity to play multi-channel audio with ASIO. It's made specifically for Unity, unlike NAudio, and it works without problems!
Hey,
I'm trying to capture my screen and send/communicate the stream via MR-WebRTC. Communication between two PCs, or between a PC and a HoloLens, worked with webcams for me, so I thought the next step could be streaming my screen. So I took the UWP application that I already had, which worked with my webcam, and tried to make things work:
The UWP app is based on the example UWP app from MR-WebRTC.
For capturing I'm using the instructions from MS about screen capturing via GraphicsCapturePicker.
So now I'm stuck in the following situation:
I get a frame from the screen capturing, but its type is Direct3D11CaptureFrame. You can see it below in the code snippet.
MR-WebRTC takes a frame of type I420AVideoFrame (also in a code snippet).
How can I "connect" them?
I420AVideoFrame wants a frame in the I420A format (YUV 4:2:0).
Configuring the framePool I can set the DirectXPixelFormat, but it has no YUV420.
I found this post on SO saying that it is possible.
Code snippet, frame from Direct3D:
_framePool = Direct3D11CaptureFramePool.Create(
_canvasDevice, // D3D device
DirectXPixelFormat.B8G8R8A8UIntNormalized, // Pixel format
3, // Number of frames
_item.Size); // Size of the buffers
_session = _framePool.CreateCaptureSession(_item);
_session.StartCapture();
_framePool.FrameArrived += (s, a) =>
{
using (var frame = _framePool.TryGetNextFrame())
{
// Here I would take the Frame and call the MR-WebRTC method LocalI420AFrameReady
}
};
Code snippet, frame from WebRTC:
// This is the way with the webcam; so LocalI420 was subscribed to
// the event I420AVideoFrameReady and got the frame from there
_webcamSource = await DeviceVideoTrackSource.CreateAsync();
_webcamSource.I420AVideoFrameReady += LocalI420AFrameReady;
// enqueueing the newly captured video frames into the bridge,
// which will later deliver them when the Media Foundation
// playback pipeline requests them.
private void LocalI420AFrameReady(I420AVideoFrame frame)
{
lock (_localVideoLock)
{
if (!_localVideoPlaying)
{
_localVideoPlaying = true;
// Capture the resolution into local variable useable from the lambda below
uint width = frame.width;
uint height = frame.height;
// Defer UI-related work to the main UI thread
RunOnMainThread(() =>
{
// Bridge the local video track with the local media player UI
int framerate = 30; // assumed, for lack of an actual value
_localVideoSource = CreateI420VideoStreamSource(
width, height, framerate);
var localVideoPlayer = new MediaPlayer();
localVideoPlayer.Source = MediaSource.CreateFromMediaStreamSource(
_localVideoSource);
localVideoPlayerElement.SetMediaPlayer(localVideoPlayer);
localVideoPlayer.Play();
});
}
}
// Enqueue the incoming frame into the video bridge; the media player will
// later dequeue it as soon as it's ready.
_localVideoBridge.HandleIncomingVideoFrame(frame);
}
I found a solution to my problem by creating an issue on the GitHub repo. The answer was provided by KarthikRichie:
You have to use the ExternalVideoTrackSource
You can convert from the Direct3D11CaptureFrame to Argb32VideoFrame
// Setting up external video track source
_screenshareSource = ExternalVideoTrackSource.CreateFromArgb32Callback(FrameCallback);
struct WebRTCFrameData
{
public IntPtr Data;
public uint Height;
public uint Width;
public int Stride;
}
public void FrameCallback(in FrameRequest frameRequest)
{
try
{
if (FramePool != null)
{
using (Direct3D11CaptureFrame _currentFrame = FramePool.TryGetNextFrame())
{
if (_currentFrame != null)
{
WebRTCFrameData webRTCFrameData = ProcessBitmap(_currentFrame.Surface).Result;
frameRequest.CompleteRequest(new Argb32VideoFrame()
{
data = webRTCFrameData.Data,
height = webRTCFrameData.Height,
width = webRTCFrameData.Width,
stride = webRTCFrameData.Stride
});
}
}
}
}
catch (Exception ex)
{
}
}
private async Task<WebRTCFrameData> ProcessBitmap(IDirect3DSurface surface)
{
SoftwareBitmap softwareBitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(surface, Windows.Graphics.Imaging.BitmapAlphaMode.Straight);
byte[] imageBytes = new byte[4 * softwareBitmap.PixelWidth * softwareBitmap.PixelHeight];
softwareBitmap.CopyToBuffer(imageBytes.AsBuffer());
WebRTCFrameData argb32VideoFrame = new WebRTCFrameData();
argb32VideoFrame.Data = GetByteIntPtr(imageBytes);
argb32VideoFrame.Height = (uint)softwareBitmap.PixelHeight;
argb32VideoFrame.Width = (uint)softwareBitmap.PixelWidth;
var test = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read);
int count = test.GetPlaneCount();
var pl = test.GetPlaneDescription(count - 1);
argb32VideoFrame.Stride = pl.Stride;
return argb32VideoFrame;
}
private IntPtr GetByteIntPtr(byte[] byteArr)
{
IntPtr intPtr2 = System.Runtime.InteropServices.Marshal.UnsafeAddrOfPinnedArrayElement(byteArr, 0);
return intPtr2;
}
I'm developing an audio application in C# and UWP using the AudioGraph API.
My AudioGraph setup is the following:
AudioFileInputNode --> AudioSubmixNode --> AudioDeviceOutputNode.
I attached a custom echo effect on the AudioSubmixNode.
If I play the AudioFileInputNode I can hear some echo.
But when the AudioFileInputNode playback finishes, the echo sound stops abruptly.
I would like it to stop gradually after a few seconds instead.
If I use the EchoEffectDefinition from the AudioGraph API, the echo sound is not stopped after the sample playback has finished.
I don't know if the problem comes from my effect implementation or if it's a strange behavior of the AudioGraph API...
The behavior is the same in the "AudioCreation" sample in the SDK, scenario 6.
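For context, the graph is built roughly like this (a minimal sketch, not my exact code; the StorageFile and the effect's activatable class name are assumptions, and error checking of the *.Status results is omitted):
// Requires: Windows.Media.Audio, Windows.Media.Effects, Windows.Media.Render,
//           Windows.Storage, System.Threading.Tasks
private async Task<AudioGraph> BuildGraphAsync(StorageFile file)
{
    var settings = new AudioGraphSettings(AudioRenderCategory.Media);
    var graphResult = await AudioGraph.CreateAsync(settings);
    AudioGraph graph = graphResult.Graph;

    var outputResult = await graph.CreateDeviceOutputNodeAsync();
    AudioDeviceOutputNode deviceOutput = outputResult.DeviceOutputNode;

    // The submix node carries the custom echo effect implemented below.
    AudioSubmixNode submix = graph.CreateSubmixNode();
    submix.EffectDefinitions.Add(new AudioEffectDefinition(typeof(AudioEchoEffect).FullName));
    submix.AddOutgoingConnection(deviceOutput);

    var fileResult = await graph.CreateFileInputNodeAsync(file);
    AudioFileInputNode fileInput = fileResult.FileInputNode;
    fileInput.AddOutgoingConnection(submix);

    graph.Start();
    return graph;
}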
Here is my custom effect implementation:
public sealed class AudioEchoEffect : IBasicAudioEffect
{
public AudioEchoEffect()
{
}
private readonly AudioEncodingProperties[] _supportedEncodingProperties = new AudioEncodingProperties[]
{
AudioEncodingProperties.CreatePcm(44100, 1, 32),
AudioEncodingProperties.CreatePcm(48000, 1, 32),
};
private AudioEncodingProperties _currentEncodingProperties;
private IPropertySet _propertySet;
private readonly Queue<float> _echoBuffer = new Queue<float>(100000);
private int _delaySamplesCount;
private float Delay
{
get
{
if (_propertySet != null && _propertySet.TryGetValue("Delay", out object val))
{
return (float)val;
}
return 500.0f;
}
}
private float Feedback
{
get
{
if (_propertySet != null && _propertySet.TryGetValue("Feedback", out object val))
{
return (float)val;
}
return 0.5f;
}
}
private float Mix
{
get
{
if (_propertySet != null && _propertySet.TryGetValue("Mix", out object val))
{
return (float)val;
}
return 0.5f;
}
}
public bool UseInputFrameForOutput { get { return true; } }
public IReadOnlyList<AudioEncodingProperties> SupportedEncodingProperties { get { return _supportedEncodingProperties; } }
public void SetProperties(IPropertySet configuration)
{
_propertySet = configuration;
}
public void SetEncodingProperties(AudioEncodingProperties encodingProperties)
{
_currentEncodingProperties = encodingProperties;
// compute the number of samples for the delay
_delaySamplesCount = (int)MathF.Round((this.Delay / 1000.0f) * encodingProperties.SampleRate);
// fill empty samples in the buffer according to the delay
for (int i = 0; i < _delaySamplesCount; i++)
{
_echoBuffer.Enqueue(0.0f);
}
}
unsafe public void ProcessFrame(ProcessAudioFrameContext context)
{
AudioFrame frame = context.InputFrame;
using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.ReadWrite))
using (IMemoryBufferReference reference = buffer.CreateReference())
{
((IMemoryBufferByteAccess)reference).GetBuffer(out byte* dataInBytes, out uint capacity);
float* dataInFloat = (float*)dataInBytes;
int dataInFloatLength = (int)buffer.Length / sizeof(float);
// read parameters once
float currentWet = this.Mix;
float currentDry = 1.0f - currentWet;
float currentFeedback = this.Feedback;
// Process audio data
float sample, echoSample, outSample;
for (int i = 0; i < dataInFloatLength; i++)
{
// read values
sample = dataInFloat[i];
echoSample = _echoBuffer.Dequeue();
// compute output sample
outSample = (currentDry * sample) + (currentWet * echoSample);
dataInFloat[i] = outSample;
// compute delay sample
echoSample = sample + (currentFeedback * echoSample);
_echoBuffer.Enqueue(echoSample);
}
}
}
public void Close(MediaEffectClosedReason reason)
{
}
public void DiscardQueuedFrames()
{
// reset the delay buffer
_echoBuffer.Clear();
for (int i = 0; i < _delaySamplesCount; i++)
{
_echoBuffer.Enqueue(0.0f);
}
}
}
EDIT:
I changed my audio effect to mix the input samples with a sine wave. The ProcessFrame effect method runs continuously before and after the sample playback (when the effect is active), so the sine wave should be heard before and after the sample playback. But the AudioGraph API seems to ignore the effect output when there is no active playback...
Here is a screen capture of the audio output:
So my question is: how can the built-in EchoEffectDefinition output some sound after the playback has finished? Access to the EchoEffectDefinition source code would be a great help...
By infinitely looping the file input node, it will always provide an input frame until the audio graph stops. Of course we do not want to hear the file loop, so we can listen for the FileCompleted event of AudioFileInputNode. When the file finishes playing, the event fires and we just need to set the OutgoingGain of the AudioFileInputNode to zero. So the file plays back once, but it then continues to loop silently, passing input frames that have no audio content to which the echo can be added.
Still using scenario 4 in the AudioCreation sample as an example: in scenario 4 there is a property named fileInputNode1. As mentioned above, please add the following code to fileInputNode1 and test again using your custom echo effect.
fileInputNode1.LoopCount = null; //Null makes it loop infinitely
fileInputNode1.FileCompleted += FileInputNode1_FileCompleted;
private void FileInputNode1_FileCompleted(AudioFileInputNode sender, object args)
{
fileInputNode1.OutgoingGain = 0.0;
}
I'm making an audio player using XAudio2. We are streaming data in packets of 640 bytes, at a sample rate of 8000 Hz and a sample depth of 16 bits. We are using SlimDX to access XAudio2.
But when playing sound, we are noticing that the sound quality is bad. This, for example, is a 3 kHz sine curve, captured with Audacity.
I have condensed the audio player to the bare basics, but the audio quality is still bad. Is this a bug in XAudio2, SlimDX, or my code, or is this simply an artifact that occurs when one goes from 8 kHz to 44.1 kHz? The last one seems unreasonable, as we also generate PCM wav files which are played perfectly by Windows Media Player.
The following is the basic implementation, which generates the broken Sine.
public partial class MainWindow : Window
{
private XAudio2 device = new XAudio2();
private WaveFormatExtensible format = new WaveFormatExtensible();
private SourceVoice sourceVoice = null;
private MasteringVoice masteringVoice = null;
private Guid KSDATAFORMAT_SUBTYPE_PCM = new Guid("00000001-0000-0010-8000-00aa00389b71");
private AutoResetEvent BufferReady = new AutoResetEvent(false);
private PlayBufferPool PlayBuffers = new PlayBufferPool();
public MainWindow()
{
InitializeComponent();
Closing += OnClosing;
format.Channels = 1;
format.BitsPerSample = 16;
format.FormatTag = WaveFormatTag.Extensible;
format.BlockAlignment = (short)(format.Channels * (format.BitsPerSample / 8));
format.SamplesPerSecond = 8000;
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlignment;
format.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
}
private void OnClosing(object sender, CancelEventArgs cancelEventArgs)
{
sourceVoice.Stop();
sourceVoice.Dispose();
masteringVoice.Dispose();
PlayBuffers.Dispose();
}
private void button_Click(object sender, RoutedEventArgs e)
{
masteringVoice = new MasteringVoice(device);
PlayBuffer buffer = PlayBuffers.NextBuffer();
GenerateSine(buffer.Buffer);
buffer.AudioBuffer.AudioBytes = 640;
sourceVoice = new SourceVoice(device, format, VoiceFlags.None, 8);
sourceVoice.BufferStart += new EventHandler<ContextEventArgs>(sourceVoice_BufferStart);
sourceVoice.BufferEnd += new EventHandler<ContextEventArgs>(sourceVoice_BufferEnd);
sourceVoice.SubmitSourceBuffer(buffer.AudioBuffer);
sourceVoice.Start();
}
private void sourceVoice_BufferEnd(object sender, ContextEventArgs e)
{
BufferReady.Set();
}
private void sourceVoice_BufferStart(object sender, ContextEventArgs e)
{
BufferReady.WaitOne(1000);
PlayBuffer nextBuffer = PlayBuffers.NextBuffer();
nextBuffer.DataStream.Position = 0;
nextBuffer.AudioBuffer.AudioBytes = 640;
GenerateSine(nextBuffer.Buffer);
Result r = sourceVoice.SubmitSourceBuffer(nextBuffer.AudioBuffer);
}
private void GenerateSine(byte[] buffer)
{
double sampleRate = 8000.0;
double amplitude = 0.25 * short.MaxValue;
double frequency = 3000.0;
for (int n = 0; n < buffer.Length / 2; n++)
{
short[] s = { (short)(amplitude * Math.Sin((2 * Math.PI * n * frequency) / sampleRate)) };
Buffer.BlockCopy(s, 0, buffer, n * 2, 2);
}
}
}
public class PlayBuffer : IDisposable
{
#region Private variables
private IntPtr BufferPtr;
private GCHandle BufferHandle;
#endregion
#region Constructors
public PlayBuffer()
{
Index = 0;
Buffer = new byte[640 * 4]; // one 640-byte packet = 40 ms at 8 kHz, 16-bit mono
BufferHandle = GCHandle.Alloc(this.Buffer, GCHandleType.Pinned);
BufferPtr = new IntPtr(BufferHandle.AddrOfPinnedObject().ToInt32());
DataStream = new DataStream(BufferPtr, 640 * 4, true, false);
AudioBuffer = new AudioBuffer();
AudioBuffer.AudioData = DataStream;
}
public PlayBuffer(int index)
: this()
{
Index = index;
}
#endregion
#region Destructor
~PlayBuffer()
{
Dispose();
}
#endregion
#region Properties
protected int Index { get; private set; }
public byte[] Buffer { get; private set; }
public DataStream DataStream { get; private set; }
public AudioBuffer AudioBuffer { get; private set; }
#endregion
#region Public functions
public void Dispose()
{
if (AudioBuffer != null)
{
AudioBuffer.Dispose();
AudioBuffer = null;
}
if (DataStream != null)
{
DataStream.Dispose();
DataStream = null;
}
}
#endregion
}
public class PlayBufferPool : IDisposable
{
#region Private variables
private int _currentIndex = -1;
private PlayBuffer[] _buffers = new PlayBuffer[2];
#endregion
#region Constructors
public PlayBufferPool()
{
for (int i = 0; i < 2; i++)
Buffers[i] = new PlayBuffer(i);
}
#endregion
#region Destructor
~PlayBufferPool()
{
Dispose();
}
#endregion
#region Properties
protected int CurrentIndex
{
get { return _currentIndex; }
set { _currentIndex = value; }
}
protected PlayBuffer[] Buffers
{
get { return _buffers; }
set { _buffers = value; }
}
#endregion
#region Public functions
public void Dispose()
{
for (int i = 0; i < Buffers.Length; i++)
{
if (Buffers[i] == null)
continue;
Buffers[i].Dispose();
Buffers[i] = null;
}
}
public PlayBuffer NextBuffer()
{
CurrentIndex = (CurrentIndex + 1) % Buffers.Length;
return Buffers[CurrentIndex];
}
#endregion
}
Some extra details:
This is used to replay recorded voice with various compressions such as ALAW, µLAW or TrueSpeech. The data is sent in small packets, decoded and sent to this player. This is the reason why we're using such a low sampling rate and such small buffers.
There are no problems with our data, however, as generating a WAV file with the data results in perfect replay by WMP or VLC.
Edit: We have now "solved" this by rewriting the player in NAudio.
I'd still be interested in any input as to what is happening here. Is it our approach in the PlayBuffers, or is it simply a bug/limitation in DirectX or the wrappers? I tried using SharpDX instead of SlimDX, but that did not change the result at all.
It looks as if the upsampling is done without a proper anti-aliasing (reconstruction) filter. The cutoff frequency is far too high (above the original Nyquist frequency) and therefore a lot of the aliases are being preserved, resulting in output resembling piecewise-linear interpolation between the samples taken at 8000 Hz.
Although all your different options are doing an upconversion from 8kHz to 44.1kHz, the way in which they do that is important, and the fact that one library does it well is no proof that the upconversion is not the source of error in the other.
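To illustrate: if the 8 kHz data is run through a band-limited resampler before it reaches the output device, the aliases are filtered out. A minimal sketch using NAudio; the packet handling around it is assumed:
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Sketch: queue the decoded 8 kHz / 16-bit / mono packets and upsample them
// with a filtered (WDL) resampler instead of relying on a naive upconversion.
var input = new BufferedWaveProvider(new WaveFormat(8000, 16, 1));
var resampled = new WdlResamplingSampleProvider(input.ToSampleProvider(), 44100);

var output = new WaveOutEvent();
output.Init(resampled.ToWaveProvider16());
output.Play();

// For each decoded 640-byte packet:
// input.AddSamples(packet, 0, packet.Length);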
It's been a while since I worked with sound and frequencies, but here is what I remember: you have a sample rate of 8000 Hz and want a sine frequency of 3000 Hz. So for one second you have 8000 samples, and in that second you want your sine to oscillate 3000 times. That is below the Nyquist frequency (half your sample rate), but only barely (see the Nyquist–Shannon sampling theorem). So I would not expect good quality here.
In fact: step through the GenerateSine method and you'll see that s[0] will contain the values 0, 5792, -8191, 5792, 0, -5792, 8191, -5792, 0, 5792...
Nonetheless, this doesn't explain the odd sine you recorded back, and I'm not sure how many samples the human ear needs to hear a "good" sine wave.
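Those values are easy to reproduce outside the player with the same formula GenerateSine uses:
using System;

// Prints the first ten samples of a 3 kHz sine sampled at 8 kHz
// (amplitude 0.25 * short.MaxValue): 0, 5792, -8191, 5792, 0, -5792, 8191, -5792, 0, 5792
double amplitude = 0.25 * short.MaxValue;
for (int n = 0; n < 10; n++)
{
    short s = (short)(amplitude * Math.Sin((2 * Math.PI * n * 3000.0) / 8000.0));
    Console.Write(s + " ");
}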
At the moment I'm trying to figure out how to play a wave file in C# by filling up the secondary buffer with data from the wave file through threading, and then playing it.
Any help or sample code I can use?
Thanks.
Sample code being used:
public delegate void PullAudio(short[] buffer, int length);
public class SoundPlayer : IDisposable
{
private Device soundDevice;
private SecondaryBuffer soundBuffer;
private int samplesPerUpdate;
private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
private Thread thread;
private PullAudio pullAudio;
private short channels;
private bool halted;
private bool running;
public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
{
this.channels = channels;
this.pullAudio = pullAudio;
this.soundDevice = new Device();
this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);
// Set up our wave format to 44,100Hz, with 16 bit resolution
WaveFormat wf = new WaveFormat();
wf.FormatTag = WaveFormatTag.Pcm;
wf.SamplesPerSecond = 44100;
wf.BitsPerSample = 16;
wf.Channels = channels;
wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;
this.samplesPerUpdate = 512;
// Create a buffer with 2 seconds of sample data
BufferDescription bufferDesc = new BufferDescription(wf);
bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
bufferDesc.ControlPositionNotify = true;
bufferDesc.GlobalFocus = true;
this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);
Notify notify = new Notify(this.soundBuffer);
fillEvent[0] = new AutoResetEvent(false);
fillEvent[1] = new AutoResetEvent(false);
// Set up two notification events, one at halfway, and one at the end of the buffer
BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
posNotify[0] = new BufferPositionNotify();
posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
posNotify[1] = new BufferPositionNotify();
posNotify[1].Offset = bufferDesc.BufferBytes - 1;
posNotify[1].EventNotifyHandle = fillEvent[1].Handle;
notify.SetNotificationPositions(posNotify);
this.thread = new Thread(new ThreadStart(SoundPlayback));
this.thread.Priority = ThreadPriority.Highest;
this.Pause();
this.running = true;
this.thread.Start();
}
public void Pause()
{
if (this.halted) return;
this.halted = true;
Monitor.Enter(this.thread);
}
public void Resume()
{
if (!this.halted) return;
this.halted = false;
Monitor.Pulse(this.thread);
Monitor.Exit(this.thread);
}
private void SoundPlayback()
{
lock (this.thread)
{
if (!this.running) return;
// Set up the initial sound buffer to be the full length
int bufferLength = this.samplesPerUpdate * 2 * this.channels;
short[] soundData = new short[bufferLength];
// Prime it with the first x seconds of data
this.pullAudio(soundData, soundData.Length);
this.soundBuffer.Write(0, soundData, LockFlag.None);
// Start it playing
this.soundBuffer.Play(0, BufferPlayFlags.Looping);
int lastWritten = 0;
while (this.running)
{
if (this.halted)
{
Monitor.Pulse(this.thread);
Monitor.Wait(this.thread);
}
// Wait on one of the notification events
WaitHandle.WaitAny(this.fillEvent, 3, true);
// Get the current play position (divide by two because we are using 16 bit samples)
int tmp = this.soundBuffer.PlayPosition / 2;
// Generate new sounds from lastWritten to tmp in the sound buffer
if (tmp == lastWritten)
{
continue;
}
else
{
soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
}
this.pullAudio(soundData, soundData.Length);
// Write in the generated data
soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);
// Save the position we were at
lastWritten = tmp;
}
}
}
public void Dispose()
{
this.running = false;
this.Resume();
if (this.soundBuffer != null)
{
this.soundBuffer.Dispose();
}
if (this.soundDevice != null)
{
this.soundDevice.Dispose();
}
}
}
The concept is the same as what I'm using, but I can't manage to get a set of wave byte[] data to play.
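One way to hook a wave byte[] into the PullAudio delegate above would be something like this (a sketch: it naively assumes a canonical 44-byte header and 16-bit mono samples; a real reader should parse the RIFF chunks):
using System;

// Wraps the sample data of a wave file and serves it through the PullAudio delegate.
public class WaveFeeder
{
    private readonly short[] samples;
    private int position;

    public WaveFeeder(byte[] waveFileBytes)
    {
        // Naive: skip a canonical 44-byte header and treat the rest as 16-bit PCM.
        int sampleCount = (waveFileBytes.Length - 44) / 2;
        samples = new short[sampleCount];
        Buffer.BlockCopy(waveFileBytes, 44, samples, 0, sampleCount * 2);
    }

    // Matches: public delegate void PullAudio(short[] buffer, int length);
    public void Pull(short[] buffer, int length)
    {
        for (int i = 0; i < length; i++)
        {
            // Zero-fill (silence) once the file data runs out.
            buffer[i] = position < samples.Length ? samples[position++] : (short)0;
        }
    }
}

// Usage with the SoundPlayer class above (mono data assumed; "this" is a WinForms Control):
// var feeder = new WaveFeeder(File.ReadAllBytes("test.wav"));
// var player = new SoundPlayer(this, feeder.Pull, 1);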
I have not done this myself.
But the first place I would look is XNA.
I know that the C# Managed DirectX project was ditched in favor of XNA, and I have found XNA to be good for graphics - I prefer using it to DirectX.
What is the reason you decided not to just use SoundPlayer, as per this MSDN entry below?
private SoundPlayer Player = new SoundPlayer();
private void loadSoundAsync()
{
// Note: You may need to change the location specified based on
// the location of the sound to be played.
this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
this.Player.LoadAsync();
}
private void Player_LoadCompleted (
object sender,
System.ComponentModel.AsyncCompletedEventArgs e)
{
if (this.Player.IsLoadCompleted)
{
this.Player.PlaySync();
}
}
Usually I just load them all up in a thread or an async delegate, then Play or PlaySync them when needed.
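For example (a sketch; the file paths are placeholders):
using System.Media;

// Load several players up front, then fire them when needed.
var players = new[]
{
    new SoundPlayer(@"C:\sounds\start.wav"),
    new SoundPlayer(@"C:\sounds\stop.wav"),
};

foreach (var p in players)
{
    p.LoadAsync();   // loads in the background; or call Load() from your own thread
}

// Later, when the event happens:
// players[0].Play();      // asynchronous playback
// players[1].PlaySync();  // blocks until finished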
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
I would like to create a sound visualization system using C# and the .NET Framework.
It could look like the one in the Winamp application.
Is there a free library, or some interesting articles, that describe how to do it?
Example: http://img44.imageshack.us/img44/9982/examplel.png
You can try these links:
OpenVP (a free and open-source platform for developing music visualizations, written in C#); see the OpenVP Screenshots.
Sound visualizer in C#
Play and Visualize WAV Files using Managed Direct Sound
Bye.
Here's a script that computes the FFT of any sound played on the computer, using WASAPI loopback. It uses CSCore and its WinformsVisualization example:
using CSCore;
using CSCore.SoundIn;
using CSCore.Codecs.WAV;
using WinformsVisualization.Visualization;
using CSCore.DSP;
using CSCore.Streams;
using System;
public class SoundCapture
{
public int numBars = 30;
public int minFreq = 5;
public int maxFreq = 4500;
public int barSpacing = 0;
public bool logScale = true;
public bool isAverage = false;
public float highScaleAverage = 2.0f;
public float highScaleNotAverage = 3.0f;
LineSpectrum lineSpectrum;
WasapiCapture capture;
WaveWriter writer;
FftSize fftSize;
float[] fftBuffer;
SingleBlockNotificationStream notificationSource;
BasicSpectrumProvider spectrumProvider;
IWaveSource finalSource;
public SoundCapture()
{
// This uses the wasapi api to get any sound data played by the computer
capture = new WasapiLoopbackCapture();
capture.Initialize();
// Get our capture as a source
IWaveSource source = new SoundInSource(capture);
// From https://github.com/filoe/cscore/blob/master/Samples/WinformsVisualization/Form1.cs
// This is the typical size, you can change this for higher detail as needed
fftSize = FftSize.Fft4096;
// Actual fft data
fftBuffer = new float[(int)fftSize];
// These are the actual classes that give you spectrum data
// The specific vars of lineSpectrum here aren't that important because they can be changed by the user
spectrumProvider = new BasicSpectrumProvider(capture.WaveFormat.Channels,
capture.WaveFormat.SampleRate, fftSize);
lineSpectrum = new LineSpectrum(fftSize)
{
SpectrumProvider = spectrumProvider,
UseAverage = true,
BarCount = numBars,
BarSpacing = 2,
IsXLogScale = false,
ScalingStrategy = ScalingStrategy.Linear
};
// Tells us when data is available to send to our spectrum
var notificationSource = new SingleBlockNotificationStream(source.ToSampleSource());
notificationSource.SingleBlockRead += NotificationSource_SingleBlockRead;
// We use this to request data so it actually flows through (figuring this out took forever...)
finalSource = notificationSource.ToWaveSource();
capture.DataAvailable += Capture_DataAvailable;
capture.Start();
}
private void Capture_DataAvailable(object sender, DataAvailableEventArgs e)
{
finalSource.Read(e.Data, e.Offset, e.ByteCount);
}
private void NotificationSource_SingleBlockRead(object sender, SingleBlockReadEventArgs e)
{
spectrumProvider.Add(e.Left, e.Right);
}
~SoundCapture()
{
capture.Stop();
capture.Dispose();
}
public float[] barData = new float[20];
public float[] GetFFtData()
{
lock (barData)
{
lineSpectrum.BarCount = numBars;
if (numBars != barData.Length)
{
barData = new float[numBars];
}
}
if (spectrumProvider.IsNewDataAvailable)
{
lineSpectrum.MinimumFrequency = minFreq;
lineSpectrum.MaximumFrequency = maxFreq;
lineSpectrum.IsXLogScale = logScale;
lineSpectrum.BarSpacing = barSpacing;
lineSpectrum.SpectrumProvider.GetFftData(fftBuffer, this);
return lineSpectrum.GetSpectrumPoints(100.0f, fftBuffer);
}
else
{
return null;
}
}
public void ComputeData()
{
float[] resData = GetFFtData();
int numBars = barData.Length;
if (resData == null)
{
return;
}
lock (barData)
{
for (int i = 0; i < numBars && i < resData.Length; i++)
{
// Make the data between 0.0 and 1.0
barData[i] = resData[i] / 100.0f;
}
for (int i = 0; i < numBars && i < resData.Length; i++)
{
if (lineSpectrum.UseAverage)
{
// Scale the data because for some reason bass is always loud and treble is soft
barData[i] = barData[i] + highScaleAverage * (float)Math.Sqrt(i / (numBars + 0.0f)) * barData[i];
}
else
{
barData[i] = barData[i] + highScaleNotAverage * (float)Math.Sqrt(i / (numBars + 0.0f)) * barData[i];
}
}
}
}
}
Then, when retrieving barData from a different script, it's recommended to lock it first, since it is modified on a separate thread.
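For example, a consumer could copy the array out under the same lock (a sketch; the SoundCapture field names are the ones from the class above):
// Sketch of a consumer that reads the shared bar data safely.
public class VisualizerConsumer
{
    private readonly SoundCapture capture;

    public VisualizerConsumer(SoundCapture capture)
    {
        this.capture = capture;
    }

    public float[] SnapshotBars()
    {
        // barData is written on the capture thread, so take the same lock
        // (as ComputeData does) and copy it before using it for drawing.
        lock (capture.barData)
        {
            return (float[])capture.barData.Clone();
        }
    }
}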
I'm not sure where I got GetSpectrumPoints, since it doesn't seem to be in the GitHub repo, but here it is. Just paste this into that file and my code should work.
public float[] GetSpectrumPoints(float height, float[] fftBuffer)
{
SpectrumPointData[] dats = CalculateSpectrumPoints(height, fftBuffer);
float[] res = new float[dats.Length];
for (int i = 0; i < dats.Length; i++)
{
res[i] = (float)dats[i].Value;
}
return res;
}