I just started Kinect programming and I'm quite happy to have been able to display RGB and IR images at the same time.
Now, using the screenshot button, I am able to save each frame when I want (same procedure as in the SDK samples).
So now, if I want to continuously save those frames, how can I go about doing that?
I am new to C# and Kinect programming in general. Can anyone help me?
Thanks.
Just try:
private unsafe void saveFrame(Object reference)
{
    MultiSourceFrame mSF = (MultiSourceFrame)reference;

    using (var frame = mSF.DepthFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            using (Microsoft.Kinect.KinectBuffer depthBuffer = frame.LockImageBuffer())
            {
                if ((frame.FrameDescription.Width * frame.FrameDescription.Height) == (depthBuffer.Size / frame.FrameDescription.BytesPerPixel))
                {
                    ushort* frameData = (ushort*)depthBuffer.UnderlyingBuffer;
                    byte[] rawDataConverted = new byte[(int)(depthBuffer.Size / 2)];

                    for (int i = 0; i < (int)(depthBuffer.Size / 2); ++i)
                    {
                        ushort depth = frameData[i];
                        // Keep only depths in the reliable range; note that the cast
                        // truncates the 16-bit depth value to its low byte.
                        rawDataConverted[i] = (byte)(depth >= frame.DepthMinReliableDistance && depth <= frame.DepthMaxReliableDistance ? depth : 0);
                    }

                    // Note: an hh-mm-ss name collides at 30 fps; add milliseconds
                    // (e.g. "{0:hh-mm-ss-fff}") if you need unique files per frame.
                    String date = string.Format("{0:hh-mm-ss}", DateTime.Now);
                    String filePath = System.IO.Directory.GetCurrentDirectory() + "/test/" + date + ".raw";

                    File.WriteAllBytes(filePath, rawDataConverted);
                    rawDataConverted = null;
                }
            }
        }
    }
}
You can also take a look here:
Saving raw depth data
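If you want the frames saved continuously rather than per button click, one option is to call saveFrame from the reader's frame-arrived event. A minimal sketch, assuming the Kinect v2 MultiSourceFrameReader API and the saveFrame method above (the sensor/reader setup here is illustrative):

KinectSensor sensor = KinectSensor.GetDefault();
MultiSourceFrameReader reader =
    sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth | FrameSourceTypes.Infrared);
reader.MultiSourceFrameArrived += (s, e) =>
{
    // Every frame the sensor delivers is handed to saveFrame
    MultiSourceFrame msf = e.FrameReference.AcquireFrame();
    if (msf != null)
        saveFrame(msf);
};
sensor.Open();

At 30 fps this writes a lot of files, so make sure the file names are unique down to the millisecond.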
Related
I'm developing a UWP application (for Windows 10) that works with audio data. At startup it receives a buffer of samples as a float array, whose items range from -1f to 1f.
Earlier I used NAudio.dll 1.8.0, which gave me all the necessary functionality.
I worked with the WaveFileReader, waveBuffer.FloatBuffer, and WaveFileWriter classes.
However, when I finished this app and tried to build the Release version, I got this error:
ILT0042: Arrays of pointer types are not currently supported: 'System.Int32*[]'.
I tried to solve it:
https://forums.xamarin.com/discussion/73169/uwp-10-build-fail-arrays-of-pointer-types-error
There is advice to remove the reference to the .dll, but I need it.
I tried to install the same version of NAudio using Manage NuGet Packages, but WaveFileReader and WaveFileWriter are not available there.
In the NAudio developer's answer (How to store a .wav file in Windows 10 with NAudio) I read about using AudioGraph, but that only lets me build the float array of samples during real-time playback, whereas I need the full set of samples right after the audio file is loaded. Example of getting samples during recording or playback:
https://learn.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs
That's why I need help: how do I get a FloatBuffer for working with samples right after the audio file is loaded? For example, for building audio waveforms or for calculations when applying audio effects.
Thank you in advance.
I tried to use FileStream and BitConverter.ToSingle(); however, the result differed from NAudio's.
In other words, I'm still looking for a solution.
private float[] GetBufferArray()
{
    // Note: this reads the raw (encoded) mp3 bytes, including the header,
    // not decoded samples - which is why the result differs from NAudio's.
    string _path = ApplicationData.Current.LocalFolder.Path.ToString() + "/track_1.mp3";
    FileStream _stream = new FileStream(_path, FileMode.Open);
    BinaryReader _binaryReader = new BinaryReader(_stream);
    int _dataSize = _binaryReader.ReadInt32();
    byte[] _byteBuffer = _binaryReader.ReadBytes(_dataSize);
    int _sizeFloat = sizeof(float);
    float[] _floatBuffer = new float[_byteBuffer.Length / _sizeFloat];
    for (int i = 0, j = 0; i < _byteBuffer.Length - _sizeFloat; i += _sizeFloat, j++)
    {
        _floatBuffer[j] = BitConverter.ToSingle(_byteBuffer, i);
    }
    return _floatBuffer;
}
Another way to read samples from an audio file in UWP is the AudioGraph API. It works for all audio formats that Windows 10 supports.
Here is a sample:
namespace AudioGraphAPI_read_samples_from_file
{
    // App opens a file using FileOpenPicker and reads samples into an array of
    // floats using the AudioGraph API

    // Declare COM interface to access AudioBuffer
    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }

    public sealed partial class MainPage : Page
    {
        StorageFile mediaFile;
        AudioGraph audioGraph;
        AudioFileInputNode fileInputNode;
        AudioFrameOutputNode frameOutputNode;

        /// <summary>
        /// We are going to fill this array with audio samples.
        /// This app loads only one channel.
        /// </summary>
        float[] audioData;

        /// <summary>
        /// Current position in the audioData array for loading audio samples
        /// </summary>
        int audioDataCurrentPosition = 0;

        public MainPage()
        {
            this.InitializeComponent();
        }

        private async void Open_Button_Click(object sender, RoutedEventArgs e)
        {
            // We ask the user to pick an audio file
            FileOpenPicker filePicker = new FileOpenPicker();
            filePicker.SuggestedStartLocation = PickerLocationId.MusicLibrary;
            filePicker.FileTypeFilter.Add(".mp3");
            filePicker.FileTypeFilter.Add(".wav");
            filePicker.FileTypeFilter.Add(".wma");
            filePicker.FileTypeFilter.Add(".m4a");
            filePicker.ViewMode = PickerViewMode.Thumbnail;
            mediaFile = await filePicker.PickSingleFileAsync();
            if (mediaFile == null)
            {
                return;
            }

            // We load samples from the file
            await LoadAudioFromFile(mediaFile);

            // We wait 5 sec
            await Task.Delay(5000);

            if (audioData == null)
            {
                ShowMessage("Error loading samples");
                return;
            }

            // After LoadAudioFromFile has finished we can use audioData.
            // For example, we can find the maximum amplitude:
            float max = audioData[0];
            for (int i = 1; i < audioData.Length; i++)
                if (Math.Abs(audioData[i]) > Math.Abs(max))
                    max = audioData[i];
            ShowMessage("Maximum is " + max.ToString());
        }

        private async void ShowMessage(string Message)
        {
            var dialog = new MessageDialog(Message);
            await dialog.ShowAsync();
        }

        private async Task LoadAudioFromFile(StorageFile file)
        {
            // We initialize an instance of AudioGraph
            AudioGraphSettings settings =
                new AudioGraphSettings(
                    Windows.Media.Render.AudioRenderCategory.Media
                );
            CreateAudioGraphResult result1 = await AudioGraph.CreateAsync(settings);
            if (result1.Status != AudioGraphCreationStatus.Success)
            {
                ShowMessage("AudioGraph creation error: " + result1.Status.ToString());
            }
            audioGraph = result1.Graph;
            if (audioGraph == null)
                return;

            // We initialize the FileInputNode
            CreateAudioFileInputNodeResult result2 =
                await audioGraph.CreateFileInputNodeAsync(file);
            if (result2.Status != AudioFileNodeCreationStatus.Success)
            {
                ShowMessage("FileInputNode creation error: " + result2.Status.ToString());
            }
            fileInputNode = result2.FileInputNode;
            if (fileInputNode == null)
                return;

            // We read the audio file's encoding properties to pass them to the FrameOutputNode creator
            AudioEncodingProperties audioEncodingProperties = fileInputNode.EncodingProperties;

            // We initialize the FrameOutputNode and connect it to fileInputNode
            frameOutputNode = audioGraph.CreateFrameOutputNode(audioEncodingProperties);
            fileInputNode.AddOutgoingConnection(frameOutputNode);

            // We add a handler for reaching the end of the file
            fileInputNode.FileCompleted += FileInput_FileCompleted;
            // We add a handler which will transfer every audio frame into audioData
            audioGraph.QuantumStarted += AudioGraph_QuantumStarted;

            // We initialize audioData (ticks are 100 ns, hence the 1e-7 factor)
            int numOfSamples = (int)Math.Ceiling(
                (decimal)0.0000001
                * fileInputNode.Duration.Ticks
                * fileInputNode.EncodingProperties.SampleRate
            );
            audioData = new float[numOfSamples];
            audioDataCurrentPosition = 0;

            // We start the process which will read the audio file frame by frame
            // and will generate QuantumStarted events when a frame is in memory
            audioGraph.Start();
        }

        private void FileInput_FileCompleted(AudioFileInputNode sender, object args)
        {
            audioGraph.Stop();
        }

        private void AudioGraph_QuantumStarted(AudioGraph sender, object args)
        {
            AudioFrame frame = frameOutputNode.GetFrame();
            ProcessInputFrame(frame);
        }

        unsafe private void ProcessInputFrame(AudioFrame frame)
        {
            using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
            using (IMemoryBufferReference reference = buffer.CreateReference())
            {
                // We get data from the current buffer
                ((IMemoryBufferByteAccess)reference).GetBuffer(
                    out byte* dataInBytes,
                    out uint capacityInBytes
                );

                // We discard the first frame; it's full of zeros because of latency
                if (audioGraph.CompletedQuantumCount == 1) return;

                float* dataInFloat = (float*)dataInBytes;
                uint capacityInFloat = capacityInBytes / sizeof(float);
                // The number of channels defines the step between samples in the buffer
                uint step = fileInputNode.EncodingProperties.ChannelCount;
                // We transfer audio samples from the buffer into audioData
                for (uint i = 0; i < capacityInFloat; i += step)
                {
                    if (audioDataCurrentPosition < audioData.Length)
                    {
                        audioData[audioDataCurrentPosition] = dataInFloat[i];
                        audioDataCurrentPosition++;
                    }
                }
            }
        }
    }
}
Edited: it solves the problem because it reads samples from a file into a float array.
First popular way of getting AudioData from a wav file.
Thanks to PI user's answer (How to read the data in a wav file to an array), I've solved the problem of reading a wav file into a float array in a UWP project.
But the file's structure differs from the standard one (maybe the problem is specific to my project) when the file is recorded using AudioGraph. This leads to an unpredictable result: we receive the value 1263424842 instead of the expected 544501094 when reading the format id. After that, all following values are read incorrectly. I found the correct id by searching through the bytes sequentially. I realised that AudioGraph adds an extra chunk of data to the recorded wav file, but the record format is still PCM. This extra chunk looks like data about the file format, but it also contains empty values, empty bytes. I can't find any information about that; maybe somebody here knows? (For what it's worth, 1263424842 is the little-endian ASCII code for "JUNK", a standard RIFF padding chunk, while 544501094 is "fmt ".) I adapted PI's solution for my needs. That's what I've got:
using (FileStream fs = File.Open(filename, FileMode.Open))
{
    BinaryReader reader = new BinaryReader(fs);
    int chunkID = reader.ReadInt32();
    int fileSize = reader.ReadInt32();
    int riffType = reader.ReadInt32();
    int fmtID;
    long _position = reader.BaseStream.Position;
    // Scan forward byte by byte until the "fmt " chunk id (544501094) is found,
    // skipping whatever extra chunk AudioGraph inserted before it.
    while (_position != reader.BaseStream.Length - 1)
    {
        reader.BaseStream.Position = _position;
        int _fmtId = reader.ReadInt32();
        if (_fmtId == 544501094)
        {
            fmtID = _fmtId;
            break;
        }
        _position++;
    }
    int fmtSize = reader.ReadInt32();
    int fmtCode = reader.ReadInt16();
    int channels = reader.ReadInt16();
    int sampleRate = reader.ReadInt32();
    int byteRate = reader.ReadInt32();
    int fmtBlockAlign = reader.ReadInt16();
    int bitDepth = reader.ReadInt16();
    int fmtExtraSize;
    if (fmtSize == 18)
    {
        fmtExtraSize = reader.ReadInt16();
        reader.ReadBytes(fmtExtraSize);
    }
    int dataID = reader.ReadInt32();
    int dataSize = reader.ReadInt32();
    byte[] byteArray = reader.ReadBytes(dataSize);
    int bytesForSamp = bitDepth / 8;
    int samps = dataSize / bytesForSamp;
    float[] asFloat = null;
    switch (bitDepth)
    {
        case 16:
            Int16[] asInt16 = new Int16[samps];
            Buffer.BlockCopy(byteArray, 0, asInt16, 0, dataSize);
            IEnumerable<float> tempInt16 =
                from i in asInt16
                select i / (float)Int16.MaxValue;
            asFloat = tempInt16.ToArray();
            break;
        default:
            return false;
    }
    // For one-channel wav audio
    floatLeftBuffer.AddRange(asFloat);
}
Writing from the buffer back to a file uses the inverse algorithm. At the moment this is the only algorithm I have that works correctly with wav files and gets the audio data.
I used this article when working with AudioGraph: https://learn.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs. Note that you can set up the necessary record format with AudioEncodingQuality when recording from the microphone to a file.
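For reference, a minimal sketch of choosing the record format when writing a file with AudioGraph; the outFile variable and quality value here are illustrative, and the input-node wiring is omitted:

// Create a WAV file output node with a chosen encoding quality
MediaEncodingProfile profile = MediaEncodingProfile.CreateWav(AudioEncodingQuality.High);
CreateAudioFileOutputNodeResult result =
    await audioGraph.CreateFileOutputNodeAsync(outFile, profile);
if (result.Status == AudioFileNodeCreationStatus.Success)
{
    AudioFileOutputNode fileOutputNode = result.FileOutputNode;
    // connect an input node (e.g. a microphone device input) to fileOutputNode here
}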
Second way of getting AudioData, using NAudio from NuGet Packages.
I used the MediaFoundationReader class.
float[] floatBuffer;
using (MediaFoundationReader media = new MediaFoundationReader(path))
{
    int _byteBuffer32_length = (int)media.Length * 2;
    int _floatBuffer_length = _byteBuffer32_length / sizeof(float);
    IWaveProvider stream32 = new Wave16ToFloatProvider(media);
    WaveBuffer _waveBuffer = new WaveBuffer(_byteBuffer32_length);
    stream32.Read(_waveBuffer, 0, (int)_byteBuffer32_length);
    floatBuffer = new float[_floatBuffer_length];
    for (int i = 0; i < _floatBuffer_length; i++)
    {
        floatBuffer[i] = _waveBuffer.FloatBuffer[i];
    }
}
Comparing the two ways, I noticed:
Received sample values differ by about 1/1,000,000. I can't say which way is more precise (if you know, I'd be glad to hear);
The second way of getting AudioData works for MP3 files, too.
If you've found any mistakes or have comments about this, you're welcome.
Import statements:
using NAudio.Wave;
using NAudio.Wave.SampleProviders;
Inside a function:
AudioFileReader reader = new AudioFileReader(filename);
ISampleProvider isp = reader.ToSampleProvider();
// AudioFileReader converts to IEEE float, so Length is the byte length of the
// float stream: 4 bytes per sample. Read returns the count actually filled.
float[] buffer = new float[reader.Length / 4];
isp.Read(buffer, 0, buffer.Length);
The buffer array will hold 32-bit IEEE float samples.
This uses the NAudio NuGet package in Visual Studio.
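A small usage sketch (the file name is illustrative): since AudioFileReader converts everything to 32-bit float, its WaveFormat property also tells you the sample rate and channel count of the converted stream:

using (var reader = new AudioFileReader("track.mp3"))
{
    float[] buffer = new float[reader.Length / 4]; // 4 bytes per float sample
    int read = reader.Read(buffer, 0, buffer.Length); // samples actually filled
    Console.WriteLine(read + " samples at " + reader.WaveFormat.SampleRate
        + " Hz, " + reader.WaveFormat.Channels + " channel(s)");
}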
I am creating a program that can dump individual frames from a video feed for a drone competition. I am having a problem whereby the wireless video stream coming from the drone is flickering and jumping all over the place.
I am using this code to capture the video stream:
Capture _capture;
Emgu.CV.Image<Emgu.CV.Structure.Bgr, byte> frame;

void StartCamera()
{
    _capture = null;
    _capture = new Capture((int)nudCamera.Value);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FPS, FrameRate);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, FrameHeight);
    _capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, FrameWidth);
    webcam_frm_cnt = 0;
    cam = 1;
    Video_seek = 0;
    System.Windows.Forms.Application.Idle += ProcessFrame;
}

private void ProcessFrame(object sender, EventArgs arg)
{
    try
    {
        Framesno = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES);
        frame = _capture.QueryFrame();
        if (frame != null)
        {
            pictureBox1.Image = frame.ToBitmap();
            if (cam == 0)
            {
                Video_seek = (int)(Framesno);
                double time_index = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_MSEC);
                //Time_Label.Text = "Time: " + TimeSpan.FromMilliseconds(time_index).ToString().Substring(0, 8);
                double framenumber = _capture.GetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES);
                //Frame_lbl.Text = "Frame: " + framenumber.ToString();
                Thread.Sleep((int)(1000.0 / FrameRate));
            }
            if (cam == 1)
            {
                //Frame_lbl.Text = "Frame: " + (webcam_frm_cnt++).ToString();
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message.ToString());
    }
}
Is there a setting somewhere that I am missing?
This video stream flickering seems to happen in other programs too; however, it fixes itself when you fiddle with the video settings (NTSC/PAL).
Edit: So I need to be able to put the video stream into NTSC/M mode. Is this possible with EmguCV? If so, how do I do it?
Edit 2: All documents that I have read point towards it being completely and utterly impossible to change the video type, and there is no documentation on this topic. I would love to be proved wrong :)
Thanks in advance
This sourceforge link tells you how to set the video mode.
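For what it's worth, the only generic hook EmguCV exposes for this is SetCaptureProperty; whether CV_CAP_PROP_MODE maps to an NTSC/PAL switch is entirely up to the capture driver, so treat this as a hedged sketch rather than a confirmed fix:

// Try to change the capture mode through OpenCV's generic property API;
// the value is a driver-specific mode index and many drivers ignore it.
_capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_MODE, 0);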
I've spent the whole day looking for a tutorial or piece of code, "just" to play a simple sine wave for an "infinite" time. I know it sounds a little crazy.
But I want to be able to change the frequency of the tone over time, for instance to increase it.
Imagine that I want to play tone A and raise it to C in "+5" frequency steps every 3 ms (it's really just an example), with no gaps and without stopping the tone.
Is it possible? Can you help me?
Use the NAudio library for audio output.
Make a notes wave provider:
class NotesWaveProvider : WaveProvider32
{
    public NotesWaveProvider(Queue<Note> notes)
    {
        this.Notes = notes;
    }

    public readonly Queue<Note> Notes;
    int sample = 0;

    Note NextNote()
    {
        for (; ; )
        {
            if (Notes.Count == 0)
                return null;
            var note = Notes.Peek();
            if (sample < note.Duration.TotalSeconds * WaveFormat.SampleRate)
                return note;
            Notes.Dequeue();
            sample = 0;
        }
    }

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        for (int n = 0; n < sampleCount; n++)
        {
            var note = NextNote();
            if (note == null)
                buffer[n + offset] = 0;
            else
                buffer[n + offset] = (float)(note.Amplitude * Math.Sin((2 * Math.PI * sample * note.Frequency) / sampleRate));
            sample++;
        }
        return sampleCount;
    }
}

class Note
{
    public float Frequency;
    public float Amplitude = 1.0f;
    public TimeSpan Duration = TimeSpan.FromMilliseconds(50);
}
Start playing:
WaveOut waveOut;
this.Notes = new Queue<Note>(new[] { new Note { Frequency = 1000 }, new Note { Frequency = 1100 } });
var waveProvider = new NotesWaveProvider(Notes);
waveProvider.SetWaveFormat(16000, 1); // 16kHz mono
waveOut = new WaveOut();
waveOut.Init(waveProvider);
waveOut.Play();
Add new notes:
void Timer_Tick(...)
{
    if (Notes.Count < 10)
        Notes.Enqueue(new Note { Frequency = 900 });
}
P.S. This code is only the idea; for real use, add multi-threaded locking, etc.
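For example, a minimal thread-safety sketch (one option among several): replace the Queue<Note> with a ConcurrentQueue<Note> from System.Collections.Concurrent so the timer thread and the audio callback don't race:

readonly ConcurrentQueue<Note> Notes = new ConcurrentQueue<Note>();

Note NextNote()
{
    Note note;
    while (Notes.TryPeek(out note))
    {
        if (sample < note.Duration.TotalSeconds * WaveFormat.SampleRate)
            return note;
        Notes.TryDequeue(out note); // current note finished; drop it
        sample = 0;
    }
    return null; // queue empty
}

Timer_Tick can then call Notes.Enqueue(...) safely.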
Use NAudio and SineWaveProvider32: http://mark-dot-net.blogspot.com/2009/10/playback-of-sine-wave-in-naudio.html
private WaveOut waveOut;

private void button1_Click(object sender, EventArgs e)
{
    StartStopSineWave();
}

private void StartStopSineWave()
{
    if (waveOut == null)
    {
        var sineWaveProvider = new SineWaveProvider32();
        sineWaveProvider.SetWaveFormat(16000, 1); // 16kHz mono
        sineWaveProvider.Frequency = 1000;
        sineWaveProvider.Amplitude = 0.25f;
        waveOut = new WaveOut();
        waveOut.Init(sineWaveProvider);
        waveOut.Play();
    }
    else
    {
        waveOut.Stop();
        waveOut.Dispose();
        waveOut = null;
    }
}
At the moment I'm trying to figure out how to play a wave file in C# by filling up the secondary buffer with data from the wave file through threading and then playing it.
Any help or sample code I can use?
Thanks.
Sample code being used:
public delegate void PullAudio(short[] buffer, int length);

public class SoundPlayer : IDisposable
{
    private Device soundDevice;
    private SecondaryBuffer soundBuffer;
    private int samplesPerUpdate;
    private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
    private Thread thread;
    private PullAudio pullAudio;
    private short channels;
    private bool halted;
    private bool running;

    public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
    {
        this.channels = channels;
        this.pullAudio = pullAudio;
        this.soundDevice = new Device();
        this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);

        // Set up our wave format to 44,100Hz, with 16 bit resolution
        WaveFormat wf = new WaveFormat();
        wf.FormatTag = WaveFormatTag.Pcm;
        wf.SamplesPerSecond = 44100;
        wf.BitsPerSample = 16;
        wf.Channels = channels;
        wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
        wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;

        this.samplesPerUpdate = 512;

        // Create a buffer with 2 seconds of sample data
        BufferDescription bufferDesc = new BufferDescription(wf);
        bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
        bufferDesc.ControlPositionNotify = true;
        bufferDesc.GlobalFocus = true;

        this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);

        Notify notify = new Notify(this.soundBuffer);

        fillEvent[0] = new AutoResetEvent(false);
        fillEvent[1] = new AutoResetEvent(false);

        // Set up two notification events, one at halfway, and one at the end of the buffer
        BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
        posNotify[0] = new BufferPositionNotify();
        posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
        posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
        posNotify[1] = new BufferPositionNotify();
        posNotify[1].Offset = bufferDesc.BufferBytes - 1;
        posNotify[1].EventNotifyHandle = fillEvent[1].Handle;

        notify.SetNotificationPositions(posNotify);

        this.thread = new Thread(new ThreadStart(SoundPlayback));
        this.thread.Priority = ThreadPriority.Highest;
        this.Pause();
        this.running = true;
        this.thread.Start();
    }

    public void Pause()
    {
        if (this.halted) return;
        this.halted = true;
        Monitor.Enter(this.thread);
    }

    public void Resume()
    {
        if (!this.halted) return;
        this.halted = false;
        Monitor.Pulse(this.thread);
        Monitor.Exit(this.thread);
    }

    private void SoundPlayback()
    {
        lock (this.thread)
        {
            if (!this.running) return;

            // Set up the initial sound buffer to be the full length
            int bufferLength = this.samplesPerUpdate * 2 * this.channels;
            short[] soundData = new short[bufferLength];

            // Prime it with the first x seconds of data
            this.pullAudio(soundData, soundData.Length);
            this.soundBuffer.Write(0, soundData, LockFlag.None);

            // Start it playing
            this.soundBuffer.Play(0, BufferPlayFlags.Looping);

            int lastWritten = 0;
            while (this.running)
            {
                if (this.halted)
                {
                    Monitor.Pulse(this.thread);
                    Monitor.Wait(this.thread);
                }

                // Wait on one of the notification events
                WaitHandle.WaitAny(this.fillEvent, 3, true);

                // Get the current play position (divide by two because we are using 16 bit samples)
                int tmp = this.soundBuffer.PlayPosition / 2;

                // Generate new sounds from lastWritten to tmp in the sound buffer
                if (tmp == lastWritten)
                {
                    continue;
                }
                else
                {
                    soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
                }

                this.pullAudio(soundData, soundData.Length);

                // Write in the generated data
                soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);

                // Save the position we were at
                lastWritten = tmp;
            }
        }
    }

    public void Dispose()
    {
        this.running = false;
        this.Resume();

        if (this.soundBuffer != null)
        {
            this.soundBuffer.Dispose();
        }
        if (this.soundDevice != null)
        {
            this.soundDevice.Dispose();
        }
    }
}
The concept is the same one I'm using, but I can't manage to get a set of wave byte[] data to play.
I have not done this, but the first place I would look is XNA.
I know the C# Managed DirectX project was ditched in favor of XNA, and I have found it to be good for graphics; I prefer using it to DirectX.
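In XNA the closest analogue to a DirectSound secondary buffer is DynamicSoundEffectInstance, which pulls data from you as the play cursor advances. A minimal sketch; FillBuffer is a hypothetical stand-in for your own wave-file reader:

// BufferNeeded fires when the instance runs low, much like the half/end
// position notifications on a DirectSound secondary buffer.
var dynamicSound = new DynamicSoundEffectInstance(44100, AudioChannels.Stereo);
byte[] chunk = new byte[dynamicSound.GetSampleSizeInBytes(TimeSpan.FromMilliseconds(100))];
dynamicSound.BufferNeeded += (s, e) =>
{
    FillBuffer(chunk);                 // hypothetical: read the next PCM bytes
    dynamicSound.SubmitBuffer(chunk);
};
dynamicSound.Play();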
What is the reason you decided not to just use SoundPlayer, as per this MSDN entry below?
private SoundPlayer Player = new SoundPlayer();

private void loadSoundAsync()
{
    // Note: You may need to change the location specified based on
    // the location of the sound to be played.
    this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
    this.Player.LoadCompleted += Player_LoadCompleted; // wire up the completion handler
    this.Player.LoadAsync();
}

private void Player_LoadCompleted(
    object sender,
    System.ComponentModel.AsyncCompletedEventArgs e)
{
    if (this.Player.IsLoadCompleted)
    {
        this.Player.PlaySync();
    }
}
Usually I just load them all up in a thread or an async delegate, then Play or PlaySync them when needed.
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
I would like to create a sound visualisation system using C# and the .NET Framework.
It could look like the one in the Winamp application.
Maybe there is a free library or some interesting articles that describe how to do it?
Example:
http://img44.imageshack.us/img44/9982/examplel.png
You can try these links:
OpenVP (a free and open-source platform for developing music visualizations, written in C#); see the OpenVP Screenshots.
Sound visualizer in C#
Play and Visualize WAV Files using Managed Direct Sound
Bye.
Here's a script that computes the FFT of any sound played on the computer via WASAPI loopback. It uses CSCore and its WinformsVisualization example:
using CSCore;
using CSCore.SoundIn;
using CSCore.Codecs.WAV;
using WinformsVisualization.Visualization;
using CSCore.DSP;
using CSCore.Streams;
using System;

public class SoundCapture
{
    public int numBars = 30;
    public int minFreq = 5;
    public int maxFreq = 4500;
    public int barSpacing = 0;
    public bool logScale = true;
    public bool isAverage = false;
    public float highScaleAverage = 2.0f;
    public float highScaleNotAverage = 3.0f;

    LineSpectrum lineSpectrum;
    WasapiCapture capture;
    WaveWriter writer;
    FftSize fftSize;
    float[] fftBuffer;
    SingleBlockNotificationStream notificationSource;
    BasicSpectrumProvider spectrumProvider;
    IWaveSource finalSource;

    public SoundCapture()
    {
        // This uses the WASAPI loopback to get any sound data played by the computer
        capture = new WasapiLoopbackCapture();
        capture.Initialize();

        // Get our capture as a source
        IWaveSource source = new SoundInSource(capture);

        // From https://github.com/filoe/cscore/blob/master/Samples/WinformsVisualization/Form1.cs
        // This is the typical size; you can change this for higher detail as needed
        fftSize = FftSize.Fft4096;

        // Actual fft data
        fftBuffer = new float[(int)fftSize];

        // These are the actual classes that give you spectrum data.
        // The specific vars of lineSpectrum here aren't that important because they can be changed by the user.
        spectrumProvider = new BasicSpectrumProvider(capture.WaveFormat.Channels,
            capture.WaveFormat.SampleRate, fftSize);
        lineSpectrum = new LineSpectrum(fftSize)
        {
            SpectrumProvider = spectrumProvider,
            UseAverage = true,
            BarCount = numBars,
            BarSpacing = 2,
            IsXLogScale = false,
            ScalingStrategy = ScalingStrategy.Linear
        };

        // Tells us when data is available to send to our spectrum
        // (assign the field here; the original declared a local that shadowed it)
        notificationSource = new SingleBlockNotificationStream(source.ToSampleSource());
        notificationSource.SingleBlockRead += NotificationSource_SingleBlockRead;

        // We use this to request data so it actually flows through (figuring this out took forever...)
        finalSource = notificationSource.ToWaveSource();
        capture.DataAvailable += Capture_DataAvailable;
        capture.Start();
    }

    private void Capture_DataAvailable(object sender, DataAvailableEventArgs e)
    {
        finalSource.Read(e.Data, e.Offset, e.ByteCount);
    }

    private void NotificationSource_SingleBlockRead(object sender, SingleBlockReadEventArgs e)
    {
        spectrumProvider.Add(e.Left, e.Right);
    }

    ~SoundCapture()
    {
        capture.Stop();
        capture.Dispose();
    }

    public float[] barData = new float[20];

    public float[] GetFFtData()
    {
        lock (barData)
        {
            lineSpectrum.BarCount = numBars;
            if (numBars != barData.Length)
            {
                barData = new float[numBars];
            }
        }
        if (spectrumProvider.IsNewDataAvailable)
        {
            lineSpectrum.MinimumFrequency = minFreq;
            lineSpectrum.MaximumFrequency = maxFreq;
            lineSpectrum.IsXLogScale = logScale;
            lineSpectrum.BarSpacing = barSpacing;
            lineSpectrum.SpectrumProvider.GetFftData(fftBuffer, this);
            return lineSpectrum.GetSpectrumPoints(100.0f, fftBuffer);
        }
        else
        {
            return null;
        }
    }

    public void ComputeData()
    {
        float[] resData = GetFFtData();
        int numBars = barData.Length;
        if (resData == null)
        {
            return;
        }
        lock (barData)
        {
            for (int i = 0; i < numBars && i < resData.Length; i++)
            {
                // Make the data between 0.0 and 1.0
                barData[i] = resData[i] / 100.0f;
            }
            for (int i = 0; i < numBars && i < resData.Length; i++)
            {
                if (lineSpectrum.UseAverage)
                {
                    // Scale the data because for some reason bass is always loud and treble is soft
                    barData[i] = barData[i] + highScaleAverage * (float)Math.Sqrt(i / (numBars + 0.0f)) * barData[i];
                }
                else
                {
                    barData[i] = barData[i] + highScaleNotAverage * (float)Math.Sqrt(i / (numBars + 0.0f)) * barData[i];
                }
            }
        }
    }
}
Then, when retrieving barData from a different script, it's recommended to lock it first, since it is modified on a separate thread.
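A hedged consumer-side sketch (capture is an instance of the SoundCapture class above):

float[] snapshot;
lock (capture.barData)
{
    // Copy under the lock; ComputeData mutates barData on the capture thread
    snapshot = (float[])capture.barData.Clone();
}
// draw from snapshot without holding the lock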
I'm not sure where I got GetSpectrumPoints, since it doesn't seem to be in the GitHub repo, but here it is. Just paste this into that file and my code should work.
public float[] GetSpectrumPoints(float height, float[] fftBuffer)
{
    SpectrumPointData[] dats = CalculateSpectrumPoints(height, fftBuffer);
    float[] res = new float[dats.Length];
    for (int i = 0; i < dats.Length; i++)
    {
        res[i] = (float)dats[i].Value;
    }
    return res;
}