I am creating my own software synthesizer in C#, using NAudio, and am starting off by generating some simple sine waves and playing them. I would like to do this myself rather than use NAudio's built-in SignalGenerator.
When I pass my custom sine-wave object (which implements ISampleProvider) to WaveOutEvent.Init, a NullReferenceException is thrown. I created an instance of the SineWave class before calling Init, but I still get that NullReferenceException.
I have tested whether both my WaveOutEvent (wo) and SineWave (sine) objects are null, with if statements like this:
if (sineWave != null)
{
Console.WriteLine("sine is not null");
}
if (waveOut != null)
{
Console.WriteLine("wo is not null");
}
Both of these checks pass, and both "sine is not null" and "wo is not null" are written to the console.
namespace AddSynth
{
public class SineWave : ISampleProvider
{
public WaveFormat WaveFormat { get; }
int frequency = 440;
int sampleRate = 44100;
double amp = 0.25;
int phase = 0;
public int Read(float[] buffer, int offset, int count)
{
int sampleCount = sampleRate / frequency;
for (int i = 0; i < buffer.Length; i++)
{
buffer[i] = (float)(amp * Math.Sin(2 * Math.PI * frequency * i + phase));
}
return sampleCount;
}
}
public class Playback
{
static void Main()
{
Playback playBack = new Playback();
playBack.playAudio();
}
public void playAudio()
{
WaveOutEvent waveOut = new WaveOutEvent();
SineWave sineWave = new SineWave();
if (sineWave != null)
{
Console.WriteLine("sine is not null");
}
if (waveOut != null)
{
Console.WriteLine("wo is not null");
}
waveOut.Init(sineWave.ToWaveProvider());
waveOut.Play();
Console.ReadKey();
}
}
}
I expect a sine wave to be played through my computer's audio. I hope I've added enough info.
EDIT: Just realised that I probably should have added the stack trace as well, so here it is, if it helps:
at NAudio.Wave.SampleProviders.SampleToWaveProvider..ctor(ISampleProvider source)
at NAudio.Wave.WaveExtensionMethods.ToWaveProvider(ISampleProvider sampleProvider)
at AddSynth.Playback.playAudio() in C:\Users\User1\source\repos\AddSynth\AddSynth\Program.cs:line 44
at AddSynth.Playback.Main() in C:\Users\User1\source\repos\AddSynth\AddSynth\Program.cs:line 29
As Nkosi said, the problem was that I never assigned the WaveFormat property.
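For reference, here is a minimal sketch of the corrected class (my own, untested). The SampleToWaveProvider constructor named in the stack trace reads source.WaveFormat, which was never assigned, hence the exception; the sketch assigns it up front with NAudio's WaveFormat.CreateIeeeFloatWaveFormat helper. The Read method is also adjusted to fill exactly count samples, keep the phase across calls, and return count.
public class SineWave : ISampleProvider
{
    // Assigning the format is what removes the NullReferenceException.
    public WaveFormat WaveFormat { get; private set; }

    int frequency = 440;
    double amp = 0.25;
    int sample;

    public SineWave()
    {
        WaveFormat = NAudio.Wave.WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);
    }

    public int Read(float[] buffer, int offset, int count)
    {
        for (int i = 0; i < count; i++)
        {
            buffer[offset + i] = (float)(amp * Math.Sin(2 * Math.PI * frequency * sample / WaveFormat.SampleRate));
            sample++;
        }
        return count;
    }
}
With that change, waveOut.Init(sineWave.ToWaveProvider()) no longer throws.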
I am following this example, but it is not that useful:
https://github.com/xamarin/urho-samples/tree/master/FeatureSamples/Core/29_SoundSynthesis
Anyhow, I am getting a run-time error that says: "The application is not configured yet."
But I made an application object.
The error happens at node = new Node();
What am I missing?
This is my class:
using System;
using Urho.Audio;
using Urho;
using Urho.Resources;
using Urho.Gui;
using System.Diagnostics;
using System.Globalization;
namespace Brain_Entrainment
{
public class IsochronicTones : Urho.Application
{
/// Scene node for the sound component.
Node node;
/// Sound stream that we update.
BufferedSoundStream soundStream;
public double Frequency { get; set; }
public double Beat { get; set; }
public double Amplitude { get; set; }
public float Bufferlength { get; set; }
const int numBuffers = 3;
public IsochronicTones(ApplicationOptions AppOption) : base(AppOption)
{
Amplitude = 1;
Frequency = 100;
Beat = 0;
Bufferlength = Int32.MaxValue;
}
public void play()
{
Start();
}
protected override void OnUpdate(float timeStep)
{
UpdateSound();
base.OnUpdate(timeStep);
}
protected override void Start()
{
base.Start();
CreateSound();
}
void CreateSound()
{
// Sound source needs a node so that it is considered enabled
node = new Node();
SoundSource source = node.CreateComponent<SoundSource>();
soundStream = new BufferedSoundStream();
// Set format: 44100 Hz, sixteen bit, mono
soundStream.SetFormat(44100, true, false);
// Start playback. We don't have data in the stream yet, but the
// SoundSource will wait until there is data,
// as the stream is by default in the "don't stop at end" mode
source.Play(soundStream);
}
void UpdateSound()
{
// Try to keep 1/10 second of sound in the buffer to avoid both
// dropouts and unnecessary latency
float targetLength = 1.0f / 10.0f;
float requiredLength = targetLength - Bufferlength; // soundStream.BufferLength;
float w = 0;
if (requiredLength < 0.0f)
return;
uint numSamples = (uint)(soundStream.Frequency * requiredLength);
if (numSamples == 0)
return;
// Allocate a new buffer and fill it with a simple two-oscillator algorithm.
// The sound is over-amplified (distorted), clamped to the 16-bit range,
// and finally lowpass-filtered according to the coefficient
var newData = new short[numSamples];
for (int i = 0; i < numSamples; ++i)
{
float newValue = 0;
if (Beat == 0)
{
newValue = (float)(Amplitude * Math.Sin(Math.PI * Frequency * i / 44100D));
}
else
{
w = (float)(1D * Math.Sin(i * Math.PI * Beat / 44100D));
if (w < 0)
{
w = 0;
}
newValue = (float)(Amplitude * Math.Sin(Math.PI * Frequency * i / 44100D));
}
//accumulator = MathHelper.Lerp(accumulator, newValue, filter);
newData[i] = (short)newValue;
}
// Queue buffer to the stream for playback
soundStream.AddData(newData, 0, newData.Length);
}
}
}
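One possible cause, offered as an assumption rather than something confirmed in this thread: play() calls Start() directly, so the engine never goes through Urho's normal initialization, and creating a Node before the framework is configured then fails with exactly this message. In UrhoSharp the application is normally launched through Application.Run(), which configures the engine and then calls Start() for you. A minimal sketch of that launch sequence (appOptions is assumed to be a valid ApplicationOptions instance):
// Launch through Run() instead of calling play()/Start() directly;
// Run() configures the engine, then calls Start(), which runs CreateSound().
var app = new IsochronicTones(appOptions);
app.Run();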
I am working on a click generator using NAudio. As a test, I created an ISampleProvider class to read/play from an audio file. The ISampleProvider reads in a PCM (32-bit IEEE float) wav file and then plays it back using the WaveOut player. The WaveOut plays only about 1/4 of the audio passed via the ISampleProvider Read() method. This results in choppy playback. The ISampleProvider Read() method requests the correct amount of data at the correct time intervals, but the WaveOut only plays the first 25% of the samples provided back through the interface. Any idea how to address this, or am I using the wrong classes to build a click track (a BufferedWaveProvider might also work, but it only buffers 5 seconds of audio)?
public void TestSampleProvider()
{
ISampleProvider mySamples = new MySamples();
var _waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()) {DeviceNumber = 0};
_waveOut.Init(mySamples);
_waveOut.Play();
Console.ReadLine();
}
public class MySamples : ISampleProvider
{
private float[] samplesFloats;
private int position;
private WaveFileReader clicksound;
public int Read(float[] buffer, int offset, int count)
{
var copyCount = count;
if (position + copyCount > samplesFloats.Count())
{
copyCount = samplesFloats.Count() - position;
}
Console.WriteLine("samplesFloats {0} position {1} copyCount {2} offset {3} time {4}", samplesFloats.Count(), position, copyCount, offset, DateTime.Now.Millisecond);
Buffer.BlockCopy(samplesFloats, position, buffer, offset, copyCount);
position += copyCount;
return copyCount;
}
public MySamples()
{
clicksound = new WaveFileReader(@"C:\temp\sample.wav");
WaveFormat = clicksound.WaveFormat;
samplesFloats = new float[clicksound.SampleCount];
for (int i = 0; i < clicksound.SampleCount; i++)
{
samplesFloats[i] = clicksound.ReadNextSampleFrame()[0]; // it's a mono file
}
}
public WaveFormat WaveFormat { get; private set; }
}
I think there may be an issue with WaveOut using the ISampleProvider, so I used the IWaveProvider interface to do the same thing. In fact, here's a bare-bones class for sending a never-ending click to the WaveOut. This might run into memory issues if you let it run a long time, but for pop songs it should be fine. Also, this will only work for 32-bit files (note the *4 on the byte buffer).
public class MyClick : IWaveProvider
{
private int position;
private WaveFileReader clicksound;
private byte[] samplebuff;
MemoryStream _byteStream = new System.IO.MemoryStream();
public MyClick(float bpm=120)
{
clicksound = new WaveFileReader(@"click_sample.wav");
var bpmsampleslen = (60 / bpm) * clicksound.WaveFormat.SampleRate;
samplebuff = new byte[(int) bpmsampleslen*4];
clicksound.Read(samplebuff, 0,(int) clicksound.Length);
_byteStream.Write(samplebuff, 0, samplebuff.Length);
_byteStream.Position = 0;
WaveFormat = clicksound.WaveFormat;
}
public int Read(byte[] buffer, int offset, int count)
{
//we reached the end of the stream add another one to the end and keep playing
if (count + _byteStream.Position > _byteStream.Length)
{
var holdpos = _byteStream.Position;
_byteStream.Write(samplebuff, 0, samplebuff.Length);
_byteStream.Position = holdpos;
}
return _byteStream.Read(buffer, offset, count);
}
public WaveFormat WaveFormat { get; private set; }
}
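For reference, a minimal usage sketch (my own addition, assuming a suitable 32-bit float click_sample.wav exists where the constructor expects it):
var clickOut = new WaveOut();
clickOut.Init(new MyClick(120));   // 120 BPM click track
clickOut.Play();
Console.ReadLine();                // let it run until Enter is pressed
clickOut.Stop();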
(Newbie question)
NAudio allows you to start playing an MP3 file from a given position (by converting it from ms into bytes using WaveFormat.AverageBytesPerSecond), but is it possible to make it stop playing exactly at another given position (in ms)? Do I have to somehow manipulate the wave stream, or are there easier ways?
There is a solution that starts playback together with a Timer and stops playback when the timer ticks, but it doesn't produce reliable results at all.
I'd create a custom IWaveProvider that only returns a maximum of a specified number of bytes from Read. Then reposition your Mp3FileReader to the desired start position, and pass it in to the custom trimming wave provider.
Here's some completely untested example code to give you an idea.
class TrimWaveProvider : IWaveProvider
{
private readonly IWaveProvider source;
private int bytesRead;
private readonly int maxBytesToRead;
public TrimWaveProvider(IWaveProvider source, int maxBytesToRead)
{
this.source = source;
this.maxBytesToRead = maxBytesToRead;
}
public WaveFormat WaveFormat { get { return source.WaveFormat; } }
public int Read(byte[] buffer, int offset, int bytesToRead)
{
int bytesToReadThisTime = Math.Min(bytesToRead, maxBytesToRead - bytesRead);
int bytesReadThisTime = source.Read(buffer, offset, bytesToReadThisTime);
bytesRead += bytesReadThisTime;
return bytesReadThisTime;
}
}
// and call it like this...
var reader = new Mp3FileReader("myfile.mp3");
reader.Position = reader.WaveFormat.AverageBytesPerSecond * 3; // start 3 seconds in
// read 5 seconds
var trimmer = new TrimWaveProvider(reader, reader.WaveFormat.AverageBytesPerSecond * 5);
WaveOut waveOut = new WaveOut();
waveOut.Init(trimmer);
waveOut.Play();
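If you want to give both positions in milliseconds, as the question asks, the same byte arithmetic works. The snippet below is my own addition; it rounds to whole blocks with WaveFormat.BlockAlign so the position never lands in the middle of a sample frame.
int startMs = 3000;      // hypothetical start position
int durationMs = 5000;   // hypothetical length to play
var format = reader.WaveFormat;
long startBytes = (long)(startMs / 1000.0 * format.AverageBytesPerSecond);
int lengthBytes = (int)(durationMs / 1000.0 * format.AverageBytesPerSecond);
reader.Position = startBytes - (startBytes % format.BlockAlign);
var msTrimmer = new TrimWaveProvider(reader, lengthBytes - (lengthBytes % format.BlockAlign));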
This is my code:
void SpeakThreadFunction()
{
while (SpeakThreadState)
{
Speaker.Play();
Thread.Sleep(100);
Speaker.Stop();
Thread.Sleep(Interval);
}
}
//Speaker is WaveOut
And Speaker.Init was passed a SineWaveProvider32:
public class SineWaveProvider32 : WaveProvider32
{
int sample;
public SineWaveProvider32()
{
Frequency = 1000;
Amplitude = 0.25f;
}
public float Frequency { get; set; }
public float Amplitude { get; set; }
public override int Read(float[] buffer, int offset, int sampleCount)
{
int sampleRate = WaveFormat.SampleRate;
for (int n = 0; n < sampleCount; n++)
{
buffer[n + offset] = (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
sample++;
if (sample >= sampleRate) sample = 0;
}
return sampleCount;
}
}
After 10-15 iterations of my loop, the sound stops :(. What do I need to do to make my sound repeat the whole time?
You won't have much success trying to continually start and stop the soundcard like that. The default WaveOut buffer sizes in NAudio are 100ms long. It would be much better to open the soundcard once, and then send it portions of sine wave, followed by zeroes, to create the sound you want.
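For example, here is a minimal sketch of that approach (my own, untested), built on the same WaveProvider32 base class as the SineWaveProvider32 above: the provider alternates between a short sine burst and silence, so the WaveOut is opened and started once and then left running. The class and property names (PulsedSineProvider32, ToneMilliseconds, GapMilliseconds) are illustrative, not part of NAudio.
public class PulsedSineProvider32 : WaveProvider32
{
    int sample;

    public PulsedSineProvider32()
    {
        Frequency = 1000;
        Amplitude = 0.25f;
        ToneMilliseconds = 100;   // length of each beep
        GapMilliseconds = 400;    // silence between beeps (the old Interval)
    }

    public float Frequency { get; set; }
    public float Amplitude { get; set; }
    public int ToneMilliseconds { get; set; }
    public int GapMilliseconds { get; set; }

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        int toneSamples = sampleRate * ToneMilliseconds / 1000;
        int periodSamples = toneSamples + sampleRate * GapMilliseconds / 1000;
        for (int n = 0; n < sampleCount; n++)
        {
            // Sine during the beep, zeroes during the gap; the device keeps running.
            buffer[n + offset] = sample < toneSamples
                ? (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate))
                : 0f;
            sample++;
            if (sample >= periodSamples) sample = 0;
        }
        return sampleCount;
    }
}
With this, the SpeakThreadFunction loop is no longer needed: call Speaker.Init(new PulsedSineProvider32()) and Speaker.Play() once.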
I have a background sound playing in an endless loop. I want it to fade out when the user presses a button.
I tried the following:
A DirectSoundOut is initialized with the WaveStream.
A Timer changes the volume of the WaveChannel32.
The Problem:
Changing the volume while the sound is playing produces noises.
Does anyone know a better solution?
To perform a smooth fade-in or fade-out, you need to do so at the sample level, multiplying each sample by a gradually increasing or decreasing number. You are using WaveChannel32, so your audio has already been converted to 32-bit float. I would then create another IWaveProvider implementer responsible for the fade-in and fade-out. Normally it would pass samples through unchanged, but in the Read method, if you are in a fade-in or fade-out, it would scale each sample (or pair of samples, if stereo).
The ISampleProvider interface in NAudio 1.5 was designed to make this type of thing much easier, as it allows you to deal with samples directly as 32-bit floats, rather than implementing IWaveProvider, which requires you to convert from a byte[] to a float[]. Here's a sample provider for fade-in and fade-out I just made, which I will include in the next NAudio release and hopefully blog about soon. Just call BeginFadeIn or BeginFadeOut with the appropriate fade duration.
public class FadeInOutSampleProvider : ISampleProvider
{
enum FadeState
{
Silence,
FadingIn,
FullVolume,
FadingOut,
}
private readonly object lockObject = new object();
private readonly ISampleProvider source;
private int fadeSamplePosition;
private int fadeSampleCount;
private FadeState fadeState;
public FadeInOutSampleProvider(ISampleProvider source)
{
this.source = source;
this.fadeState = FadeState.FullVolume;
}
public void BeginFadeIn(double fadeDurationInMilliseconds)
{
lock (lockObject)
{
fadeSamplePosition = 0;
fadeSampleCount = (int)((fadeDurationInMilliseconds * source.WaveFormat.SampleRate) / 1000);
fadeState = FadeState.FadingIn;
}
}
public void BeginFadeOut(double fadeDurationInMilliseconds)
{
lock (lockObject)
{
fadeSamplePosition = 0;
fadeSampleCount = (int)((fadeDurationInMilliseconds * source.WaveFormat.SampleRate) / 1000);
fadeState = FadeState.FadingOut;
}
}
public int Read(float[] buffer, int offset, int count)
{
int sourceSamplesRead = source.Read(buffer, offset, count);
lock (lockObject)
{
if (fadeState == FadeState.FadingIn)
{
FadeIn(buffer, offset, sourceSamplesRead);
}
else if (fadeState == FadeState.FadingOut)
{
FadeOut(buffer, offset, sourceSamplesRead);
}
else if (fadeState == FadeState.Silence)
{
ClearBuffer(buffer, offset, count);
}
}
return sourceSamplesRead;
}
private static void ClearBuffer(float[] buffer, int offset, int count)
{
for (int n = 0; n < count; n++)
{
buffer[n + offset] = 0;
}
}
private void FadeOut(float[] buffer, int offset, int sourceSamplesRead)
{
int sample = 0;
while (sample < sourceSamplesRead)
{
float multiplier = 1.0f - (fadeSamplePosition / (float)fadeSampleCount);
for (int ch = 0; ch < source.WaveFormat.Channels; ch++)
{
buffer[offset + sample++] *= multiplier;
}
fadeSamplePosition++;
if (fadeSamplePosition > fadeSampleCount)
{
fadeState = FadeState.Silence;
// clear out the end
ClearBuffer(buffer, sample + offset, sourceSamplesRead - sample);
break;
}
}
}
private void FadeIn(float[] buffer, int offset, int sourceSamplesRead)
{
int sample = 0;
while (sample < sourceSamplesRead)
{
float multiplier = (fadeSamplePosition / (float)fadeSampleCount);
for (int ch = 0; ch < source.WaveFormat.Channels; ch++)
{
buffer[offset + sample++] *= multiplier;
}
fadeSamplePosition++;
if (fadeSamplePosition > fadeSampleCount)
{
fadeState = FadeState.FullVolume;
// no need to multiply any more
break;
}
}
}
public WaveFormat WaveFormat
{
get { return source.WaveFormat; }
}
}
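A usage sketch (my own addition; backgroundSampleProvider stands in for whatever ISampleProvider your looping background sound already goes through, and the wrapping works the same way for DirectSoundOut): wrap the source once, and trigger the fade from the button handler.
var fader = new FadeInOutSampleProvider(backgroundSampleProvider);
var waveOut = new WaveOut();
waveOut.Init(new SampleToWaveProvider(fader));   // back to an IWaveProvider for playback
waveOut.Play();
// ... later, in the button's click handler:
fader.BeginFadeOut(2000);   // fade out over two seconds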
Or you can just do this:
while (waveOut.Volume > 0.1f)
{
waveOut.Volume -= 0.1f;
System.Threading.Thread.Sleep(10);
}
^ An example of a fade-out. I use it in my own programs, and it works fine.