I am making use of NAudio in a C# program I've written.
I want to apply a linear fade at a certain position within a piece of audio I'm working with.
In the NAudio example project is a file called FadeInOutSampleProvider.cs (Cached example) which has BeginFadeIn(double fadeDurationInMilliseconds) and BeginFadeOut(double fadeDurationInMilliseconds) methods.
I've reworked these methods to
BeginFadeIn(double fadeDurationInMilliseconds, double beginFadeAtMilliseconds)
and
BeginFadeOut(double fadeDurationInMilliseconds, double beginFadeAtMilliseconds)
However I'm having difficulty implementing the interval logic for these changes to work.
My first thought was to introduce code in the Read() method. When called, it would divide the number of samples being requested by the sample rate, which would give the number of seconds of audio requested.
I could then keep track of this and, when the correct amount of audio data had been read, allow the fade to be applied.
However, the numbers in my calculations aren't what I would expect to see. I'm sure there's a better way to approach this.
Any help would be very much appreciated.
It sounds like you are working along the right lines. As you say, the amount of audio being requested can be calculated by dividing the number of samples requested by the sample rate. But you must also take channels into account. In a stereo file there are twice as many samples per second as the sample rate.
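For example, for a stereo 44.1 kHz source, a rough sketch of the arithmetic (samplesRequested being the count passed to Read):

// total samples per second = SampleRate * Channels = 44100 * 2 = 88,200
double secondsRequested = samplesRequested / (double)(source.WaveFormat.SampleRate * source.WaveFormat.Channels);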
I've put a very basic code sample of a delayed fade-out in a GitHub gist here. There are improvements that could be made, such as allowing the fade-out to begin part-way through the audio returned from a call to Read, but hopefully this gives you a rough idea of how it can be achieved with a few small modifications to FadeInOutSampleProvider.
The main changes are an extra parameter to BeginFadeOut, which sets two new variables (fadeOutDelaySamples and fadeOutDelayPosition):
/// <summary>
/// Requests that a fade-out begins after the specified delay
/// </summary>
/// <param name="fadeAfterMilliseconds">Delay before the fade-out starts, in milliseconds</param>
/// <param name="fadeDurationInMilliseconds">Duration of fade in milliseconds</param>
public void BeginFadeOut(double fadeAfterMilliseconds, double fadeDurationInMilliseconds)
{
    lock (lockObject)
    {
        fadeSamplePosition = 0;
        fadeSampleCount = (int)((fadeDurationInMilliseconds * source.WaveFormat.SampleRate) / 1000);
        fadeOutDelaySamples = (int)((fadeAfterMilliseconds * source.WaveFormat.SampleRate) / 1000);
        fadeOutDelayPosition = 0;
        //fadeState = FadeState.FadingOut;
    }
}
Then in the Read method we keep track of how far into the delay we are, and once the delay has elapsed, we start the fade-out:
public int Read(float[] buffer, int offset, int count)
{
    int sourceSamplesRead = source.Read(buffer, offset, count);
    lock (lockObject)
    {
        if (fadeOutDelaySamples > 0)
        {
            fadeOutDelayPosition += sourceSamplesRead / WaveFormat.Channels;
            if (fadeOutDelayPosition >= fadeOutDelaySamples)
            {
                fadeOutDelaySamples = 0;
                fadeState = FadeState.FadingOut;
            }
        }
        if (fadeState == FadeState.FadingIn)
        {
            FadeIn(buffer, offset, sourceSamplesRead);
        }
        else if (fadeState == FadeState.FadingOut)
        {
            FadeOut(buffer, offset, sourceSamplesRead);
        }
        else if (fadeState == FadeState.Silence)
        {
            ClearBuffer(buffer, offset, count);
        }
    }
    return sourceSamplesRead;
}
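For completeness, here's a rough, untested sketch of how the modified provider might be wired up for playback (the file name and the simple wait loop are just placeholders):

using (var reader = new AudioFileReader("input.mp3"))
using (var output = new WaveOutEvent())
{
    var fade = new FadeInOutSampleProvider(reader);
    fade.BeginFadeOut(10000, 2000); // start a 2-second fade-out 10 seconds in

    output.Init(fade);
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
    {
        System.Threading.Thread.Sleep(200);
    }
}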
I have been using NAudio with the
"Fire and Forget Audio Playback with NAudio" tutorial (thank you Mark for this awesome utility!) as written here:
http://mark-dot-net.blogspot.nl/2014/02/fire-and-forget-audio-playback-with.html
I managed to add a VolumeSampleProvider to it, using the MixingSampleProvider as input. However, when I now play two sounds right after each other, the first sound always gets the volume of the second as well, even though the first is already playing.
So my question is: How do I add sounds with an individual volume per sound?
This is what I used:
mixer = new MixingSampleProvider(waveformat);
mixer.ReadFully = true;
volumeProvider = new VolumeSampleProvider(mixer);
panProvider = new PanningSampleProvider(volumeProvider);
outputDevice.Init(panProvider);
outputDevice.Play();
I realized (thanks to itsmatt) that the only way to make this work, is to leave the mixer alone and adjust the panning and volume of each CachedSound individually, before adding it to the mixer. Therefore I needed to rewrite the CachedSoundSampleProvider, using a pan and volume as extra input parameters.
This is the new constructor:
public CachedSoundSampleProvider(CachedSound cachedSound, float volume = 1, float pan = 0)
{
    this.cachedSound = cachedSound;
    LeftVolume = volume * (0.5f - pan / 2);
    RightVolume = volume * (0.5f + pan / 2);
}
And this is the new Read() function:
public int Read(float[] buffer, int offset, int count)
{
    long availableSamples = cachedSound.AudioData.Length - position;
    long samplesToCopy = Math.Min(availableSamples, count);

    int destOffset = offset;
    for (int sourceSample = 0; sourceSample < samplesToCopy; sourceSample += 2)
    {
        float outL = cachedSound.AudioData[position + sourceSample + 0];
        float outR = cachedSound.AudioData[position + sourceSample + 1];

        buffer[destOffset + 0] = outL * LeftVolume;
        buffer[destOffset + 1] = outR * RightVolume;
        destOffset += 2;
    }

    position += samplesToCopy;
    return (int)samplesToCopy;
}
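A rough usage sketch (file names and values are placeholders): each CachedSound gets its own volume and pan through the rewritten provider before it reaches the shared mixer.

var gunshot = new CachedSound("gunshot.wav");
var footstep = new CachedSound("footstep.wav");

// Per-sound volume and pan are baked in by the provider; the mixer itself is left alone.
mixer.AddMixerInput(new CachedSoundSampleProvider(gunshot, volume: 1.0f, pan: -0.5f));
mixer.AddMixerInput(new CachedSoundSampleProvider(footstep, volume: 0.3f, pan: 0.5f));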
I'm not 100% certain of what you are asking and I don't know if you solved this already but here's my take on this.
ISampleProvider objects play the "pass the buck" game with their source ISampleProvider via the Read() method. Eventually, someone does some actual reading of audio data. Individual ISampleProvider classes do whatever they do to the samples.
MixingSampleProvider, for instance, takes N audio sources... those get mixed. When Read() is called, it iterates the audio sources and reads count samples from each.
Passing it to a VolumeSampleProvider handles all the samples (from those various sources) as a group... it says:
buffer[offset+n] *= volume;
That's going to adjust the samples across the board... so every sample in the buffer gets adjusted by the volume multiplier.
The PanningSampleProvider just provides a multiplier to the stereo audio and adjusts the samples accordingly, doing the same sort of thing as the VolumeSampleProvider.
If you want to individually handle audio source volumes, you need to handle that upstream of the MixingSampleProvider. Essentially, the things that you pass to the MixingSampleProvider need to be able to have their volume adjusted independently.
If you passed a bunch of SampleChannel objects to your MixingSampleProvider... you could accomplish independent volume adjustment. The SampleChannel class incorporates a VolumeSampleProvider object and provides a Volume property that allows you to set the volume on that VolumeSampleProvider object.
SampleChannel also incorporates a MeteringSampleProvider that provides reporting of the maximum sample value during a given period. It raises an event that gives you an array of those values, one per channel.
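A minimal sketch of that approach (file names are placeholders, and both sources are assumed to share the same WaveFormat):

var reader1 = new AudioFileReader("sound1.wav");
var reader2 = new AudioFileReader("sound2.wav");

// Each SampleChannel wraps its own VolumeSampleProvider, so volume is per sound.
var channel1 = new SampleChannel(reader1);
var channel2 = new SampleChannel(reader2);
channel1.Volume = 0.8f;
channel2.Volume = 0.4f;

var mixer = new MixingSampleProvider(new ISampleProvider[] { channel1, channel2 });
outputDevice.Init(mixer);
outputDevice.Play();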
I am currently creating a WinForms application for Windows 8.1. I have been able to perform an FFT on the input data from the device's microphone using AsioOut; however, to be able to use ASIO on my machine I needed to download ASIO4ALL.
This is causing a huge amount of feedback in the microphone and is resulting in very inaccurate frequency readings (to make sure it was the sound itself, I wrote a copy to disk for playback).
To get around this I have been trying to adapt my code to work with NAudio's WaveIn class; however, this is returning either no data or NaN for the FFT algorithm (although I can save a recording to disk which plays back with no issues).
I've been trying to fix this for some time now and am sure it is just a silly mistake somewhere; any help would be greatly appreciated!
Below is the code for the "OnDataAvailable" event (where I'm 99% sure I am going wrong):
void OnDataAvailable(object sender, WaveInEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new EventHandler<WaveInEventArgs>(OnDataAvailable), sender, e);
    }
    else
    {
        byte[] buffer = e.Buffer;
        int bytesRecorded = e.BytesRecorded;
        int bufferIncrement = waveIn.WaveFormat.BlockAlign;

        for (int index = 0; index < bytesRecorded; index += bufferIncrement)
        {
            float sample32 = BitConverter.ToSingle(buffer, index);
            sampleAggregator.Add(sample32);
        }

        if (waveFile != null)
        {
            waveFile.Write(e.Buffer, 0, e.BytesRecorded);
            waveFile.Flush();
        }
    }
}
If any more details and/or code is required please let me know.
waveFile: Name of the file writer
e.Buffer: The buffer containing the recorded data
e.BytesRecorded: The total number of bytes recorded
For reference below is the working code when using the ASIO class:
void asio_DataAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    byte[] buf = new byte[e.SamplesPerBuffer * 4];

    for (int i = 0; i < e.InputBuffers.Length; i++)
    {
        Marshal.Copy(e.InputBuffers[i], buf, 0, e.SamplesPerBuffer * 4);
    }

    for (int i = 0; i < e.SamplesPerBuffer * 4; i++)
    {
        float sample32 = Convert.ToSingle(buf[i]);
        sampleAggregator.Add(sample32);
    }
}
EDIT: The samples which are being returned are now accurate after changing the convert statement to Int16 as per the advice on this page, I had some other issues in my code which prevented actual results from being returned originally.
However, the file which is being written to disk is very choppy. I'm sure this is a problem with my laptop and the number of processes it is trying to run; could anyone please advise a way around this issue?
In the NAudio WPF demo project there is an example of calculating FFTs while playback is happening, using a class called SampleAggregator that stores up blocks of 1024 samples and then performs FFTs on them.
It looks like you are trying to do something similar to this. I suspect the problem is that you are getting 16-bit samples, not 32-bit. Try using BitConverter.ToInt16 on every pair of bytes.
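As a rough sketch of that change (assuming WaveIn is delivering 16-bit PCM), the conversion loop would become something like:

for (int index = 0; index < e.BytesRecorded; index += 2)
{
    short sample16 = BitConverter.ToInt16(e.Buffer, index);
    float sample32 = sample16 / 32768f; // scale to the -1.0..1.0 range the FFT expects
    sampleAggregator.Add(sample32);
}

For stereo input you might instead step by waveIn.WaveFormat.BlockAlign and take only one channel.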
mWaveInDevice = new WaveIn();
mWaveInDevice.WaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
Set the WaveFormat to CreateIeeeFloatWaveFormat, and then you will get the right values after the FFT.
I have an application where performance-sensitive drawings occur using a WriteableBitmap. An event is called with CompositionTarget.Rendering to actually update the back buffer of the WriteableBitmap. From the MSDN documentation, that means the event is fired once per frame, right before the control is rendered.
The issue that I am having is that the WriteableBitmap's Lock() function takes an extremely long time, especially at larger bitmap sizes. I have previously read that AddDirtyRegion() has a bug that causes the entire bitmap to invalidate, leading to poor performance. However, that doesn't seem to be the case here. From a good bit of low-level checking, it seems that Lock() opens the bitmap's backbuffer for writing on the render thread, which means every time my event handler is called, it has to thread block until the render thread is ready for it. This leads to a noticeable lag when updating the graphics of the bitmap.
I have already tried adding a timeout to the event handler, using TryLock(), so that it won't block for such a long time and cause the performance degradation. This, however, causes a similar effect in that it appears to lag, because larger numbers of bitmap updates get lumped together.
Here is the relevant code from the event handler to show what exactly I am doing. The UpdatePixels() function was written to avoid using the potentially bugged AddDirtyRect():
void updateBitmap(object sender, EventArgs e)
{
    if (!form.ResizingWindow)
    {
        // Lock and unlock are important... Make sure to keep them outside of the loop for performance reasons.
        if (canvasUpdates.Count > 0)
        {
            //bool locked = scaledDrawingMap.TryLock(bitmapLockDuration);
            scaledDrawingMap.Lock();
            //if (locked)
            //{
            unsafe
            {
                int* pixData = (int*)scaledDrawingMap.BackBuffer;
                foreach (Int32Rect i in canvasUpdates)
                {
                    // The graphics object isn't directly shown, so this isn't actually necessary. We do a sort of
                    // manual copy from the drawingMap, which acts similarly to a back buffer.
                    Int32Rect temp = GetValidDirtyRegion(i);
                    UpdatePixels(temp, pixData);
                }
                scaledDrawingMap.Unlock();
                canvasUpdates.Clear();
            }
            //}
        }
    }
}
private unsafe void UpdatePixels(Int32Rect temp, int* pixData)
{
    //int* pixData = (int*)scaledDrawingMap.BackBuffer;
    // Directly copy the backbuffer into a new buffer, to use WritePixels().
    var stride = temp.Width * scaledDrawingMap.Format.BitsPerPixel / 8;
    int[] relevantPixData = new int[stride * temp.Height];
    int srcIdx = 0;
    int pWidth = scaledDrawingMap.PixelWidth;
    int yLess = temp.Y + temp.Height;
    int xLess = temp.X + temp.Width;

    for (int y = temp.Y; y < yLess; y++)
    {
        for (int x = temp.X; x < xLess; x++)
        {
            relevantPixData[srcIdx++] = pixData[y * pWidth + x];
        }
    }

    scaledDrawingMap.WritePixels(temp, relevantPixData, stride, 0);
}
I can't seem to figure out how to avoid the issue of thread blocking with the WriteableBitmap, and I can't see any obvious faults in the code I have written. Any help or pointers would be much appreciated.
Looks like you are not actually using the BackBuffer to write - only to read.
WritePixels writes to the "front" buffer and does not require a lock.
I don't know if you have some other reason to lock it (other threads doing something), but for the code that's here i don't see why you would need to.
I guess I was wrong about not needing a lock to read from BackBuffer (*pixData) - I thought it was only for writes, but I am positive you do not need to call Lock for WritePixels.
As far as I can tell, you are doing:
1. Lock the back buffer
2. Copy something from it to an array
3. Call WritePixels using this new array
4. Unlock the back buffer
How about switching 3 and 4?
WritePixels may internally cause rendering thread (which has higher priority on the message queue) to get a lock on its behalf which is probably a factor in the delay you are seeing.
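A rough sketch of that reordering (CopyRegion and pendingWrites are hypothetical names, not part of the original code): copy out of the back buffer while it is locked, release the lock, and only then call WritePixels.

var pendingWrites = new List<(Int32Rect Rect, int[] Pixels, int Stride)>();

scaledDrawingMap.Lock();
unsafe
{
    int* pixData = (int*)scaledDrawingMap.BackBuffer;
    foreach (Int32Rect i in canvasUpdates)
    {
        Int32Rect temp = GetValidDirtyRegion(i);
        pendingWrites.Add(CopyRegion(temp, pixData)); // copy only; no WritePixels while locked
    }
}
scaledDrawingMap.Unlock();

// WritePixels needs no lock, so it can run after the back buffer is released.
foreach (var w in pendingWrites)
{
    scaledDrawingMap.WritePixels(w.Rect, w.Pixels, w.Stride, 0);
}
canvasUpdates.Clear();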
I'm working on a project using C#/XNA and I'm having trouble creating water physics in a top-down hex-based grid.
We're using a hex tilemap with the help of: http://www.redblobgames.com/grids/hexagons.
So I thought I could implement an algorithm for the water flow, but I can't seem to get it right, and it seems to be very performance-heavy.
/// <summary>
/// This method levels all the water to the same height. It does so by looking around and making the heights even.
/// </summary>
/// <param name="tiles">The tile list needed to level around</param>
public void Flow(List<Tile> tiles, Tilemap tileMap)
{
    float waterAmountEachTile;
    List<Water> waterTiles = new List<Water>(7);

    // Include self
    waterTiles.Add(this);
    float waterAmount = (this.waterHeight + this.ZPos);

    for (int i = 0; i < tiles.Count; i++) // first loop to gather all values
    {
        if (tiles[i].GetType() == typeof(Water))
        {
            waterTiles.Add((Water)tiles[i]); // check which tiles are water and put them in a new list
            waterAmount += (waterTiles[waterTiles.Count - 1].waterHeight + waterTiles[waterTiles.Count - 1].ZPos); // increase the amount (debug later: count works correctly)
        }
    }

    waterAmountEachTile = waterAmount / waterTiles.Count; // calculate how high each tile should be (we need this for the dry check)
    dryCheck(ref waterAmount, waterTiles, waterAmountEachTile, tileMap);
    waterAmountEachTile = waterAmount / waterTiles.Count; // recalculate the amount for each tile

    foreach (Water waterTile in waterTiles) // second loop to adjust each tile to the corresponding height
    {
        waterTile.waterHeight = (waterAmountEachTile - waterTile.ZPos);
    }
}
/// <summary>
/// Checks if the tile should be dry or continue being a water tile.
/// </summary>
/// <param name="waterAmount">The amount of water to divide among the tiles</param>
/// <param name="waterTiles">The water tiles list to do the dry check on</param>
/// <param name="waterAmountEachTile">The height to set each water tile to</param>
private void dryCheck(ref float waterAmount, List<Water> waterTiles, float waterAmountEachTile, Tilemap tileMap)
{
    //TODO: fix this
    for (int i = 0; i < waterTiles.Count; i++)
    {
        if (waterTiles[i].ZPos > waterAmountEachTile) // is the ground higher than the water?
        {
            waterAmount -= waterTiles[i].ZPos;
            tileMap.TileMap[waterTiles[i].XPos][waterTiles[i].YPos] = new Ground(this.graphics, waterTiles[i].XPos, waterTiles[i].YPos,
                waterTiles[i].ZPos, this.size, Tilemap.HexVertices);
            waterTiles.Remove(waterTiles[i]);
            i--;
        }
    }
}
Now for my question: do any of you know of a way to implement water physics in a top-down environment, preferably with hex-based grids?
I've looked into several libraries and found smoothed-particle hydrodynamics, but I'm not sure whether it can be implemented top-down, and I can't seem to find any guides in that direction.
Any help would be great, even some pointers might be enough.
Thanks in advance,
C. Venhuizen
Have you profiled your code to determine what is the slowest part?
I don't quite understand what your code is doing. Do you call Flow once for each tile, or do you call it once and it runs over all tiles? If I were to guess, I'd say that allocating a new list for each tile is going to be pretty slow. But the best way to know is to profile.
The thing that originally led me to write http://www.redblobgames.com/grids/hexagons was a top-down hex game that was all about water flow. I wrote that game in 1995, and I recently ported it to run on the web here. The algorithm I started with is simple. For each hex, traversed in random order:
calculate the water level W + elevation H
calculate the same for each neighbor
make up to half the water flow to the neighbor with the lowest W + H
The random order is so that the loop doesn't cause water to flow many hexes to the right but not many hexes to the left, or other such direction artifacts. Even cleaner would be to use a double buffer for this: write all the new values to a separate array, and then copy them back to the first array at the end (or swap the arrays).
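A minimal C# sketch of that per-hex step, reusing the Water/Tile types from the question (tileMap.Neighbors is a hypothetical lookup, rng is a shared System.Random, OrderBy needs System.Linq, and "up to half" is interpreted here as half the level difference):

// Visit the hexes in random order so the pass has no directional bias.
foreach (Water tile in waterTiles.OrderBy(_ => rng.Next()))
{
    float myLevel = tile.waterHeight + tile.ZPos; // W + H for this hex

    // Find the neighbor with the lowest W + H.
    Water lowest = null;
    float lowestLevel = myLevel;
    foreach (Tile n in tileMap.Neighbors(tile)) // hypothetical neighbor lookup
    {
        Water w = n as Water;
        if (w != null && w.waterHeight + w.ZPos < lowestLevel)
        {
            lowest = w;
            lowestLevel = w.waterHeight + w.ZPos;
        }
    }

    // Move up to half of the difference downhill.
    if (lowest != null)
    {
        float flow = (myLevel - lowestLevel) / 2f;
        tile.waterHeight -= flow;
        lowest.waterHeight += flow;
    }
}

For the double-buffered variant, write the new heights into a second array and swap the arrays at the end of the pass instead of mutating in place.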
Beyond that, there are tons of heuristics I used for erosion and other features of the (unfinished) game. The code is old and awful but if you want to take a look, you can download it here (ZIP file). See water.cpp. I avoided memory allocations in the water flow loop.
I am using NAudio to play a sinewave of a given frequency as in the blog post Playback of Sine Wave in NAudio. I just want the sound to play() for x milliseconds and then stop.
I tried Thread.Sleep, but the sound stops straight away. I tried a timer, but when the WaveOut is disposed there is a cross-thread exception.
I tried this code, but when I call Beep the program freezes.
public class Beep
{
    public Beep(int freq, int ms)
    {
        SineWaveProvider32 sineWaveProvider = new SineWaveProvider32();
        sineWaveProvider.Amplitude = 0.25f;
        sineWaveProvider.Frequency = freq;

        NAudio.Wave.WaveOut waveOut = new NAudio.Wave.WaveOut(WaveCallbackInfo.FunctionCallback());
        waveOut.Init(sineWaveProvider);
        waveOut.Play();
        Thread.Sleep(ms);
        waveOut.Stop();
        waveOut.Dispose();
    }
}
public class SineWaveProvider32 : NAudio.Wave.WaveProvider32
{
    int sample;

    public SineWaveProvider32()
    {
        Frequency = 1000;
        Amplitude = 0.25f; // Let's not hurt our ears
    }

    public float Frequency { get; set; }
    public float Amplitude { get; set; }

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        for (int n = 0; n < sampleCount; n++)
        {
            buffer[n + offset] = (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
            sample++;
            if (sample >= sampleRate)
                sample = 0;
        }
        return sampleCount;
    }
}
The SineWaveProvider32 class doesn't need to provide audio indefinitely. If you want the beep to have a maximum duration of a second (say), then for mono 44.1 kHz you need to provide 44,100 samples. The Read method should return 0 when it has no more data to supply.
To stop your GUI thread from blocking, you need to get rid of Thread.Sleep, waveOut.Stop and Dispose, and simply start playing the audio (you may find window callbacks more reliable than function callbacks).
Then, when the audio has finished playing, you can close and clean up the WaveOut object.
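Something along these lines, as an untested sketch (the FiniteSineWaveProvider32 name and the 500 ms / 440 Hz values are mine, not NAudio's):

public class FiniteSineWaveProvider32 : NAudio.Wave.WaveProvider32
{
    int sample;
    readonly int totalSamples;

    public float Frequency { get; set; }
    public float Amplitude { get; set; }

    public FiniteSineWaveProvider32(int durationMs)
    {
        Frequency = 1000;
        Amplitude = 0.25f;
        totalSamples = (int)(WaveFormat.SampleRate * (durationMs / 1000.0)); // mono, so SampleRate samples per second
    }

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        int samplesToWrite = Math.Min(sampleCount, totalSamples - sample);
        for (int n = 0; n < samplesToWrite; n++)
        {
            buffer[offset + n] = (float)(Amplitude * Math.Sin((2 * Math.PI * (sample + n) * Frequency) / sampleRate));
        }
        sample += samplesToWrite;
        return samplesToWrite; // 0 once the whole beep has been delivered
    }
}

// Usage: start playback without blocking, and clean up when PlaybackStopped fires.
var waveOut = new NAudio.Wave.WaveOut(); // default window callbacks
waveOut.Init(new FiniteSineWaveProvider32(500) { Frequency = 440 });
waveOut.PlaybackStopped += (s, e) => waveOut.Dispose();
waveOut.Play();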
Check out the blog post Pass Variables to a New Thread in C# on how to pass variables to another thread.
I think what you want to do is something like this: create a thread that plays your sound, create a timer, and start the thread. When the timer expires, stop the thread, and when the thread closes, have it do all the cleanup.