dso = new DirectSoundOut(Guid.Parse(AudioOutDevice));
using (var ms = new MemoryStream(soundArray.ToArray()))
{
    IWaveProvider provider = new RawSourceWaveStream(ms, new WaveFormat());
    dso.Init(provider);
    dso.Play();
    Thread.Sleep(3000);
}
I am able to play the sound array through the desired output device using the code above, but I am unable to hear the sound if the Thread.Sleep is removed. I don't understand the reason for the Thread.Sleep. Can anyone tell me why Thread.Sleep() is needed?
The call to Play is not blocking; it simply starts playback. So you must keep dso alive until playback ends or you have stopped it manually.
You can use code like this if you want to block until playback finishes (obviously only use this if your audio isn't infinitely long):
dso.Play();
while (dso.PlaybackState == PlaybackState.Playing)
{
    // poll until playback finishes
    Thread.Sleep(500);
}
dso.Dispose();
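If you'd rather not poll, a sketch along these lines should also work, assuming a recent NAudio version where IWavePlayer exposes the PlaybackStopped event:

dso.PlaybackStopped += (sender, args) =>
{
    // raised when playback finishes or is stopped, so clean up here
    dso.Dispose();
};
dso.Play();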
For the sake of making a (shuffled) playlist, I've made a separate thread in which I load and play each song in the playlist. The background stuff (wav files, file paths, playlists and shuffling) all work without a hitch.
The issue is that I have 2 windows, each of which can close and open the other. Each window has a different playlist, and when I switch to the other window, I want my static SoundPlayer to stop playing, then start playing the new playlist.
This currently isn't working: the application waits until the current track is finished before displaying the next window and starting the other playlist. Yes, the entire application waits on this.
I'm new to thread coding, so I'm not really sure what to do. The two methods of stopping this I've tried so far have been SoundPlayer.Stop() and Thread.Abort(). Neither changes the situation at all.
In each window:
Thread playlistThread;

public Window()
{
    InitializeComponent();
    MusicPlayer.music.Stop();
    playlistThread = new Thread(() => MusicPlayer.PlayPlaylist(MusicPlayer.ShufflePlaylist(MusicPlayer.PlaylistFromType("[insert track type]"), random)));
    playlistThread.Start();
}
PlayPlaylist, which I will show next, takes a List of strings, so don't worry about the Thread line; it's just a few sections put into one. The properties after that simply generate that list, and again, that all works, but I can show it if anyone thinks it's necessary. Here is the PlayPlaylist method:
public static void PlayPlaylist(List<string> tracks)
{
    for (int i = 0; i < tracks.Count; i++)
    {
        music.SoundLocation = tracks[i];
        music.PlaySync();
    }
}
Here's the answer I worked out:
public static void PlayTrack(List<string> tracks, int i)
{
    while (true)
    {
        // reshuffle and start over once the playlist is exhausted
        if (i == tracks.Count)
        {
            tracks = MusicPlayer.ShufflePlaylist(tracks, MusicPlayer.random);
            i = 0;
        }
        music.SoundLocation = tracks[i];
        int l = SoundInfo.GetSoundLength(tracks[i]);
        music.Play();
        // sleep in one-second slices until the track's length has elapsed
        while (l > 0)
        {
            Thread.Sleep(1000);
            l -= 1000;
        }
        i++;
    }
}
I found the SoundInfo class, with its GetSoundLength method, here.
The reason this method works while others do not is because of how Play() and PlaySync() work. PlaySync() plays the entire .wav file on the current thread, with nothing else running until it finishes. Thus, even SoundPlayer.Stop() and Thread.Abort() do not work, because those calls can only take effect after the current PlaySync() call has finished.
By running this method in a new thread, you avoid PlaySync() blocking your window. However, it is still impossible to stop a track ahead of time when using PlaySync(), which is why you use Play() instead.
Therein lies a second issue, however: Play() plays the track on its own thread, meaning the rest of the code continues immediately. This is a big risk if you want to do anything only after the current track finishes.
The answer is to calculate the length of the track you're about to play, then run a while loop until l (given by GetSoundLength()) reaches 0. On each pass, the thread (separate from your window's main thread) sleeps for one second. This is easy on the CPU and means that, once a second, extra code such as a SoundPlayer.Stop() call can take effect on the thread.
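To make that concrete, here is a minimal sketch of an interruptible variant, assuming a hypothetical volatile stopRequested flag added to MusicPlayer, which the UI thread sets when switching windows (the other members are the same ones used above):

public static volatile bool stopRequested;

public static void PlayTrack(List<string> tracks, int i)
{
    while (!stopRequested)
    {
        if (i == tracks.Count)
        {
            tracks = MusicPlayer.ShufflePlaylist(tracks, MusicPlayer.random);
            i = 0;
        }
        music.SoundLocation = tracks[i];
        int l = SoundInfo.GetSoundLength(tracks[i]);
        music.Play();
        // check the flag once per second so a window switch takes effect quickly
        while (l > 0 && !stopRequested)
        {
            Thread.Sleep(1000);
            l -= 1000;
        }
        i++;
    }
    music.Stop(); // cut the current track short when asked to stop
}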
I'm using WaveInEvent from NAudio to record microphone data. It works fine for a while, but after a few runs it stops providing input data: the DataAvailable callback is never called with new data.
I have tried creating a new WaveInEvent each time, but this did not resolve the problem. I've also tried using the WASAPI input, which always called DataAvailable, but with zero bytes of data.
How can I record audio from the microphone reliably with NAudio?
Currently, my code looks like this:
void StartRecording()
{
    microphone = new WaveInEvent();
    microphone.DeviceNumber = 0;
    microphone.WaveFormat = outformat;
    microphone.BufferMilliseconds = 50;
    microphone.DataAvailable += (_, recArgs) =>
    {
        session.OnAudioData(recArgs.Buffer, recArgs.BytesRecorded);
    };
    microphone.StartRecording();
}

void StopRecording()
{
    if (microphone != null)
    {
        microphone.StopRecording();
        microphone.Dispose();
        microphone = null;
    }
}
There's no other NAudio code in the project except using WaveFormat to describe wave formats.
NAudio throws an access violation exception trying to call WaveInBuffer.Reuse() from a threadpool worker. I'm not sure why this doesn't do something more serious than just drop audio data.
In the case where I did not recreate the WaveInEvent, I get an MmException instead: an invalid handle calling waveInPrepareHeader, in the same place.
Frankly, the fact that I get different results strongly implies that NAudio is sharing state between instances in ways it shouldn't, and looking at the source on CodePlex, I'm not really sure what is going on.
It seems that the drivers for the USB microphone do not behave correctly. When the buffer is delivered to the user through the WIM_DATA message, it is full. However, when waveInUnprepareHeader is called, the buffer is reported as still queued, even though it was literally just delivered as full. So I think the drivers for the microphone are ultimately to blame.
I've been looking more closely at the microphone and it seems that this particular unit is actually known to have been damaged.
I would like to transfer mixed sound from a WCF server to all connected clients, using WCF service callbacks for this. The sound is mixed using the NAudio library.
Here is a little example of the server side (WCF method):
MixingSampleProvider _mixer = new MixingSampleProvider(sound32.WaveFormat);
SampleToWaveProvider _sampleToWave = new SampleToWaveProvider(_mixer);

// service method
byte[] buffer = new byte[1000];
do
{
    _sampleToWave.Read(buffer, 0, 1000);
    client.Callback.SendBuffer(buffer);
} while (_isPlaying);
and the client side:

BufferedWaveProvider _bufferedWave = new BufferedWaveProvider(sound32.WaveFormat);
// DirectSoundOut _output = new DirectSoundOut();
WaveOut _output = new WaveOut();
_output.Init(_bufferedWave);

// callback event method
if (_output.PlaybackState != PlaybackState.Playing)
    _bufferedWave.AddSamples(buffer, 0, 1000);

// now in timer_tick event method
// if (_bufferedWave.BufferedDuration.TotalSeconds > 0.5)
//     _output.Play();
// else
//     _output.Pause();
I'm new to this, so I have a few questions.
Is this idea a good one? Is there a simpler option to handle this?
[EDIT_1] I created a test app with two local methods, which should simulate this, and I found that _bufferedWave.BufferedBytes is not cleared while the buffered sound is playing (so it overflows almost immediately). Can somebody tell me why?
[EDIT_1] I changed the type of the _output field from DirectSoundOut to WaveOut, and that helped.
The second change I made was adding a DispatcherTimer to start playback once the buffered duration exceeds 0.5 seconds (following the NAudio MP3Streaming example).
Now I'm fighting with the buffer time: I can only hear sound for the length of time set in the _timer_Tick event method:
_bufferedWave.BufferedDuration.TotalSeconds > XX // this XX is time I can hear sound
Any ideas or opinions?
I'm not sure that this will work the way you hope. WCF is TCP based, and TCP is not designed for broadcasting audio, video, or anything else that requires speed (like games), due to its constant acknowledgement and checking of packets.
I have previously used NAudio to transmit sound over a network to a listener client, but for it to work you will need to use UDP.
If you want this to work over the internet, then you will also need to look into UDP hole punching.
Also, before you transmit your audio, you should compress it from 16-bit to 8-bit, and then expand it back to 16-bit upon receiving it, using something like an ALaw or MuLaw encoder/decoder; a sketch of that round trip follows.
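For illustration, here is a minimal sketch of that 16-bit/8-bit round trip using the MuLawEncoder and MuLawDecoder from NAudio.Codecs (the class and method names in this helper are hypothetical):

using System;
using NAudio.Codecs;

static class MuLawRoundTrip
{
    // 16-bit PCM -> 8-bit mu-law, halving the bandwidth before sending
    public static byte[] Encode(byte[] pcm16, int bytesRecorded)
    {
        var encoded = new byte[bytesRecorded / 2]; // one mu-law byte per 16-bit sample
        for (int i = 0; i < encoded.Length; i++)
        {
            short sample = BitConverter.ToInt16(pcm16, i * 2);
            encoded[i] = MuLawEncoder.LinearToMuLawSample(sample);
        }
        return encoded;
    }

    // 8-bit mu-law -> 16-bit PCM, on the receiving side
    public static byte[] Decode(byte[] encoded)
    {
        var pcm16 = new byte[encoded.Length * 2];
        for (int i = 0; i < encoded.Length; i++)
        {
            short sample = MuLawDecoder.MuLawToLinearSample(encoded[i]);
            BitConverter.GetBytes(sample).CopyTo(pcm16, i * 2);
        }
        return pcm16;
    }
}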
I am following a Java example that uses a CompletionService to submit queries to a 3rd-party app that receives packets, by calling:
completionService.submit(new FetchData());
Then it calls:
Future<Data> future = completionService.take();
Data data = future.get(timeout, TimeUnit.MILLISECONDS);
This waits for one of the submitted tasks to finish and returns the data. These two calls sit in a while(true) loop.
I am developing an app in C#, and I was wondering whether this is the proper way to wait for packets and, if so, how to do it in C#.
I have tried this, but I'm not sure if I am doing it right:
new Thread(delegate() {
    Dictionary<ManualResetEvent, FetchData> dataDict = new Dictionary<ManualResetEvent, FetchData>();
    ManualResetEvent[] doneEvents;
    ManualResetEvent doneEvent;
    FetchData fetch;
    int index;
    while (true) {
        // Create new fetch
        doneEvent = new ManualResetEvent(false);
        fetch = new FetchData(this, doneEvent);
        // event -> fetch association
        dataDict.Add(doneEvent, fetch);
        ThreadPool.QueueUserWorkItem(fetch.DoWork);
        doneEvents = new ManualResetEvent[dataDict.Count];
        dataDict.Keys.CopyTo(doneEvents, 0);
        // wait for any of them to finish
        // (note: WaitHandle.WaitAny supports at most 64 handles)
        index = WaitHandle.WaitAny(doneEvents, receiveThreadTimeout);
        // did we time out?
        if (index == WaitHandle.WaitTimeout) {
            continue;
        }
        // grab done event
        doneEvent = doneEvents[index];
        // grab fetch
        fetch = dataDict[doneEvent];
        // remove from dict
        dataDict.Remove(doneEvent);
        // process data
        processData(fetch.GetData());
    }
}).Start();
EDIT: One last note: I am using this in Unity, which uses Mono 2.6 and is limited to .NET 2.0.
EDIT 2: I changed the code around some. I realized that the ThreadPool has its own maximum thread count and will queue up tasks when no threads are free, so I removed that logic from my code.
Do you really need to use multithreading in your Unity3D application? I'm asking because Unity "is not" multi-threaded: there is a way to deal with threads, but you'd be better off relying on coroutines for this. Please refer to this documentation to find out more about coroutines; a sketch follows below.
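For illustration, here is a minimal sketch of polling for completed work from a coroutine instead of a dedicated listener thread; the FetchData class here is a stripped-down stand-in for the one in the question:

using System.Collections;
using System.Threading;
using UnityEngine;

public class PacketListener : MonoBehaviour
{
    class FetchData
    {
        readonly ManualResetEvent done;
        object data;
        public FetchData(ManualResetEvent done) { this.done = done; }
        public void DoWork(object state)
        {
            data = "packet"; // fetch the real data here
            done.Set();
        }
        public object GetData() { return data; }
    }

    IEnumerator Start()
    {
        while (true)
        {
            // kick off one fetch on the thread pool
            var doneEvent = new ManualResetEvent(false);
            var fetch = new FetchData(doneEvent);
            ThreadPool.QueueUserWorkItem(fetch.DoWork);

            // yield each frame until the worker signals, so Unity's
            // main loop keeps running while we wait
            while (!doneEvent.WaitOne(0, false))
                yield return null;

            processData(fetch.GetData());
        }
    }

    void processData(object data)
    {
        Debug.Log(data); // hand the packet to the rest of the app
    }
}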
One note: if you are using Unity 3.5, it ships with Mono 2.6.5, which supports almost everything of .NET 4.0. I don't know about the Task class specifically, but it certainly covers .NET 3.0.
It turns out that I only need a single thread to listen for packets, so I don't have to use a thread pool like in my example above.
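For the record, here is a minimal sketch of that single listener thread, assuming a blocking ReceivePacket() call as a stand-in for whatever the 3rd-party API actually provides:

var listener = new Thread(() =>
{
    while (true)
    {
        byte[] packet = ReceivePacket(); // blocks until data arrives
        processData(packet);             // hand it off for processing
    }
});
listener.IsBackground = true; // don't keep the process alive on shutdown
listener.Start();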
I am playing with the TTS built into .NET 4 and want the speech to happen immediately, but I am instead encountering a lag between when I call Speak and when I get the audio.
I am developing a simple count-down timer that calls out the last five seconds and completion (5... 4... 3... 2... 1... Done), but when the screen updates with the new time, the TTS lags behind, getting worse with every invocation. I tried using SpeakAsync, but that only made it worse. Currently, Speak is being called outside the UI thread (in the Timer's Tick event handler).
Is there a way to minimize this lag, such as pre-computing the speech and caching it or creating some kind of special TTS thread?
I somehow read past the API call I needed at least a hundred times. I was looking for SpeechSynthesizer.SetOutputToWaveStream.
using System.IO;
using System.Media;
using System.Speech.Synthesis;

MemoryStream stream = new MemoryStream();
SpeechSynthesizer synth = new SpeechSynthesizer();
synth.SetOutputToWaveStream(stream);
synth.Speak(text);
stream.Position = 0;
SoundPlayer player = new SoundPlayer(stream);
player.Play();
This code uses TTS to render the text into WAV data, which is written into stream. You need to reset the position of the MemoryStream so that the SoundPlayer created from it starts reading the stream from the beginning instead of the end. Once you have the SoundPlayer initialized, you can keep it around and play it later instantly, instead of waiting for the TTS engine each time. A cached variant is sketched below.
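For example, here is a minimal sketch of pre-rendering the countdown prompts once at startup and caching the resulting SoundPlayers (the PromptCache name is hypothetical):

using System.Collections.Generic;
using System.IO;
using System.Media;
using System.Speech.Synthesis;

static class PromptCache
{
    // pre-render each prompt to an in-memory WAV and wrap it in a SoundPlayer
    public static Dictionary<string, SoundPlayer> Build(IEnumerable<string> prompts)
    {
        var cache = new Dictionary<string, SoundPlayer>();
        using (var synth = new SpeechSynthesizer())
        {
            foreach (var text in prompts)
            {
                var stream = new MemoryStream();
                synth.SetOutputToWaveStream(stream);
                synth.Speak(text);   // synchronous render, done once at startup
                stream.Position = 0; // rewind so SoundPlayer reads from the top
                var player = new SoundPlayer(stream);
                player.Load();       // parse the WAV now, not at play time
                cache[text] = player;
            }
        }
        return cache;
    }
}

// Usage: build once, then each Play() is effectively instant.
// var prompts = PromptCache.Build(new[] { "5", "4", "3", "2", "1", "Done" });
// prompts["3"].Play();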