My code creates a PowerPoint presentation and three audio files.
I want to know the length of those audio files after the presentation is created, so I use:
double GetAudioDuration(string file) // requires a COM reference to wmp.dll (WMPLib)
{
    WindowsMediaPlayer wmp = new WindowsMediaPlayer();
    IWMPMedia mediainfo = wmp.newMedia(file);
    double duration = mediainfo.duration; // duration in seconds, as reported by Windows Media Player
    wmp.close();
    return duration;
}
To create the audio files I use:
public void CreateAudio(string text)
{
y++;
synth = new SpeechSynthesizer();
AudioStream = new FileStream(folder + @"\audio\a" + y.ToString() + ".wav", FileMode.OpenOrCreate);
//synth.SpeakCompleted += new EventHandler<SpeakCompletedEventArgs>(synth_SpeakCompleted);
synth.SetOutputToWaveStream(AudioStream);
synth.SpeakAsync(text);
}
private void synth_SpeakCompleted(object sender, SpeakCompletedEventArgs e)
{
synth.Dispose();
}
The problem is that after the presentation is created, only the shortest audio file returns a length; the rest return 0.
If I check manually, I see three audio files, each with a valid length property, but the program doesn't read it for some reason.
My attempt at disposing the synth didn't change a thing, nor did using synth.Speak, so I must have done something terribly wrong in how I use and manage my objects and memory, but I don't know what or where.
If I use the same length-checking code on audio files created by a different program, it works perfectly; it only goes wrong when I create the files myself and then check their length.
I'm trying to use NAudio to record some sound in C# using WasapiLoopbackCapture and WaveFileWriter.
The problem is that after the recording is finished, the "size" field in the WAV/RIFF header is set to 0, rendering the file unplayable.
I'm using the following code:
WasapiLoopbackCapture CaptureInstance = null;
WaveFileWriter RecordedAudioWriter = null;
void StartSoundRecord()
{
string outputFilePath = @"C:\RecordedSound.wav";
// Redefine the capturer instance with a new instance of the LoopbackCapture class
CaptureInstance = new WasapiLoopbackCapture();
// Redefine the audio writer instance with the given configuration
RecordedAudioWriter = new WaveFileWriter(outputFilePath, CaptureInstance.WaveFormat);
// When the capturer receives audio, start writing the buffer into the mentioned file
CaptureInstance.DataAvailable += (s, a) =>
{
// Write buffer into the file of the writer instance
RecordedAudioWriter.Write(a.Buffer, 0, a.BytesRecorded);
};
// When the Capturer Stops, dispose instances of the capturer and writer
CaptureInstance.RecordingStopped += (s, a) =>
{
RecordedAudioWriter.Dispose();
RecordedAudioWriter = null;
CaptureInstance.Dispose();
};
// Start audio recording !
CaptureInstance.StartRecording();
}
void StopSoundRecord()
{
if(CaptureInstance != null)
{
CaptureInstance.StopRecording();
}
}
(Borrowed from: https://ourcodeworld.com/articles/read/702/how-to-record-the-audio-from-the-sound-card-system-audio-with-c-using-naudio-in-winforms )
Which I'm testing with simply:
StartSoundRecord();
Thread.Sleep(10000);
StopSoundRecord();
What am I missing? Why isn't WaveFileWriter writing the size field? I have also tried calling the Flush() and Close() methods before disposing, but it makes no difference.
Sure, I could write a method to find out the size of the file and manually write it into the final file, but that seems unnecessary.
Found the solution.
Calling RecordedAudioWriter.Flush() after every Write made it work just fine.
I don't know whether doing so might be inefficient (I assume Flush blocks until the data is written to disk), but for my application that is not an issue.
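In code, that means flushing inside the DataAvailable handler (same fields as in the question above):

CaptureInstance.DataAvailable += (s, a) =>
{
    // Write the captured buffer, then flush so WaveFileWriter updates
    // the RIFF/WAV header sizes on disk rather than only on Dispose.
    RecordedAudioWriter.Write(a.Buffer, 0, a.BytesRecorded);
    RecordedAudioWriter.Flush();
};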
I am working on the Windows 10 platform using C#. I'm developing a method that will monitor video files as they are being created. I want to know when they are still being written and when they are complete. The files are very large, so I think I should be able to get the size of the file and then check 5 seconds later to see whether it has grown.
private void main()
{
string file = Path.Combine(roomPath, currentFile);
FileInfo fil = new FileInfo(file);
double previousSize = fil.Length;
Thread.Sleep(5000);
string file2 = Path.Combine(roomPath, currentFile);
FileInfo fil2 = new FileInfo(file2);
double currentSize = fil2.Length;
}
I expected to get a snapshot of the size of the file at two different instants, but although the file should be much larger after 5 seconds, both values are the same.
I am trying to play a file to which I have written the contents of my stream. It's really strange: if I just go in and play the file manually, that works, but whenever I try to play it with the program, there is no sound coming from the clip (and there is content in the file). I downloaded a music file just for testing and swapped its name in for the "fileName" string variable, and the program plays that file fine.
public void PlayAudio(object sender, GenericEventArgs<Stream> args)
{
string fileName = $"{ Guid.NewGuid() }.mp3";
using (var file = File.OpenWrite(Path.Combine(Directory.GetCurrentDirectory(), fileName)))
{
args.EventData.CopyTo(file);
file.Flush();
}
WaveOut waveOut = new WaveOut();
Mp3FileReader reader = new Mp3FileReader(fileName); // If I change "fileName" to my test music file, the program plays it fine; with the file created from the stream, it doesn't play.
waveOut.Init(reader);
waveOut.Play();
}
I need to use NAudio because this is going to be running on .NET Core, so I can't use SoundPlayer (just for general information).
Background on the project: before I needed it on .NET Core, I was just running the code below, which works perfectly and plays the audio directly from the API. However, now I can't use this because .NET Core doesn't support System.Media, hence why I have figured out that I need to load the data into a file (MP3 or WAV doesn't matter to me) and then play that file with the content inside.
public void PlayAudio(object sender, GenericEventArgs<Stream> args)
{
SoundPlayer player = new SoundPlayer(args.EventData);
player.PlaySync();
args.EventData.Dispose();
}
I managed to solve it, though I do not really know what I did, because I am certain that I tried this earlier. I did restart the PC, etc., but it works now.
public void PlayAudio(object sender, GenericEventArgs<Stream> args)
{
string fileName = $"{ Guid.NewGuid() }.wav";
using (var file = File.OpenWrite(Path.Combine(Directory.GetCurrentDirectory(), fileName)))
{
args.EventData.CopyTo(file);
file.Flush();
Console.WriteLine(fileName);
}
WaveOut waveOut = new WaveOut();
WaveFileReader reader = new WaveFileReader(fileName);
waveOut.Init(reader);
waveOut.Play();
}
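One caveat worth noting (an assumption on my part, since the surrounding application isn't shown): WaveOut.Play only starts playback and returns immediately. If the enclosing method can return and let waveOut go out of scope before the clip finishes, it may be safer to hold on to the objects and block until playback is done, along these lines:

using (var waveOut = new WaveOutEvent())
using (var reader = new WaveFileReader(fileName))
{
    waveOut.Init(reader);
    waveOut.Play();
    // Play() is non-blocking; wait here so waveOut and reader aren't
    // disposed while the file is still being rendered to the device.
    while (waveOut.PlaybackState == PlaybackState.Playing)
        Thread.Sleep(100);
}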
I have added a notification sound for text messages as a resource of the main assembly of my project and tried to make it work as follows:
System.Reflection.Assembly a = System.Reflection.Assembly.GetExecutingAssembly();
System.IO.Stream s = a.GetManifestResourceStream("SignInSound.wav");
System.Media.SoundPlayer player = new System.Media.SoundPlayer(s);
player.Play();
A sound is played, but it is definitely not the one I added; the standard Windows sound is played instead.
Any ideas?
Update
The issue is in getting the file from resources:
System.Reflection.Assembly a = System.Reflection.Assembly.GetExecutingAssembly();
System.IO.Stream s = a.GetManifestResourceStream("SignInSound.wav");
Judging by the documentation, your resource stream is bad.
The Play method plays the sound using a new thread. If you call Play before the .wav file has been loaded into memory, the .wav file will be loaded before playback starts. You can use the LoadAsync or Load method to load the .wav file to memory in advance. After a .wav file is successfully loaded from a Stream or URL, future calls to playback methods for the SoundPlayer will not need to reload the .wav file until the path for the sound changes.
If the .wav file has not been specified or it fails to load, the Play method will play the default beep sound.
So the problem is that GetManifestResourceStream() is not doing what you think it's doing: when the name doesn't match an embedded resource exactly, it returns null, and SoundPlayer falls back to the default beep described above.
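A quick way to see what is going on is to dump the names the assembly actually contains; embedded files are qualified with the project's default namespace, which is most likely why plain "SignInSound.wav" isn't found (a small diagnostic sketch):

System.Reflection.Assembly a = System.Reflection.Assembly.GetExecutingAssembly();
// Prints the exact manifest resource names; an embedded file typically
// appears as "YourDefaultNamespace.SignInSound.wav", not "SignInSound.wav".
foreach (string name in a.GetManifestResourceNames())
    Console.WriteLine(name);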
Solution (based on ResourceManager)
var thisType = this.GetType();
var assembly = thisType.Assembly;
var resourcePath = string.Format("{0}.{1}", assembly.GetName().Name, thisType.Name);
var resourceManager = new ResourceManager(resourcePath, assembly);
var resourceName = "SignInSound";
using (Stream resourceStream = resourceManager.GetStream(resourceName))
{
    using (SoundPlayer player = new SoundPlayer(resourceStream))
    {
        player.PlaySync();
    }
}
It seems that the System.Media.SoundPlayer class supports only a very limited set of WAV formats.
I have tried using the string-path constructor, and it works with some .wav files while failing with others.
Here is some sample code. If you're using Windows 7, you can check it for yourself: just create a new default Windows Forms Application and add one button to it.
Notice how the code works for the "success" string and throws an InvalidOperationException for the "fail" string.
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
System.Media.SoundPlayer player;
public Form1()
{
InitializeComponent();
string success = @"C:\Windows\Media\Windows Battery Critical.wav";
string fail = @"C:\Windows\Media\Sonata\Windows Battery Critical.wav";
player = new System.Media.SoundPlayer(success);
}
private void button1_Click(object sender, EventArgs e)
{
player.Play();
}
}
}
Notice that the file under "success" has a bit rate of 1411 kbps, while the other one has 160 kbps.
Try your code with a WAV file with a bit rate of 1411 kbps and let us know how it works.
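If you would rather check the format programmatically than in the file's properties dialog, here is a small sketch using NAudio's WaveFileReader (NAudio appears elsewhere in this thread; the path is just the example from above):

using (var reader = new WaveFileReader(@"C:\Windows\Media\Windows Battery Critical.wav"))
{
    WaveFormat fmt = reader.WaveFormat;
    // 44100 Hz * 16 bits * 2 channels = 1411 kbps for plain CD-quality PCM;
    // a much lower figure usually means a compressed, non-PCM WAV.
    Console.WriteLine(fmt);
    Console.WriteLine((fmt.AverageBytesPerSecond * 8 / 1000) + " kbps");
}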
I'm trying to play raw PCM data delivered by the ohLibSpotify C# library (https://github.com/openhome/ohLibSpotify).
I get the data in the following callback:
public void MusicDeliveryCallback(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
//EXAMPLE DATA
//format.channels = 2, format.samplerate = 44100, format.sample_type = Int16NativeEndian
//frames = ?
//num_frames = 2048
}
Now I want to play the received data directly with NAudio (http://naudio.codeplex.com/). With the following code snippet I can play an MP3 file from disk. Is it possible to pass the data received from Spotify directly to NAudio and play it in real time?
using (var ms = File.OpenRead("test.pcm"))
using (var rdr = new Mp3FileReader(ms))
using (var wavStream = WaveFormatConversionStream.CreatePcmStream(rdr))
using (var baStream = new BlockAlignReductionStream(wavStream))
using (var waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()))
{
waveOut.Init(baStream);
waveOut.Play();
while (waveOut.PlaybackState == PlaybackState.Playing)
{
Thread.Sleep(100);
}
}
EDIT:
I updated my code. The program doesn't throw any errors, but I also can't hear any music. Is anything wrong in my code?
This is the music delivery callback:
public void MusicDeliveryCallback(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
//format.channels = 2, format.samplerate = 44100, format.sample_type = Int16NativeEndian
//frames = ?
//num_frames = 2048
byte[] frames_copy = new byte[num_frames];
Marshal.Copy(frames, frames_copy, 0, num_frames);
bufferedWaveProvider = new BufferedWaveProvider(new WaveFormat(format.sample_rate, format.channels));
bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(40);
bufferedWaveProvider.AddSamples(frames_copy, 0, num_frames);
bufferedWaveProvider.Read(frames_copy, 0, num_frames);
if (_waveOutDeviceInitialized == false)
{
IWavePlayer waveOutDevice = new WaveOut();
waveOutDevice.Init(bufferedWaveProvider);
waveOutDevice.Play();
_waveOutDeviceInitialized = true;
}
}
And these are the overridden callbacks in the SessionListener:
public override int MusicDelivery(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
_sessionManager.MusicDeliveryCallback(session, format, frames, num_frames);
return base.MusicDelivery(session, format, frames, num_frames);
}
public override void GetAudioBufferStats(SpotifySession session, out AudioBufferStats stats)
{
stats.samples = 2048 / 2; //???
stats.stutter = 0; //???
}
I think you can do this, as sketched after the next paragraph:
1. Create a BufferedWaveProvider.
2. Pass this to waveOut.Init.
3. In your MusicDeliveryCallback, use Marshal.Copy to copy from the native buffer into a managed byte array.
4. Pass this managed byte array to AddSamples on your BufferedWaveProvider.
5. In your GetAudioBufferStats callback, use bufferedWaveProvider.BufferedBytes / 2 for "samples" and leave "stutter" as 0.
I think that will work. It involves some unnecessary copying and doesn't accurately keep track of stutters, but it's a good starting point. I think it might be a better (more efficient and reliable) solution to implement IWaveProvider and manage the buffering yourself.
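Roughly, the shape I have in mind is the following (a sketch, not tested against libspotify; the field names are mine, and it assumes Int16NativeEndian samples as in your format dump, so each frame is channels * 2 bytes):

private BufferedWaveProvider bufferedWaveProvider;
private IWavePlayer waveOutDevice;

public void MusicDeliveryCallback(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
    // Create the provider and output device once, on the first delivery.
    if (bufferedWaveProvider == null)
    {
        bufferedWaveProvider = new BufferedWaveProvider(new WaveFormat(format.sample_rate, format.channels));
        waveOutDevice = new WaveOutEvent();
        waveOutDevice.Init(bufferedWaveProvider);
        waveOutDevice.Play();
    }
    // num_frames counts sample frames, not bytes: copy channels * 2 bytes per frame.
    int byteCount = num_frames * format.channels * 2;
    byte[] managed = new byte[byteCount];
    Marshal.Copy(frames, managed, 0, byteCount);
    bufferedWaveProvider.AddSamples(managed, 0, byteCount);
}

// And on the SessionListener, report how much audio is buffered:
public override void GetAudioBufferStats(SpotifySession session, out AudioBufferStats stats)
{
    stats.samples = bufferedWaveProvider == null ? 0 : bufferedWaveProvider.BufferedBytes / 2;
    stats.stutter = 0;
}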
I wrote the ohLibSpotify wrapper library, but I don't work for the same company anymore, so I'm no longer involved in its development. You might be able to get more help from someone on this forum: http://forum.openhome.org/forumdisplay.php?fid=6. As far as music delivery goes, ohLibSpotify aims to have as little overhead as possible. It doesn't copy the music data at all; it just passes you the same native pointer that the libspotify library itself provided, so that you can copy it yourself to its final destination and avoid an unnecessary layer of copying. It does make it a bit clunky for simple usage, though.
Good luck!
First, your code snippet shown above is more complicated than it needs to be. You only need two using statements instead of five: Mp3FileReader decodes to PCM for you. Second, use WaveOutEvent in preference to WaveOut with function callbacks; it is much more reliable.
using (var rdr = new Mp3FileReader("test.pcm"))
using (var waveOut = new WaveOutEvent())
{
//...
}
To answer your actual question: you need to use a BufferedWaveProvider. You create one of these and pass it to your output device in the Init method. Then, as you receive audio, decompress it to PCM (if it is compressed) and put it into the BufferedWaveProvider. The NAudioDemo application includes examples of how to do this, so look at the NAudio source code to see how it's done.