In my app I am recording voice using AudioRecorder, as shown on the linked Audio Recorder page. It works, but it produces large WAV files.
For example, one minute of recorded audio takes 4–5 MB, so I want to convert the WAV file to MP3 to reduce the file size. Please help me compress the WAV file and give an example. Thanks in advance.
I have never converted audio files myself, but I looked up some threads that might be helpful to you.
One converts WAV to MP3, which requires reading the file into a byte[]:
// Requires NAudio plus NAudio.Lame (for LameMP3FileWriter)
public byte[] ConvertToMp3(Uri uri)
{
    using (var client = new WebClient())
    {
        var file = client.DownloadData(uri);
        var target = new WaveFormat(8000, 16, 1);   // 8 kHz, 16-bit, mono
        using (var outPutStream = new MemoryStream())
        using (var waveStream = new WaveFileReader(new MemoryStream(file)))
        using (var conversionStream = new WaveFormatConversionStream(target, waveStream))
        using (var writer = new LameMP3FileWriter(outPutStream, conversionStream.WaveFormat, 32, null))
        {
            conversionStream.CopyTo(writer);        // encode to MP3 at 32 kbps
            return outPutStream.ToArray();
        }
    }
}
However, that method downloads the WAV file from a remote URI before converting it, and this alone does not guarantee that the file size will be reduced.
I have also checked that you can compress WAV files using a library called zlib, and just decompress them whenever you need them.
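As a rough illustration of that zlib approach (my own sketch, not from the original answer): .NET's built-in DeflateStream uses the same DEFLATE algorithm as zlib, so no extra library is needed. Note that lossless compression typically shrinks PCM audio far less than a lossy codec like MP3.

```csharp
using System.IO;
using System.IO.Compression;

// Losslessly compress raw WAV bytes with DEFLATE (the algorithm zlib uses).
static byte[] Compress(byte[] wavData)
{
    using (var output = new MemoryStream())
    {
        using (var deflate = new DeflateStream(output, CompressionLevel.Optimal))
            deflate.Write(wavData, 0, wavData.Length);
        return output.ToArray();
    }
}

// Restore the original bytes before playing them back.
static byte[] Decompress(byte[] compressed)
{
    using (var input = new MemoryStream(compressed))
    using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        deflate.CopyTo(output);
        return output.ToArray();
    }
}
```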
Please check the links below:
How to convert wav file to mp3 in memory?
Reducing WAV sound file size, without losing quality
Before applying any sound modification (using the sample frames), I'm trying to simply read a Wav file, and write out an identical one using the contents:
using (WaveFileReader reader = new WaveFileReader(@"input.wav"))
{
    using (WaveFileWriter writer = new WaveFileWriter(@"output.wav", reader.WaveFormat))
    {
        while (true)
        {
            var frame = reader.ReadNextSampleFrame();
            if (frame == null)
                break;
            writer.WriteSample(frame[0]);
        }
    }
}
I'd expect this to write an identical WAV file, but it actually writes a surprisingly larger file with a very different sound (as if it had been suppressed).
Any ideas?
It's because you are converting to floating point samples, and back to whatever the source format was, and paying no attention to the number of channels. To do what you want you should just use the Read method to read into a byte array and write the same data into the writer.
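A minimal sketch of that byte-level copy, assuming NAudio's WaveFileReader/WaveFileWriter as in the question (the file names are placeholders):

```csharp
using NAudio.Wave;

// Copy the raw PCM bytes unchanged, preserving the format and channel count.
using (var reader = new WaveFileReader(@"input.wav"))
using (var writer = new WaveFileWriter(@"output.wav", reader.WaveFormat))
{
    var buffer = new byte[reader.WaveFormat.AverageBytesPerSecond]; // ~1 second per read
    int bytesRead;
    while ((bytesRead = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        writer.Write(buffer, 0, bytesRead);
    }
}
```

Because no sample conversion happens, the output should be byte-for-byte identical audio data.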
I'm developing a WPF application, where I have to play audio. I receive the audio data in .mp4 format (in a byte array) and the only restriction is that I can't write it out to the hard disk.
I found couple of solutions for playing the .mp4 format, for example with WMPLib.WindowsMediaPlayer, but I can't give a byte array, or stream to this library, to play the audio. It just accepts the file path.
Then I found the System.Media.SoundPlayer, which can play audio from a stream, but just in .wav format. I started to search for solutions to convert from mp4 to wav. I found the NAudio library and I could make the conversion the following way:
using (var data = new MediaFoundationReader(filePath))
{
    var stream = new MemoryStream();
    WaveFileWriter.WriteWavFileToStream(stream, data);
}
The problem with this is that MediaFoundationReader can only be instantiated with a file path parameter. I didn't find any way to create it without using files, so I think this is also a dead end.
So, any suggestion would be helpful about how can I convert audio in memory, or maybe how can I play directly the .mp4 file from a byte array or stream?
You can convert between audio formats with NAudio, using readers such as MediaFoundationReader.
See samples like: How to convert a MP3 file to WAV with NAudio in WinForms C#
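For the in-memory case specifically, NAudio also ships a stream-based variant, StreamMediaFoundationReader, which sidesteps the file-path restriction mentioned in the question (worth verifying against your NAudio version; the sketch below is my own):

```csharp
using System.IO;
using NAudio.Wave;

byte[] mp4Data = File.ReadAllBytes("input.mp4");    // hypothetical source of the bytes
using (var reader = new StreamMediaFoundationReader(new MemoryStream(mp4Data)))
using (var wavStream = new MemoryStream())
{
    // Decode the MP4 audio and write a WAV image entirely in memory
    WaveFileWriter.WriteWavFileToStream(wavStream, reader);
    wavStream.Position = 0;
    // wavStream now holds WAV data that System.Media.SoundPlayer can play
}
```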
Finally, I found a solution that converts to .mp3 format (it can also convert to .wav) using the UWP transcode API:
public static async void ConvertMp4ToMp3(byte[] mp4Data, Action<Stream> doneCallback)
{
    MediaEncodingProfile profile = MediaEncodingProfile.CreateMp3(AudioEncodingQuality.High);
    var inputStream = new MemoryRandomAccessStream(mp4Data);
    var outputStream = new InMemoryRandomAccessStream();
    MediaTranscoder transcoder = new MediaTranscoder();
    PrepareTranscodeResult prepareOperation = await transcoder.PrepareStreamTranscodeAsync(inputStream, outputStream, profile);
    if (prepareOperation.CanTranscode)
    {
        // start the conversion
        var transcodeOperation = prepareOperation.TranscodeAsync();
        // register the completed event handler
        transcodeOperation.Completed += (IAsyncActionWithProgress<double> asyncInfo, AsyncStatus status) =>
        {
            asyncInfo.GetResults();
            var stream = outputStream.AsStream();
            stream.Position = 0;
            doneCallback(stream);
        };
    }
    else
    {
        doneCallback(null);
    }
}
The imports:
using System;
using System.IO;
using Windows.Foundation;
using Windows.Media.MediaProperties;
using Windows.Media.Transcoding;
using Windows.Storage.Streams;
And MemoryRandomAccessStream is just an implementation of the IRandomAccessStream interface and can be found here.
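As an alternative to a hand-written wrapper (an assumption on my part, not part of the original answer), .NET's WindowsRuntime stream extensions can wrap a MemoryStream directly:

```csharp
using System.IO;
using Windows.Storage.Streams;

// Requires a reference to System.Runtime.WindowsRuntime;
// AsRandomAccessStream is an extension method in the System.IO namespace.
byte[] mp4Data = File.ReadAllBytes("input.mp4");    // hypothetical source of the bytes
IRandomAccessStream inputStream = new MemoryStream(mp4Data).AsRandomAccessStream();
```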
I'm trying to convert a mp3 into a wave stream using NAudio. Unfortunately I receive the error Not a WAVE file - no RIFF header on the third line when creating the wave reader.
var mp3Reader = new Mp3FileReader(mp3FileLocation);
var pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
var waveReader = new WaveFileReader(pcmStream);
Shouldn't these streams work together properly? My goal is to combine several mp3s and wavs into a single stream for both playing and saving to disc( as a wav).
I'm going to preface this with a note that I've never used NAudio. Having said that, there's a guide to concatenating audio on their GitHub site.
Assuming you have an IEnumerable<string> (aka List or array) of the filenames you want to join named files:
var sampleList = new List<ISampleProvider>();
foreach (string file in files)
{
    sampleList.Add(new AudioFileReader(file));
}
WaveFileWriter.CreateWaveFile16("outfilenamegoeshere.wav", new ConcatenatingSampleProvider(sampleList));
You don't need the second and third lines: Mp3FileReader decodes to PCM for you, and you can play it directly with a player such as WaveOutEvent. To actually produce a WAV file on disk, pass the reader and an output path into WaveFileWriter.CreateWaveFile:
using (var reader = new Mp3FileReader(mp3FileLocation))
{
    WaveFileWriter.CreateWaveFile("output.wav", reader);
}
I want to resample an audio byte array from 8 kHz to 48 kHz. The audio stream is obtained as a byte[] from a network socket.
Reading Mark Heath's Blog about resampling using NAudio, I came across the following code
int outRate = 16000;
var inFile = @"test.mp3";
var outFile = @"test resampled WDL.wav";
using (var reader = new AudioFileReader(inFile))
{
    var resampler = new WdlResamplingSampleProvider(reader, outRate);
    WaveFileWriter.CreateWaveFile16(outFile, resampler);
}
But this code acts on a file stream (AudioFileReader) rather than in memory data (byte[]). How could I modify this code to up-sample my byte array?
Edit: basically I want to up-sample the 8 kHz data obtained from a network peer to 48 kHz and play it using WASAPI.
Your input to the resampler could be a BufferedWaveProvider or a RawSourceWaveStream. You can't use CreateWaveFile16 to resample in real-time though. You'd need to read only the amount of audio you expect to be available and write it to the WAV file.
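A minimal sketch of that in-memory setup, assuming the network bytes are 16-bit mono PCM at 8 kHz (the variable names and the GetNetworkAudio helper are my own, not from the question):

```csharp
using System.IO;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

byte[] audioBytes = GetNetworkAudio();              // hypothetical: raw PCM from the peer
var sourceFormat = new WaveFormat(8000, 16, 1);     // 8 kHz, 16-bit, mono

// Wrap the raw bytes as a wave stream, then resample to 48 kHz
var rawSource = new RawSourceWaveStream(new MemoryStream(audioBytes), sourceFormat);
var resampler = new WdlResamplingSampleProvider(rawSource.ToSampleProvider(), 48000);

// For real-time WASAPI playback, feed the resampler to WasapiOut instead of a file
using (var output = new WasapiOut())
{
    output.Init(resampler.ToWaveProvider());
    output.Play();
    // keep the app alive while playing, e.g. wait for the PlaybackStopped event
}
```

For a continuous network stream, a BufferedWaveProvider that you keep topping up from the socket would take the place of the MemoryStream here.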
I am trying to merge two .wav files into another file, but I can only hear the first file's data in the created file,
even though the newly created file occupies space equal to the sum of the sizes of the source files.
foreach (string sourceFile in fileNamesList)
{
    FileStream file = File.Open(sourceFile, FileMode.Open);
    FileStream outFile = File.Open(output, FileMode.Append, FileAccess.Write);
    byte[] buffer = new byte[file.Length];
    int read;
    if ((read = file.Read(buffer, 0, (int)file.Length)) > 0)
    {
        outFile.Write(buffer, 0, read);
    }
    file.Close();
    file.Dispose();
    outFile.Close();
    outFile.Dispose();
}
thanks
You can't just concatenate two WAV files because they have a header which defines the format, number of channels, sample rate, length etc.
You will need to read and parse the header of each separate WAV file, write a new header with the correct data to a new file, and then append the data contents of each WAV file.
You will not easily be able to concatenate two WAV files which have different sample rates or number of channels, but otherwise it's not too hard (once you've worked out the header format).
See here for details about the header format:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
http://en.wikipedia.org/wiki/WAV
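As a rough sketch of the manual approach (assuming both files share the same format and have canonical 44-byte PCM headers with a single data chunk; real WAV files may contain extra chunks, which this ignores):

```csharp
using System.IO;

// Concatenate the data chunks of two canonical PCM WAV files,
// then patch the header size fields to cover the combined data.
static void MergeWav(string first, string second, string output)
{
    byte[] a = File.ReadAllBytes(first);
    byte[] b = File.ReadAllBytes(second);

    const int headerSize = 44;                      // canonical PCM header only
    int dataLength = (a.Length - headerSize) + (b.Length - headerSize);

    using (var writer = new BinaryWriter(File.Create(output)))
    {
        writer.Write(a, 0, headerSize);             // reuse the first file's header...
        writer.Write(a, headerSize, a.Length - headerSize);
        writer.Write(b, headerSize, b.Length - headerSize);

        // ...then patch the RIFF chunk size (offset 4) and data chunk size (offset 40)
        writer.Seek(4, SeekOrigin.Begin);
        writer.Write(36 + dataLength);
        writer.Seek(40, SeekOrigin.Begin);
        writer.Write(dataLength);
    }
}
```

This is exactly why the naive byte concatenation in the question plays only the first file: the first header's data chunk size still covers only the first file's samples.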
Perhaps the easiest way is to use a third-party library such as NAudio to do this, as described here:
How to join 2 or more .WAV files together programmatically?