Mp3 frame decompress with NAudio - c#

I am going through the Mp3StreamingDemo from the NAudio source demos, and I need an explanation (nothing in depth, just a few sentences, to get a general idea) about decompressing an MP3 frame.
The actual code is:
IMp3FrameDecompressor decompressor = null;
//...
if (decompressor == null)
{
    WaveFormat waveFormat = new Mp3WaveFormat(frame.SampleRate,
        frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
    // What does AcmMp3FrameDecompressor do?
    decompressor = new AcmMp3FrameDecompressor(waveFormat);
    this.bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
}
int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
I have some knowledge of the MP3 format (the overall layout, frames, etc.); I just don't understand the process of MP3 frame decompression. Specifically:
What is the AcmMp3FrameDecompressor class used for? What does the DecompressFrame method do?
I can see the code of the class, but to understand it in depth I think I'd need much more knowledge about audio itself. For now, as I said, I would appreciate just a general description.
Thank you for your time and help.

AcmMp3FrameDecompressor decompresses an MP3 frame to PCM using the ACM MP3 codec installed on your computer. All desktop versions of Windows since Windows XP ship with one, but there are some cases you might encounter where one is unavailable. NAudio also supplies a DMO-based MP3 frame decoder, which can be used on Windows Vista and newer.
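To make the flow concrete, here is a minimal sketch of the surrounding decode loop, following the same pattern as the streaming demo (the source stream and the buffer size here are assumptions):

Mp3Frame frame = Mp3Frame.LoadFromStream(mp3Stream); // parse one compressed MP3 frame
if (frame != null)
{
    if (decompressor == null)
    {
        // describe the compressed input so ACM can pick a matching codec
        WaveFormat mp3Format = new Mp3WaveFormat(frame.SampleRate,
            frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
        decompressor = new AcmMp3FrameDecompressor(mp3Format);
        bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
    }
    byte[] buffer = new byte[16384 * 4]; // comfortably larger than one decoded frame
    int decompressed = decompressor.DecompressFrame(frame, buffer, 0); // MP3 -> PCM
    bufferedWaveProvider.AddSamples(buffer, 0, decompressed); // queue PCM for playback
}

In short, DecompressFrame takes one compressed MP3 frame, writes the corresponding block of raw PCM samples into buffer, and returns how many bytes it produced.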

Related

bitmap stream to video in c# howto?

I have code that receives a video stream through the WebRTC library and shows the frames in a PictureBox. My question is: how can I get that stream from the PictureBox into a video file on my computer?
public unsafe void OnRenderRemote(byte* yuv, uint w, uint h)
{
    lock (pictureBoxRemote)
    {
        // convert the incoming I420 (YUV) frame to a 24-bit BGR buffer
        if (0 == encoderRemote.EncodeI420toBGR24(yuv, w, h, ref bgrBuffremote, true))
        {
            if (remoteImg == null)
            {
                // pin the managed buffer so the Bitmap can wrap it without copying
                var bufHandle = GCHandle.Alloc(bgrBuffremote, GCHandleType.Pinned);
                remoteImg = new Bitmap((int)w, (int)h, (int)w * 3, PixelFormat.Format24bppRgb, bufHandle.AddrOfPinnedObject());
            }
        }
    }
    try
    {
        Invoke(renderRemote, this);
    }
    catch // don't throw on form exit
    {
    }
}
This code receives the stream through WebRTC and converts it into images that are then shown in a PictureBox by calling this function. My question is:
How can I save an array or buffer of remoteImg images so I can write them to a video file on my PC?
I tried doing something like this:
FileWriter.Open("C:\\Users\\assa\\record.avi", (int)w, (int)h, (int)w * 3, VideoCodec.Default, 5000000);
FileWriter.WriteVideoFrame(remoteImg);
but it only saves a single capture, not a video. Is there any way to save the images from the stream inside the OnRenderRemote function (described above) so they end up in a video file?
OnRenderRemote only updates the PictureBox every time it is called, but I do not know how to save that flow to a video.
Thanks.
First: I don't know exactly how WebRTC works, but I can explain how you need to process the images to save them into a file.
OK, let's start: you currently only have full-sized bitmaps coming from the lib. That is fine as long as you do not care about file size and only want to show the "latest" frame. To store multiple frames in a file that we would call a "video", you need an encoder that processes those frames together.
Complicated things made simple: an encoder takes two frames, call them frame A and frame B, and compresses them so that only the changes from frame A to frame B are saved. This saves a lot of storage, because in a video we mostly care about the "changes", i.e. movements, from one frame to the next. There are quite a lot of encoders out there, but the most popular is ffmpeg; it has quite a few C# wrappers, so take a look.
Summary: to turn two or more images into a "video", you have to process them with an encoder that writes the images in a format a video player can play.
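For illustration, here is a minimal sketch using the AForge.Video.FFMPEG wrapper (which matches the FileWriter/VideoCodec calls in the question). The key is to open the writer once, write one frame per callback, and close it when recording stops; the frame rate and paths here are assumptions:

using AForge.Video.FFMPEG;

// open once, before frames start arriving
var writer = new VideoFileWriter();
writer.Open("C:\\Users\\assa\\record.avi", (int)w, (int)h, 25, VideoCodec.MPEG4, 5000000);

// then, at the end of each OnRenderRemote call:
lock (writer)
{
    writer.WriteVideoFrame(remoteImg); // appends one frame to the growing file
}

// when recording ends:
writer.Close();

Your original code saved only "a single capture" because Open/WriteVideoFrame ran once instead of once per incoming frame.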

send array of bytes to System.Media.SoundPlayer in c#

I want to send an array of bytes to the speaker, something like this:
byte[] bt = { 12, 32, 43, 74, 23, 53, 24, 54, 234, 253, 153 }; // example array
var ms = new MemoryStream(bt);
var sound = new System.Media.SoundPlayer();
sound.Stream = ms;
sound.Play();
but I get this exception:
screenshot of the exception: http://8pic.ir/images/g699b52xe5ap9s8yf0pz.jpg
The first bytes of a WAV stream contain info about the length, format, etc.
You have to send this "WAV header" as well in the first few bytes.
See http://de.wikipedia.org/wiki/RIFF_WAVE
As you'll see, it's perfectly possible to compose these few header bytes yourself and send them before your raw audio data.
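For illustration, a minimal sketch of composing such a header in C# before the raw samples (the sample rate, channel count, and bit depth are assumptions; they must match your actual data):

using System.IO;
using System.Text;

static MemoryStream ToWavStream(byte[] pcm, int sampleRate, short channels, short bitsPerSample)
{
    var ms = new MemoryStream();
    var w = new BinaryWriter(ms);
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    short blockAlign = (short)(channels * bitsPerSample / 8);
    w.Write(Encoding.ASCII.GetBytes("RIFF"));
    w.Write(36 + pcm.Length);                  // size of everything after this field
    w.Write(Encoding.ASCII.GetBytes("WAVE"));
    w.Write(Encoding.ASCII.GetBytes("fmt "));
    w.Write(16);                               // size of the fmt chunk
    w.Write((short)1);                         // 1 = uncompressed PCM
    w.Write(channels);
    w.Write(sampleRate);
    w.Write(byteRate);
    w.Write(blockAlign);
    w.Write(bitsPerSample);
    w.Write(Encoding.ASCII.GetBytes("data"));
    w.Write(pcm.Length);                       // size of the raw sample data
    w.Write(pcm);
    ms.Position = 0;
    return ms;
}

// usage: treat the example bytes as 8-bit mono PCM at 8 kHz
var sound = new System.Media.SoundPlayer();
sound.Stream = ToWavStream(bt, 8000, 1, 8);
sound.Play();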
You can use a library for reading data from the microphone or playing it to the speakers.
I worked successfully with:
NAudio - http://naudio.codeplex.com/
I would not recommend building a WAV file yourself; it may be too much effort for this.
Note that this library (and probably some others; Bass - http://www.un4seen.com is also widely used) also has built-in functionality for saving and reading WAV files.
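If you go the NAudio route, here is a minimal sketch for pushing raw bytes straight to the speakers without composing a WAV file yourself (the PCM format is an assumption):

using NAudio.Wave;

var format = new WaveFormat(8000, 16, 1);  // 8 kHz, 16-bit, mono (assumed)
var provider = new BufferedWaveProvider(format);
provider.AddSamples(bt, 0, bt.Length);     // bt is your raw PCM byte array
var output = new WaveOutEvent();
output.Init(provider);
output.Play();                             // keep output alive until playback finishes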
NAudio is a good fit for this kind of functionality. Use the sample apps it provides; they may help.

How to append image on running video using C#

I want to add images to a running video using C#.
My code is below, but it is not working:
byte[] mainAudio = System.IO.File.ReadAllBytes(Server.MapPath(imagePath)); // uploaded by user
byte[] intreAudio = System.IO.File.ReadAllBytes(Server.MapPath(videoPath)); // file selected for interruption
List<byte> int1 = new List<byte>(mainAudio);
int1.AddRange(intreAudio);
byte[] gg = int1.ToArray();
using (FileStream fs = System.IO.File.Create(Server.MapPath(@"\TempBasicAudio\myfile1.mp3")))
{
    if (gg != null)
    {
        fs.Write(gg, 0, gg.Length);
    }
}
Did it ever occur to you that a video file is not just a mindless "array of images", so you cannot simply append another byte range at the end?
Depending on the video type there is a quite complex management structure that you are ignoring. Video is a highly complex encoding.
You may have to add the images in a specific form while updating that management information, or you may even have to transcode the whole thing (decode all frames, add your images, then re-encode the whole video stream).
Maybe a book about the basics of video processing is in order now? You are like the guy asking why you cannot get more horsepower out of your car by running it on rocket fuel, totally ignoring the realities of how cars operate.
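If re-encoding is acceptable, one practical route is to let ffmpeg do the decode/overlay/re-encode step and drive it from C# (a sketch; it assumes ffmpeg is installed and on PATH, and the file names are placeholders):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    // decode input.mp4, draw logo.png at pixel (10,10) on every frame, re-encode
    Arguments = "-i input.mp4 -i logo.png -filter_complex \"overlay=10:10\" output.mp4",
    UseShellExecute = false
};
using (var p = Process.Start(psi))
{
    p.WaitForExit();
}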

NAudio - Changing Bitrate of Recorded WAV file

I am trying to implement audio recording to a WAV file using NAudio, but the default bitrate set by the WasapiLoopbackCapture class can't be changed programmatically.
I am recording the audio output to a MemoryStream (recordedStream in the snippet below). However, the default bitrate set by WasapiLoopbackCapture doesn't fit my needs.
I would like to have a bitrate of 320 kbps, and I tried to convert the recorded file programmatically using the WaveFormatConversionStream class, but I couldn't make it work.
WaveFormat targetFormat = WaveFormat.CreateCustomFormat(
    waveIn.WaveFormat.Encoding,
    waveIn.WaveFormat.SampleRate,      // sample rate
    waveIn.WaveFormat.Channels,        // channels
    320000,                            // average bytes per second
    waveIn.WaveFormat.BlockAlign,      // block align
    waveIn.WaveFormat.BitsPerSample);  // bits per sample
using (WaveStream inputStream = new RawSourceWaveStream(recordedStream, waveIn.WaveFormat))
{
    try
    {
        using (var converter = new WaveFormatConversionStream(targetFormat, inputStream))
        {
            // ...
        }
    }
    catch (Exception)
    {
        throw;
    }
    recordedStream.Dispose();
}
I always get an "AcmNotPossible calling acmStreamOpen" conversion exception. As you see, I am using exactly the same format as the recorded WAV file (extensible encoding, 44100 Hz, etc.), except the bitrate, which is lower in the target WaveFormat.
What would be the correct code to do the bitrate conversion of a WAV file contained in a MemoryStream? My goal is to get a 320 kbps file.
For a given sample rate, bit depth, and channel count, PCM audio always has the same bitrate, calculated by multiplying those three values together (for example, 44100 Hz × 16 bits × 2 channels = 1,411,200 bits/s, about 1411 kbps). If you want to reduce the bitrate, you must change one of those three values (lowering the sample rate is probably the best option, unless you can go from stereo to mono).
Really you should be thinking of encoding to a format like MP3, WMA or AAC, which will let you select your preferred bitrate.
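If MP3 is acceptable, a minimal sketch using NAudio's Media Foundation encoder (assumes Windows 8 or later; the output path is a placeholder, and note that WasapiLoopbackCapture delivers 32-bit float samples):

using NAudio.Wave;
using NAudio.MediaFoundation;

recordedStream.Position = 0;
var floatPcm = new RawSourceWaveStream(recordedStream, waveIn.WaveFormat);
var pcm16 = new WaveFloatTo16Provider(floatPcm);  // convert float capture to 16-bit PCM
MediaFoundationApi.Startup();
MediaFoundationEncoder.EncodeToMp3(pcm16, @"C:\temp\capture.mp3", 320000); // 320 kbps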

how to improve my code to make better video quality?

I am using the following code to leverage Windows Media Encoder to record the screen. I am using Windows Vista, screen resolution 1024 × 768, 32-bit. My issue is that the video records successfully, but when I play back the recording the quality is not very good; e.g. characters are very blurry. I am wondering which parameters I should tune to get better quality in the recorded video.
My code,
static WMEncoder encoder = new WMEncoder();

IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

IWMEncVideoSource2 SrcVid;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");

IWMEncFile File = encoder.File;
File.LocalFileName = "C:\\OutputFile.avi";

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Windows Media Video 8 for Local Area Network (384 Kbps)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
thanks in advance,
George
Video encoders use a certain kbit/second rate to limit the size of the generated stream. The fewer kbits/sec, the less detail you get, due to fewer coefficients surviving the DCT and bigger quantization values. In other words: the more kbits/sec you give the video, the more detail the encoder can store in the stream.
Judging by your code, you have chosen a profile that uses 384 kbit/s, which is not much for a 1024×768 video. You should try other profiles, or set the bitrate you want yourself.
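As a starting point, you can enumerate the installed profiles with the same WMEncoder objects your code already uses, then pick one with a higher bitrate than the 384 Kbps LAN profile (a sketch):

IWMEncProfileCollection proColl = encoder.ProfileCollection;
for (int i = 0; i < proColl.Count; i++)
{
    IWMEncProfile pro = proColl.Item(i);
    Console.WriteLine(pro.Name);  // look for a broadband / higher-bitrate entry
}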
