I want to change the bit rate of a WAV file.
I searched the net and found that a WAV file contains a header 44 bytes long, and that bytes 25, 26, 27 and 28 are used to store the bit rate of the file.
So I read the WAV file into an array of bytes, then changed the value of the bytes that are used to store the bit rate.
Here is the code:
private int sampleRate;
private byte[] ByteArr;
private MemoryStream ByteMem;

ByteArr = null;
ByteMem = null;
// read the whole file, 44-byte header included
ByteArr = File.ReadAllBytes(pathOfWav.Text);
// take the 4-byte value at offset 24 and double it
sampleRate = BitConverter.ToInt32(ByteArr, 24) * 2;
// write the doubled value back into the header
Array.Copy(BitConverter.GetBytes(sampleRate), 0, ByteArr, 24, 4);
ByteMem = new MemoryStream(ByteArr);
Here pathOfWav.Text is a TextBox holding the WAV file's location. I store all the bytes of the file in ByteArr, convert the four bytes at offsets 24 to 27 to an Int32, and multiply it by 2 to increase the speed of the speech, storing the result in sampleRate.
After that I write the new value of the bit rate, sampleRate, back into ByteArr, then create a new MemoryStream from it.
My question is: how do I play the new WAV stream using NAudio?
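For reference, a minimal sketch of playing such an in-memory WAV with NAudio, assuming the array still starts with a valid RIFF header (WaveFileReader can parse it from any stream):

using NAudio.Wave;

// ByteMem is the MemoryStream created above; it must contain the
// complete file, 44-byte header included
using (var reader = new WaveFileReader(ByteMem))
using (var waveOut = new WaveOutEvent())
{
    waveOut.Init(reader);
    waveOut.Play();
    while (waveOut.PlaybackState == PlaybackState.Playing)
        System.Threading.Thread.Sleep(100); // block until playback ends
}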
To change the bitrate of a WAV file you can't just update its format chunk. You actually have to re-encode it at a new sample rate / bit depth (assuming it is PCM), or with a different bitrate selected for your codec if it is not PCM. I have written an article here on converting between various audio formats, including converting between different flavours of PCM. The same article also explains what to do if you meant to change the sample rate instead of the bitrate.
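As a rough illustration of that re-encoding step, here is a minimal sketch using NAudio's MediaFoundationResampler; the file names and the 16000 Hz target rate are placeholder assumptions:

using NAudio.Wave;

// re-encode a PCM WAV to a new sample rate instead of patching the header
using (var reader = new WaveFileReader("input.wav"))
using (var resampler = new MediaFoundationResampler(reader,
           new WaveFormat(16000, reader.WaveFormat.Channels)))
{
    WaveFileWriter.CreateWaveFile("output.wav", resampler);
}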
Have you solved the issue? As per your comment, if you only need to change the sample rate, why are you using NAudio? You can use the default available players such as MediaPlayer/SoundPlayer. If so, you can refer to the code below. I have added a method for changing the sample rate. Though you could write the whole waveFormat separately or append it, I have only touched the sample rate and its dependent fields. I read the entire file, close it, and then reopen it to write part by part.
(Original Reference for 'WaveHeader Format' in C#: http://www.codeproject.com/Articles/15187/Concatenating-Wave-Files-Using-C-2005)
public void changeSampleRate(string waveFile, int sampleRate)
{
    if (waveFile == null)
    {
        return;
    }
    /* you can add additional input validation code here */

    /* open for reading */
    FileStream fs = new FileStream(waveFile, FileMode.Open, FileAccess.Read);
    /* get the channel and bits-per-sample values -> required for the calculation */
    BinaryReader br = new BinaryReader(fs);
    fs.Position = 22;
    short channels = br.ReadInt16();
    fs.Position = 34;
    short BitsPerSample = br.ReadInt16();
    byte[] arrfile = new byte[fs.Length];
    fs.Position = 0;
    fs.Read(arrfile, 0, arrfile.Length); /* read entire file */
    br.Close();
    fs.Close();

    /* now open for writing */
    fs = new FileStream(waveFile, FileMode.Open, FileAccess.Write);
    BinaryWriter bw = new BinaryWriter(fs);
    bw.BaseStream.Seek(0, SeekOrigin.Begin);
    bw.Write(arrfile, 0, 24); /* no change up to this point */
    /* waveFormat header: sample rate, byte rate, block align */
    bw.Write(sampleRate);
    bw.Write((int)(sampleRate * ((BitsPerSample * channels) / 8)));
    bw.Write((short)((BitsPerSample * channels) / 8));
    /* the rest of the data (bits per sample onwards) is unchanged */
    bw.Write(arrfile, 34, arrfile.Length - 34);
    bw.Close();
    fs.Close();
}
Now you can call the above method and play the wave file at a different sample rate:
changeSampleRate(yourWaveFileToPlay, requiredSampleRate);
MediaPlayer mp = new MediaPlayer();
mp.Open(new Uri(yourWaveFileToPlay, UriKind.Absolute));
mp.Play();
I am working with media frames (Kinect) to get colour and depth/infrared frames on UWP in real time. The aim is to store frame data on disk and process it later.
For colour, I get the pixels as bytes by using a MemoryStream.
// Get the individual color frame
var vidFrame = clrFrame?.VideoMediaFrame;
{
    if (vidFrame == null) return;
    // create a UWP SoftwareBitmap and copy the color frame into it
    SoftwareBitmap sbt = new SoftwareBitmap(vidFrame.SoftwareBitmap.BitmapPixelFormat, vidFrame.SoftwareBitmap.PixelWidth, vidFrame.SoftwareBitmap.PixelHeight);
    vidFrame.SoftwareBitmap.CopyTo(sbt);
    // PixelFormat needs to be 8-bit Bgra for colour
    if (sbt.BitmapPixelFormat != BitmapPixelFormat.Bgra8)
        sbt = SoftwareBitmap.Convert(vidFrame.SoftwareBitmap, BitmapPixelFormat.Bgra8);
    if (source != null)
    {
        var ignore = Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
        {
            extBitmap = new WriteableBitmap(sbt.PixelWidth, sbt.PixelHeight);
            sbt.CopyToBuffer(extBitmap.PixelBuffer);
            byte[] pixels = await PixelBufferToWriteableBitmap(extBitmap); // the method is async, so await it
            extBitmap.Invalidate();
            await SavePixelsToFile(pixels);
        });
    }
}
public async Task<byte[]> PixelBufferToWriteableBitmap(WriteableBitmap wb)
{
    using (Stream stream = wb.PixelBuffer.AsStream())
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            await stream.CopyToAsync(memoryStream);
            byte[] pixels = memoryStream.ToArray();
            return pixels;
        }
    }
}
The infrared pixel format is Gray16 (in SoftwareBitmap); I want to keep the raw pixel data (so no data is lost from the frame) and write it to the local folder as a ushort[] array.
Below are the links I came across on how to get/set pixels from a SoftwareBitmap. However, they cover Bgra8 to byte, and I want to convert the SoftwareBitmap into ushort.
How to set/get pixel from Softwarebitmap
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/imaging
I am new at this and not sure how to proceed.
Can someone please help?
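One possible approach, as a sketch only (not tested against the Kinect pipeline; the method name and the Length assignment are my own assumptions): copy the Gray16 SoftwareBitmap into an IBuffer, pull out the bytes, and reinterpret each pair of bytes as one 16-bit pixel.

using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.ToArray()
using Windows.Graphics.Imaging;
using Windows.Storage.Streams;

public ushort[] Gray16ToUShorts(SoftwareBitmap bmp)
{
    // Gray16 = 2 bytes per pixel
    var buffer = new Windows.Storage.Streams.Buffer((uint)(bmp.PixelWidth * bmp.PixelHeight * 2));
    buffer.Length = buffer.Capacity; // CopyToBuffer expects the buffer to cover the frame
    bmp.CopyToBuffer(buffer);
    byte[] bytes = buffer.ToArray();

    // reinterpret pairs of bytes as 16-bit pixels; nothing is lost
    ushort[] pixels = new ushort[bmp.PixelWidth * bmp.PixelHeight];
    System.Buffer.BlockCopy(bytes, 0, pixels, 0, bytes.Length);
    return pixels;
}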
EDIT
I figured out that a buffer media frame can be converted to a byte array by doing the following:
public async Task<byte[]> BufferStreamTobyte(BufferMediaFrame buffFrame)
{
    using (Stream stream = buffFrame.Buffer.AsStream())
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            await stream.CopyToAsync(memoryStream);
            byte[] pixels = memoryStream.ToArray();
            return pixels;
        }
    }
}
But I am not sure if I lose information from the infrared frame by doing this, since infrared and depth are 16 bits per pixel and a byte holds only 8 bits, whereas a ushort[] would be able to hold 16 bpp.
I am very new to this and not sure, so I hope I have gotten this right?
EDIT 2:
I have got the pixel data in a byte[].
I understand that a byte is 8 bits and a ushort is 16 bits, so I changed the lengths of the arrays:
int width = softwareBitmap.PixelWidth;
int height = softwareBitmap.PixelHeight;
int length = width * height * 2;
byte[] irbyteData = new byte[length]; // *2 to get 16 bit
var irshortData = new ushort[width * height]; // 16 bit ushort
IntPtr ptr = (IntPtr)pixelBytesAddress;
Marshal.Copy(ptr, irbyteData, 0, length); //seems to successfully get pixels from memory but not sure if this is lossless
I would try using PNGs, for example:
using (var fs = new FileStream("sample.png", FileMode.Create))
{
    BitmapSource bmpSource = BitmapSource.Create(width, height, 96, 96, PixelFormats.Gray16,
        BitmapPalettes.Gray16, buffer, width * sizeof(ushort));
    PngBitmapEncoder enc = new PngBitmapEncoder();
    enc.Frames.Add(BitmapFrame.Create(bmpSource));
    enc.Save(fs);
}
Sorry if there was a typo in the code; I wrote it from memory on a computer without an IDE.
Add System.Windows.Media.Imaging to your usings.
buffer is a ushort array with the pixel values (width * height elements, i.e. width * height * 2 bytes in total).
I found a solution for copying IntPtr data to a ushort array through the following link:
Copy from IntPtr (16 bit) array to managed ushort
I get the IntPtr address by:
using (var input = softwareBitmap.LockBuffer(BitmapBufferAccessMode.ReadWrite))
using (var inputReference = input.CreateReference())
{
    // get a raw pointer to the underlying pixel buffer
    ((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputBytes, out inputCapacity);
    IntPtr ptr = (IntPtr)inputBytes;
    Marshal.Copy(ptr, infraredbyteData, 0, length);
}
The byte array has length (width * height * 2), so it can hold the 16-bit data.
Later I convert it to ushort:
var size = infraredbyteData.Length / 2;
ushort[] output = new ushort[size]; // every ushort is 2 bytes
Buffer.BlockCopy(infraredbyteData, 0, output, 0, infraredbyteData.Length);
This seems to work!
I am using C# WPF to make a real-time FFT.
I am using NAudio's WaveIn and BufferedWaveProvider to capture any sound recorded by Stereo Mix. I take the FFT of the buffer many times per second and display it using a bitmap so that the display shows a real-time fourier transform of any audio playing through the speakers.
My problem is that, as expected, the displayed FFT lags behind the audio coming from the speakers by a small amount (maybe 200 ms).
Is there any way I can record the current audio that is supposed to be playing from the speakers, so that I can perform the FFT on it and then play it back a small amount of time later (e.g. 200 ms), while muting the original real-time audio?
The end result would be to effectively remove the perceived delay from the displayed FFT. Audio from a YouTube video, for example, would lag slightly behind the video while my program is running.
Here are the relevant methods from what I have right now:
public MainWindow()
{
    sampleSize = (int)Math.Pow(2, 13);
    BUFFERSIZE = sampleSize * 2;
    InitializeComponent();

    // get the WaveIn class started
    WaveIn wi = new WaveIn();
    wi.DeviceNumber = deviceNo;
    wi.WaveFormat = new NAudio.Wave.WaveFormat(RATE, WaveIn.GetCapabilities(wi.DeviceNumber).Channels);

    // create a wave buffer and start the recording
    wi.DataAvailable += new EventHandler<WaveInEventArgs>(wi_DataAvailable);
    bwp = new BufferedWaveProvider(wi.WaveFormat);
    bwp.BufferLength = BUFFERSIZE; // each sample is 2 bytes
    bwp.DiscardOnBufferOverflow = true;
    wi.StartRecording();
}
public void UpdateFFT()
{
    // read the bytes from the stream
    byte[] buffer = new byte[BUFFERSIZE];
    bwp.Read(buffer, 0, BUFFERSIZE);
    if (buffer[BUFFERSIZE - 2] == 0) return;

    Int32[] vals = new Int32[sampleSize];
    Ys = new double[sampleSize];
    for (int i = 0; i < vals.Length; i++)
    {
        // combine the low and high bytes into a signed 16-bit sample
        byte hByte = buffer[i * 2 + 1];
        byte lByte = buffer[i * 2 + 0];
        vals[i] = (short)((hByte << 8) | lByte);
        Ys[i] = vals[i];
    }
    FFT(Ys);
}
I am still new to audio processing - any help would be appreciated.
The cause of your delay is the latency of WaveIn, which is about 200 ms by default. You can reduce that, but at the risk of dropouts.
Whilst you can capture the audio being played by the system with WasapiCapture, there is no way to change it with NAudio, or to delay its playback.
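If capturing what the system is playing is the goal, here is a minimal sketch with NAudio's WasapiLoopbackCapture, feeding the same BufferedWaveProvider pattern as the question's code (variable names are assumptions):

using NAudio.Wave;

// capture whatever the system is rendering, without needing Stereo Mix
var capture = new WasapiLoopbackCapture();
var bwp = new BufferedWaveProvider(capture.WaveFormat)
{
    DiscardOnBufferOverflow = true
};
capture.DataAvailable += (s, e) => bwp.AddSamples(e.Buffer, 0, e.BytesRecorded);
capture.StartRecording();
// note: the loopback format is usually 32-bit IEEE float,
// so the 16-bit sample unpacking above would need adjusting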
I searched SO for a solution to my problem and didn't find one (maybe it is not a problem and it works fine, and I'm just too dumb to get it).
I have a BMP file that I try to convert into a bitmap array. Everything is fine, but I get an output file that looks weird. The image is 16x32, so I should get 512 pixels. The final image is black and white, so I should have 512 x 3 (3 colour bytes per pixel) = 1536 bytes with value 0 or 255, but I get 1590 bytes. These extra 54 bytes have values other than 0 or 255. Why? What are those values, and what does the BMP file use them for?
Code:
long time = 0;
Stopwatch watch = new Stopwatch();
watch.Start();
Image img = Image.FromFile("test.png");
byte[] data;
MemoryStream ms = new MemoryStream();
img.Save(ms, ImageFormat.Bmp);
data = ms.ToArray();
watch.Stop();
time = watch.ElapsedTicks;
Console.WriteLine(time);
FileStream file = new FileStream("test.txt", FileMode.Create, FileAccess.ReadWrite);
StreamWriter writer = new StreamWriter(file);
foreach (byte b in data)
{
writer.WriteLine(b);
}
writer.Close();
file.Close();
Console.ReadKey();
I know the code is not nice to read, but it is only a quick test.
I would say it's the file header, as TaW already pointed out. According to this website, http://www.fastgraph.com/help/bmp_header_format.html, the BMP header is 54 bytes. If you look at offsets 18 and 22, you should see the width and height (16, 32) of your picture.
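If you only want the pixel bytes, you can skip the header by reading the pixel-data offset that the header itself stores at byte 10; a small sketch against the data array from the question:

// the 4 bytes at offset 10 of a BMP hold the offset of the pixel data
int pixelDataOffset = BitConverter.ToInt32(data, 10);
byte[] pixels = new byte[data.Length - pixelDataOffset];
Array.Copy(data, pixelDataOffset, pixels, 0, pixels.Length);
// for this 16x32 24bpp image: 1590 - 54 = 1536 pixel bytes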
I'm having some trouble converting an image to a video using SharpAVI.dll.
I have managed to produce a video file using a randomly generated byte array by following the documentation on SharpAVI's website:
Getting Started with SharpAVI
So the next step I thought I would take was to load an Image, create a Bitmap from it, convert the Bitmap to a byte array, and then simply write the byte array to each frame of the video file. When I run the program I get no errors and a video file of an appropriate size is produced; however, the video file is unreadable and will not open. I'm really struggling to see why this won't work. Any help would be greatly appreciated!
My Code:
private void GenerateSingleImageVideo()
{
    string imagePath = textBoxImagePath.Text;
    Bitmap thisBitmap;

    // generate bitmap from image file
    using (Stream BitmapStream = System.IO.File.Open(imagePath, FileMode.Open))
    {
        Image img = Image.FromStream(BitmapStream);
        thisBitmap = new Bitmap(img);
    }

    // convert the bitmap to a byte array
    byte[] byteArray = BitmapToByteArray(thisBitmap);

    // create the writer of the file (to save the video)
    var writer = new AviWriter(textBoxFileName.Text + ".avi")
    {
        FramesPerSecond = int.Parse(textBoxFrameRate.Text),
        EmitIndex1 = true
    };
    var stream = writer.AddVideoStream();
    stream.Width = thisBitmap.Width;
    stream.Height = thisBitmap.Height;
    stream.Codec = KnownFourCCs.Codecs.Uncompressed;
    stream.BitsPerPixel = BitsPerPixel.Bpp32;

    int numberOfFrames = ((int.Parse(textBoxFrameRate.Text)) * (int.Parse(textBoxVideoLength.Text)));
    int count = 0;
    while (count <= numberOfFrames)
    {
        stream.WriteFrame(true, byteArray, 0, byteArray.Length);
        count++;
    }
    writer.Close();
    MessageBox.Show("Done");
}

private byte[] BitmapToByteArray(Bitmap img)
{
    ImageConverter converter = new ImageConverter();
    return (byte[])converter.ConvertTo(img, typeof(byte[]));
}
You're wrong in assuming that you can pass the output of ImageConverter (a serialized Bitmap) to the WriteFrame method. It expects raw pixel data in bottom-to-top 32bpp format. See the example:
// Buffer for pixel data
var buffer = new byte[width * height * 4];
...
// Copy pixels from Bitmap assuming it has expected 32bpp pixel format
var bits = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
Marshal.Copy(bits.Scan0, buffer, 0, buffer.Length);
bitmap.UnlockBits(bits);
You can see the code of a sample app as a reference:
https://github.com/baSSiLL/SharpAvi/blob/master/Sample/Recorder.cs
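Putting those pieces together with the question's code, a rough sketch of the frame loop; the vertical flip via RotateFlip is my assumption for meeting the bottom-up requirement:

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// flip once, since uncompressed AVI frames are stored bottom-up
thisBitmap.RotateFlip(RotateFlipType.RotateNoneFlipY);

var buffer = new byte[thisBitmap.Width * thisBitmap.Height * 4];
var bits = thisBitmap.LockBits(
    new Rectangle(0, 0, thisBitmap.Width, thisBitmap.Height),
    ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
Marshal.Copy(bits.Scan0, buffer, 0, buffer.Length);
thisBitmap.UnlockBits(bits);

// write the same raw frame repeatedly, as in the original loop
for (int n = 0; n <= numberOfFrames; n++)
    stream.WriteFrame(true, buffer, 0, buffer.Length);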
I am writing a library to interface C# with the EPL2 printer language. One feature I would like to try to implement is printing images. The specification doc says:
p1 = Width of graphic: width of the graphic in bytes. Eight (8) dots = one (1) byte of data.
p2 = Length of graphic: length of the graphic in dots (or print lines).
Data = Raw binary data without graphic file formatting. Data must be in bytes. Multiply the width in bytes (p1) by the number of print lines (p2) for the total amount of graphic data. The printer automatically calculates the exact size of the data block based upon this formula.
I plan on my source image being a 1-bit-per-pixel BMP file, already scaled to size. I just don't know how to get it from that format into a byte[] to send off to the printer. I tried ImageConverter.ConvertTo(Object, Type); it succeeds, but the array it outputs is not the correct size, and the documentation is very lacking on how the output is formatted.
My current test code:
Bitmap i = (Bitmap)Bitmap.FromFile("test.bmp");
ImageConverter ic = new ImageConverter();
byte[] b = (byte[])ic.ConvertTo(i, typeof(byte[]));
Any help is greatly appreciated even if it is in a totally different direction.
If you just need to convert your bitmap into a byte array, try using a MemoryStream:
Check out this link: C# Image to Byte Array and Byte Array to Image Converter Class
public byte[] imageToByteArray(System.Drawing.Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
    return ms.ToArray();
}
As SLaks said, I needed to use LockBits:
Rectangle rect = new Rectangle(0, 0, Bitmap.Width, Bitmap.Height);
System.Drawing.Imaging.BitmapData bmpData = null;
byte[] bitValues = null;
int stride = 0;
try
{
    bmpData = Bitmap.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly, Bitmap.PixelFormat);
    IntPtr ptr = bmpData.Scan0;
    stride = bmpData.Stride;
    int bytes = bmpData.Stride * Bitmap.Height;
    bitValues = new byte[bytes];
    // copy the raw 1bpp rows (including stride padding) out of the locked bitmap
    System.Runtime.InteropServices.Marshal.Copy(ptr, bitValues, 0, bytes);
}
finally
{
    if (bmpData != null)
        Bitmap.UnlockBits(bmpData);
}
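One remaining wrinkle for EPL2, based on the spec quoted above: LockBits pads each row (the stride) to a 4-byte boundary, while the printer expects exactly p1 = (width + 7) / 8 bytes per row, so the padding has to be trimmed. A sketch, assuming the bitValues and stride variables from the code above:

// EPL2 wants 8 dots per byte with no stride padding
int p1 = (Bitmap.Width + 7) / 8;   // width of graphic in bytes
int p2 = Bitmap.Height;            // length of graphic in print lines
byte[] eplData = new byte[p1 * p2];
for (int row = 0; row < p2; row++)
    Array.Copy(bitValues, row * stride, eplData, row * p1, p1);
// eplData now holds exactly p1 * p2 bytes of raw graphic data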