Reading a MemoryStream containing opus audio with NAudio - c#

I'm trying to play Opus audio files from the web, which I try to buffer with a MemoryStream. I'm aware that NAudio can take URLs directly, but I need to set cookies and a user agent before I access the file, so that eliminates that option.
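For reference, the request itself is prepared roughly like this (the cookie and user agent values here are placeholders):
var httpreq = (HttpWebRequest)WebRequest.Create(url);
httpreq.UserAgent = "MyPlayer/1.0"; // placeholder
httpreq.CookieContainer = new CookieContainer();
httpreq.CookieContainer.Add(new Cookie("session", "value", "/", "example.com")); // placeholder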
My latest approach was to buffer ~30 seconds of the stream, feed it to StreamMediaFoundationReader, and write to the same MemoryStream as needed. However, NAudio ends up playing the initial buffered segment and stopping once it completes. What would be the correct approach here?
Here's my current code if needed. (I have no idea how audio streaming works, so please go easy on me.)
bufstr.setReadStream(httpreq.GetResponse().GetResponseStream()); // bufstr is a custom class which wraps a MemoryStream I can write to.
StreamMediaFoundationReader streamread = new StreamMediaFoundationReader(bufstr.getStream());
bufstr.readToStream(); // get the initial ~30 seconds of content
waveOut.Init(streamread);
waveOut.Play();
int seconds = 0;
while (waveOut.PlaybackState == PlaybackState.Playing)
{
    Thread.Sleep(1000);
    seconds++;
    if (seconds % 30 > 15) bufstr.readToStream();
}
bufstr's readToStream method:
public void readToStream()
{
    int prevbufcount = totalbuffered; // I keep track of how many bytes I fetched from the remote url.
    // read around 30 seconds of content
    while (stream.CanRead && prevbufcount + (30 * (this.bitrate / 8)) > totalbuffered && totalbuffered != contentlength)
    {
        Console.Write($"Caching {prevbufcount + (30 * (this.bitrate / 8))}/{totalbuffered}\r");
        byte[] buf = new byte[1024];
        bufferedcount = stream.Read(buf, 0, 1024);
        totalbuffered += bufferedcount;
        memorystream.Write(buf, 0, bufferedcount);
    }
}

While debugging I found that the content length I get from the server does not match the actual size of the stream, so I ended up calculating the content length from other details the server sends.
The issue might also be a race condition, since it stopped after I manually kept track of where I write in the memory stream.

Related

Resampling raw audio with NAudio

I'd like to resample audio: change its sample rate from 44k to 11k. The input I've got is raw audio in bytes. It really is raw, it has no headers - if I try loading it into a WaveFileReader, I get an exception saying "Not a WAVE file - no RIFF header".
How I'm currently trying to achieve it is something like this (just a really simplified piece of code):
WaveFormat ResampleInputFormat = new WaveFormat(44100, 1);
WaveFormat ResampleOutputFormat = new WaveFormat(11025, 1);
MemoryStream ResampleInputMemoryStream = new MemoryStream();
foreach (var b in InputListOfBytes)
{
    ResampleInputMemoryStream.Write(new byte[] { b }, 0, 1);
}
RawSourceWaveStream ResampleInputWaveStream =
    new RawSourceWaveStream(ResampleInputMemoryStream, ResampleInputFormat);
WaveFormatConversionStream ResampleOutputStream =
    new WaveFormatConversionStream(ResampleOutputFormat, ResampleInputWaveStream);
byte[] bytes = new byte[2];
while (ResampleOutputStream.Read(bytes, 0, 2) > 0)
{
    OutputListOfBytes.Add(bytes[0]);
    OutputListOfBytes.Add(bytes[1]);
}
My problem is: the last loop is an infinite loop. The Read() always gets the same values and never advances in the Stream. I even tried Seek()-ing to the right position after each Read(), but that doesn't seem to work either, I still always get the same values.
What am I doing wrong? And is this even the right way to resample raw audio? Thanks in advance!
First, you need to reset ResampleInputMemoryStream's position to the start. It may actually have been easier to create the memory stream based on the array:
new MemoryStream(InputListOfBytes)
Second, when reading out of the resampler, you need to read in larger chunks than two bytes at a time. Try at least a second's worth of audio (use ResampleOutputStream.WaveFormat.AverageBytesPerSecond).
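Putting both fixes together, a rough sketch (assuming InputListOfBytes is a byte[] and OutputListOfBytes is a List<byte>):
var inputWave = new RawSourceWaveStream(
    new MemoryStream(InputListOfBytes), new WaveFormat(44100, 1));
using (var resampler = new WaveFormatConversionStream(new WaveFormat(11025, 1), inputWave))
{
    // read about one second per call instead of two bytes at a time
    byte[] buffer = new byte[resampler.WaveFormat.AverageBytesPerSecond];
    int read;
    while ((read = resampler.Read(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < read; i++)
            OutputListOfBytes.Add(buffer[i]);
    }
}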

File Chunking Performance in C#

I am trying to empower users to upload large files. Before I upload a file, I want to chunk it up. Each chunk needs to be a C# object. The reason why is for logging purposes. It's a long story, but I need to create actual C# objects that represent each file chunk. Regardless, I'm trying the following approach:
public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    List<FileChunk> chunks = new List<FileChunk>();
    if (fileBytes.Length > 0)
    {
        FileChunk chunk = new FileChunk();
        for (int i = 0; i < (fileBytes.Length / 512); i++)
        {
            chunk.Number = (i + 1);
            chunk.Offset = (i * 512);
            chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
            chunks.Add(chunk);
            chunk = new FileChunk();
        }
    }
    return chunks;
}
Unfortunately, this approach seems to be incredibly slow. Does anyone know how I can improve the performance while still creating objects for each chunk?
Thank you.
I suspect this is going to hurt a little:
chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
Try this instead:
byte[] buffer = new byte[512];
Buffer.BlockCopy(fileBytes, chunk.Offset, buffer, 0, 512);
chunk.Bytes = buffer;
(Code not tested)
And the reason why this code would likely be slow is because Skip doesn't do anything special for arrays (though it could). This means that every pass through your loop is iterating the first 512*n items in the array, which results in O(n^2) performance, where you should just be seeing O(n).
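For completeness, here is a sketch of the whole method with Skip/Take replaced by Buffer.BlockCopy, keeping the original 512-byte chunks and also picking up the final partial chunk (which the original loop drops):
public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    var chunks = new List<FileChunk>();
    for (int i = 0; i < fileBytes.Length; i += 512)
    {
        int size = Math.Min(512, fileBytes.Length - i); // last chunk may be short
        var chunk = new FileChunk();
        chunk.Number = (i / 512) + 1;
        chunk.Offset = i;
        chunk.Bytes = new byte[size];
        Buffer.BlockCopy(fileBytes, i, chunk.Bytes, 0, size);
        chunks.Add(chunk);
    }
    return chunks;
}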
Try something like this (untested code):
public static List<FileChunk> GetAllForFile(string fileName)
{
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open))
    {
        int i = 0;
        while (stream.Position < stream.Length)
        {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = new byte[512];
            int bytesRead = stream.Read(chunk.Bytes, 0, 512);
            if (bytesRead < 512)
                Array.Resize(ref chunk.Bytes, bytesRead); // trim the final, partial chunk
            chunks.Add(chunk);
            i++;
        }
    }
    return chunks;
}
The above code skips several steps in your process, preferring to read the bytes from the file directly.
Note that, if the file is not an even multiple of 512, the last chunk will contain less than 512 bytes.
Same as Robert Harvey's answer, but using a BinaryReader; that way I don't need to specify an offset. If you use a BinaryWriter on the other end to reassemble the file, you won't need the Offset member of FileChunk.
public static List<FileChunk> GetAllForFile(string fileName) {
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open)) {
        BinaryReader reader = new BinaryReader(stream);
        int i = 0;
        bool eof = false;
        while (!eof) {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = reader.ReadBytes(512);
            chunks.Add(chunk);
            i++;
            if (chunk.Bytes.Length < 512) { eof = true; }
        }
    }
    return chunks;
}
Have you thought about what you're going to do to compensate for packet loss and data corruption?
Since you mentioned that the load is taking a long time, I would use asynchronous file reading to speed up the loading process. The hard disk is the slowest component of a computer. Google does asynchronous reads and writes in Google Chrome to improve their load times. I had to do something like this in C# in a previous job.
The idea would be to spawn several asynchronous requests over different parts of the file. Then when a request comes in, take the byte array and create your FileChunk objects taking 512 bytes at a time. There are several benefits to this:
If you have this run in a separate thread, then you won't have the whole program waiting to load the large file you have.
You can process a byte array, creating FileChunk objects, while the hard disk is still trying to fulfill read requests on other parts of the file.
You will save RAM if you limit the number of pending read requests. This reduces page faulting to the hard disk and uses RAM and the CPU cache more efficiently, which speeds up processing further.
You would want to use the following methods in the FileStream class.
[HostProtectionAttribute(SecurityAction.LinkDemand, ExternalThreading = true)]
public virtual IAsyncResult BeginRead(
    byte[] buffer,
    int offset,
    int count,
    AsyncCallback callback,
    Object state
)

public virtual int EndRead(
    IAsyncResult asyncResult
)
Also this is what you will get in the asyncResult:
// Extract the FileStream (state) out of the IAsyncResult object
FileStream fs = (FileStream) ar.AsyncState;
// Get the result
Int32 bytesRead = fs.EndRead(ar);
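Putting the pieces together, a minimal sketch of one overlapped read (the file name and buffer size are made up; a real implementation would issue several of these at different offsets and cap how many are pending):
var fs = new FileStream("bigfile.dat", FileMode.Open, FileAccess.Read,
                        FileShare.Read, 4096, FileOptions.Asynchronous);
byte[] buffer = new byte[512 * 64]; // one request spans many 512-byte chunks
fs.BeginRead(buffer, 0, buffer.Length, ar =>
{
    // extract the FileStream (state) out of the IAsyncResult object
    FileStream stream = (FileStream)ar.AsyncState;
    int bytesRead = stream.EndRead(ar);
    // slice buffer into FileChunk objects, 512 bytes at a time, here
}, fs);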
Here is some reference material for you to read.
This is a code sample of working with Asynchronous File I/O Models.
This is a MS documentation reference for Asynchronous File I/O.

Use NAudio to get Ulaw samples for RTP

I've been looking over the NAudio examples trying to work out how I can get ulaw samples suitable for packaging up as an RTP payload. I'm attempting to generate the samples from an mp3 file using the code below. Not surprisingly, since I don't really have a clue what I'm doing with NAudio, when I transmit the samples across the network to a softphone all I get is static.
Can anyone provide any direction on how I should be getting 160 byte (8kHz @ 20ms) ULAW samples from an MP3 file using NAudio?
private void GetAudioSamples()
{
    var pcmStream = WaveFormatConversionStream.CreatePcmStream(new Mp3FileReader("whitelight.mp3"));
    byte[] buffer = new byte[2];
    byte[] sampleBuffer = new byte[160];
    int sampleIndex = 0;
    int bytesRead = pcmStream.Read(buffer, 0, 2);
    while (bytesRead > 0)
    {
        var ulawByte = MuLawEncoder.LinearToMuLawSample(BitConverter.ToInt16(buffer, 0));
        sampleBuffer[sampleIndex++] = ulawByte;
        if (sampleIndex == 160)
        {
            m_rtpChannel.AddSample(sampleBuffer);
            sampleBuffer = new byte[160];
            sampleIndex = 0;
        }
        bytesRead = pcmStream.Read(buffer, 0, 2);
    }
    logger.Debug("Finished adding audio samples.");
}
Here are a few pointers. First of all, as long as you are using NAudio 1.5, there is no need for the additional WaveFormatConversionStream - Mp3FileReader's Read method already returns PCM.
However, you will not be getting 8kHz out, so you need to resample it first. WaveFormatConversionStream can do this, although it uses the built-in Windows ACM sample rate conversion, which doesn't seem to filter the incoming audio well, so there could be aliasing artefacts.
Also, you should usually read bigger blocks than just two bytes at a time, as the MP3 decoder needs to decode frames one at a time (and the resampler will also want to deal with bigger block sizes). I would try reading at least 20ms worth of bytes at a time.
Your use of BitConverter.ToInt16 is correct for getting the 16-bit sample value, but bear in mind that an MP3 is likely stereo, with interleaved left and right samples. Are you sure your phone expects stereo?
Finally, I recommend making a mu-law WAV file as a first step, using WaveFileWriter. Then you can easily listen to it in Windows Media Player and check that what you are sending to your softphone is what you intended.
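That debugging step could look roughly like this, assuming pcmStream is already an 8kHz mono PCM stream and check.wav is a placeholder name:
using (var ulawStm = new WaveFormatConversionStream(
           WaveFormat.CreateMuLawFormat(8000, 1), pcmStream))
{
    WaveFileWriter.CreateWaveFile("check.wav", ulawStm); // then listen to check.wav
}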
Below is the way I eventually got it working. I do lose one of the channels from the mp3, and I guess there's some way to combine the channels as part of a conversion, but that doesn't matter for my situation.
The 160 byte buffer size gives me 20ms ulaw samples which work perfectly with the SIP softphone I'm testing with.
var pcmFormat = new WaveFormat(8000, 16, 1);
var ulawFormat = WaveFormat.CreateMuLawFormat(8000, 1);
using (WaveFormatConversionStream pcmStm = new WaveFormatConversionStream(pcmFormat, new Mp3FileReader("whitelight.mp3")))
{
    using (WaveFormatConversionStream ulawStm = new WaveFormatConversionStream(ulawFormat, pcmStm))
    {
        byte[] buffer = new byte[160];
        int bytesRead = ulawStm.Read(buffer, 0, 160);
        while (bytesRead > 0)
        {
            byte[] sample = new byte[bytesRead];
            Array.Copy(buffer, sample, bytesRead);
            m_rtpChannel.AddSample(sample);
            bytesRead = ulawStm.Read(buffer, 0, 160);
        }
    }
}

A problem with segmented file write

I download a file's parts (a picture), and then I want to save these parts into one file.
The problem is that the first part is downloaded and saved properly (I can see that part of the picture), but when the second part is saved (FileMode.Append) the picture appears broken.
Here's the code:
HttpWebRequest webRequest;
HttpWebResponse webResponse;
Stream responseStream;
long StartPosition, EndPosition;

if (File.Exists(LocalPath))
    fileStream = new FileStream(LocalPath, FileMode.Append);
else
    fileStream = new FileStream(LocalPath, FileMode.Create);

webRequest = (HttpWebRequest)WebRequest.Create(FileURL);
StartPosition = 0; // download the first 52062 bytes of the file
EndPosition = 52061;
webRequest.AddRange(StartPosition, EndPosition); // the range must be set before GetResponse()
webResponse = (HttpWebResponse)webRequest.GetResponse();
responseStream = webResponse.GetResponseStream();

int SeekPosition = (int)StartPosition;
while ((bytesSize = responseStream.Read(Buffer, 0, Buffer.Length)) > 0)
{
    lock (fileStream)
    {
        fileStream.Seek(SeekPosition, SeekOrigin.Begin);
        fileStream.Write(Buffer, 0, bytesSize);
    }
    // Buffer.Length is 2048. When fewer than 2048 bytes are left to download,
    // I shrink the buffer to prevent downloading more than 52062 bytes.
    DownloadedBytesCount += bytesSize;
    SeekPosition += bytesSize;
    long TotalToDownload = EndPosition - StartPosition;
    long bytesLeft = TotalToDownload - DownloadedBytesCount;
    if (bytesLeft < Buffer.Length)
        Buffer = new byte[bytesLeft];
}
When I want to download the second part of the file I set
StartPosition = 52062;
EndPosition = 104122;
and then the problem described above occurs. Why isn't the file appended properly?
You don't need StartPosition, fileStream.Seek() and Buffer = new byte[bytesLeft];
Also the lock() shouldn't be necessary (if it is you've got a lot more troubles).
So remove all that because the chances are you got some of it wrong.
And if it then still doesn't work, edit the question and provide more information. There is quite a lot missing right now:
Could you verify with the debugger whether the download loop executes at all?
How is the changeover to the 2nd range (52k - 104k) performed?
How long is the resulting file in the end?
Does the file contain the first 52k bytes or the 2nd download?
etc.
All of that matters and we shouldn't have to guess.
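For reference, the advice above boils down to a loop shaped roughly like this (same names as in the question; untested):
var webRequest = (HttpWebRequest)WebRequest.Create(FileURL);
webRequest.AddRange(StartPosition, EndPosition); // set the range before requesting
using (var responseStream = webRequest.GetResponse().GetResponseStream())
using (var fileStream = new FileStream(LocalPath, FileMode.Append)) // Append creates the file if needed
{
    var buffer = new byte[2048];
    int bytesSize;
    while ((bytesSize = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        fileStream.Write(buffer, 0, bytesSize);
}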
What I would try is to download the image some way that you know works, and then compare the bytes to see where the file gets broken and what is breaking it...
This code is wicked... sorry, but you should start by deleting all of it and looking at your problem from the beginning. There are many better ways to accomplish what you want. Take a look at a good existing solution:
http://www.codeproject.com/KB/IP/MyDownloader.aspx

How to insert characters to a file using C#

I have a huge file in which I have to insert certain characters at a specific location. What is the easiest way to do that in C# without rewriting the whole file?
Filesystems do not support "inserting" data in the middle of a file. If you really have a need for a file that can be written to in a sorted kind of way, I suggest you look into using an embedded database.
You might want to take a look at SQLite or BerkeleyDB.
Then again, you might be working with a text file or a legacy binary file. In that case your only option is to rewrite the file, at least from the insertion point up to the end.
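If the file fits in memory, the simplest whole-file version of that rewrite is just (path and insertAt are placeholders):
string text = File.ReadAllText(path);
File.WriteAllText(path, text.Insert(insertAt, "characters to insert"));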
I would look at the FileStream class to do random I/O in C#.
You will probably need to rewrite the file from the point you insert the changes to the end. You might be best always writing to the end of the file and use tools such as sort and grep to get the data out in the desired order. I am assuming you are talking about a text file here, not a binary file.
There is no way to insert characters into a file without rewriting them. With C# it can be done with any of the Stream classes. If the files are huge, I would recommend using the GNU Core Utils from your C# code. They are the fastest. I used to handle very large text files with the core utils (sizes of 4GB, 8GB or more, etc.). Commands like head, tail, split, csplit, cat, shuf, shred and uniq really help a lot with text manipulation.
For example, if you need to put some chars into a 2GB file, you can use split -b BYTECOUNT, put the output into a file, append the new text to it, and then fetch the rest of the content and append that as well. This should supposedly be faster than any other way.
Hope it works. Give it a try.
You can use random access to write to specific locations of a file, but you won't be able to do it in text format, you'll have to work with bytes directly.
If you know the specific location to which you want to write the new data, use the BinaryWriter class:
using (BinaryWriter bw = new BinaryWriter(File.Open(strFile, FileMode.Open)))
{
    string strNewData = "this is some new data";
    byte[] byteNewData = new byte[strNewData.Length];

    // copy contents of string to byte array
    for (var i = 0; i < strNewData.Length; i++)
    {
        byteNewData[i] = Convert.ToByte(strNewData[i]);
    }

    // write new data to file
    bw.Seek(15, SeekOrigin.Begin); // seek to position 15
    bw.Write(byteNewData, 0, byteNewData.Length);
}
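Incidentally, assuming the text really is plain ASCII, the byte-conversion loop above can be replaced by a single call:
byte[] byteNewData = Encoding.ASCII.GetBytes(strNewData);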
You may take a look at this project:
Win Data Inspector
Basically, the code is the following:
// this.Stream is the stream in which you insert data
{
    long position = this.Stream.Position;
    long length = this.Stream.Length;
    MemoryStream ms = new MemoryStream();
    this.Stream.Position = 0;
    DIUtils.CopyStream(this.Stream, ms, position, progressCallback);
    ms.Write(data, 0, data.Length);
    this.Stream.Position = position;
    DIUtils.CopyStream(this.Stream, ms, this.Stream.Length - position, progressCallback);
    this.Stream = ms;
}

#region Delegates
public delegate void ProgressCallback(long position, long total);
#endregion
DIUtils.cs
public static void CopyStream(Stream input, Stream output, long length, DataInspector.ProgressCallback callback)
{
    long totalsize = input.Length;
    long byteswritten = 0;
    const int size = 32768;
    byte[] buffer = new byte[size];
    int read;
    int readlen = length < size ? (int)length : size;
    while (length > 0 && (read = input.Read(buffer, 0, readlen)) > 0)
    {
        output.Write(buffer, 0, read);
        byteswritten += read;
        length -= read;
        readlen = length < size ? (int)length : size;
        if (callback != null)
            callback(byteswritten, totalsize);
    }
}
Depending on the scope of your project, you may want to hold each line of text in a table data structure, sort of like a database table. That way you can insert at a specific location at any moment without having to read in, modify, and write out the entire text file each time. This is given that your data is "huge", as you put it. You would still recreate the file, but at least you would have a scalable solution.
It may be "possible", depending on how the filesystem stores files, to quickly insert (i.e., add additional) bytes in the middle. If it is remotely possible, it is likely only feasible a full block at a time, and only by doing low-level modification of the filesystem itself or by using a filesystem-specific interface.
Filesystems are not generally designed for this operation. If you need to quickly do inserts you really need a more general database.
Depending on your application, a middle ground would be to bunch your inserts together, so you only do one rewrite of the file rather than twenty, as sketched below.
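A sketch of that batching idea, with all names made up: collect the pending inserts keyed by their offset in the original file, then splice them all in during one pass over a temporary copy.
var inserts = new SortedDictionary<long, byte[]>
{
    { 10, Encoding.ASCII.GetBytes("first insert") },
    { 500, Encoding.ASCII.GetBytes("second insert") }
};
using (var input = File.OpenRead(path))
using (var output = File.Create(path + ".tmp"))
{
    var buffer = new byte[4096];
    foreach (var pair in inserts)
    {
        long toCopy = pair.Key - input.Position; // copy up to the next insertion point
        while (toCopy > 0)
        {
            int read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, toCopy));
            output.Write(buffer, 0, read);
            toCopy -= read;
        }
        output.Write(pair.Value, 0, pair.Value.Length); // splice in the insert
    }
    input.CopyTo(output); // the rest of the original file
}
File.Delete(path);
File.Move(path + ".tmp", path);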
You will always have to rewrite the remaining bytes from the insertion point. If this point is at 0, you will rewrite the whole file; if it is 10 bytes before the last byte, you will rewrite only the last 10 bytes.
In any case there is no function that directly supports "insert into file". But the following code can do it accurately.
var sw = new Stopwatch();
var ab = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ";

// create a large test file
var fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
fs.Seek(0, SeekOrigin.Begin);
for (var i = 0; i < 40000000; i++) fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
fs.Dispose();

// insert: shift everything after `target` towards the end, block by block
// (back to front), then write the new data into the gap
fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
byte[] ins = ASCIIEncoding.ASCII.GetBytes(ab); // the data to insert
byte[] b = new byte[262144];
long target = 10;                    // insertion position
long remaining = fs.Length - target; // bytes that have to move
long offset = fs.Length;
while (remaining > 0)
{
    int block = (int)Math.Min(b.Length, remaining);
    offset -= block;
    fs.Position = offset; fs.Read(b, 0, block);
    fs.Position = offset + ins.Length; fs.Write(b, 0, block);
    remaining -= block;
}
fs.Position = target; fs.Write(ins, 0, ins.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
To gain better file I/O performance, play with "magic power-of-two numbers" like those in the code above. The file is created with a buffer of 262144 bytes (256KB), which does not help at all. The same buffer size for the insertion does the "performance job", as you can see from the Stopwatch results if you run the code. A draft test on my PC gave the following results:
13628.8 ms for creation and 3597.0971 ms for insertion.
Note that the target byte for insertion is 10, meaning that almost the whole file was rewritten.
Why don't you put a pointer to the end of the file at the insertion point, and then, at the end of the file, write the length of the inserted data followed by the data itself? For example, if you have a string in the middle of the file and want to insert a few characters into it, you can overwrite four characters of the string with a pointer to the end of the file, and then write those four characters to the end together with the characters you originally wanted to insert. It's all about ordering the data. Of course, you can only do this if you are writing the whole file yourself, i.e. you are not using other codecs.
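A rough sketch of that scheme (all names are made up; the trailer layout of length, displaced bytes, then new data is just one way to arrange it):
using (var fs = new FileStream("data.bin", FileMode.Open, FileAccess.ReadWrite))
{
    long insertAt = 100; // hypothetical insertion point
    byte[] newData = Encoding.ASCII.GetBytes("inserted text");

    // save the four bytes the pointer will overwrite
    byte[] displaced = new byte[4];
    fs.Position = insertAt;
    fs.Read(displaced, 0, 4);

    // overwrite them with the offset of the end of the file
    long endPointer = fs.Length;
    fs.Position = insertAt;
    fs.Write(BitConverter.GetBytes((int)endPointer), 0, 4);

    // at the end: length of the inserted data, the displaced bytes, the data
    fs.Position = endPointer;
    fs.Write(BitConverter.GetBytes(newData.Length), 0, 4);
    fs.Write(displaced, 0, 4);
    fs.Write(newData, 0, newData.Length);
}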
