Solved, thanks. It's base64 and this works:
byte[] waveBytes = System.Convert.FromBase64String(columns[6]);
Thanks again.
I have a TSV file that stores audio as strings along with the related wave info, one wave and its info per line. I need to read each line, extract the audio, and save each wave as a separate WAV file.
A sample wave string looks like this:
UklGRpgiAABXQVZFZm10ICwAAACUAgEAgD4AAPAKAAA4AAAAGgABAA8AKACOAgEAgD4AANAHAAAoAAAAAgBAAWRhdGFYIgAAQMQKQPQTQTQSQTQSQTQUQYNBVBNBJBJBJBNBJBRBRKFEE0EkE0FEFEE0FEFUFQAAAAAAAAAAAADACOmRY92lbj7+7kGhMFC3V9I3qMyjX2G8vAclkKFxUlD26mS+1qCRMV4OuVCxXf/IxrFBj///9sAG0iRqqUOIIRKT/4vqBtdWJF6pI/mWgPFx6JlUIFUPm6gofbyf93hJ6NCbgja88uTflydp///
And I tried to read this line and use:
byte[] waveContext = Encoding.Default.GetBytes(columns[6]);
File.WriteAllBytes(waveFullPath, waveContext);
But the output file contains just the same string.
Does anybody have ideas on how to handle this?
Many thanks.
Looks like a base64-encoded string.
byte[] waveBytes = System.Convert.FromBase64String(base64EncodedData);
I don't know offhand how to build up a wave file. This may help you: MSDN: Creating WAV files in C#. (A 10-second Google lookup.)
There is no such thing as "audio stored as original string".
Audio data is binary: usually (these days) two channels of 16-bit samples, interleaved,
prepended by a short (binary) header which contains the number of channels and the sampling rate.
Read about the "RIFF WAVE" file format.
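Putting the pieces together, here is a minimal sketch of the full TSV-to-WAV flow. It assumes, as in the question, that the base64 column is at index 6; the input and output file names are hypothetical.
using System;
using System.IO;

int index = 0;
foreach (string line in File.ReadLines("waves.tsv"))
{
    string[] columns = line.Split('\t');
    // the decoded bytes already begin with the RIFF/WAVE header
    // ("UklGR..." in the sample decodes to "RIFF...")
    byte[] waveBytes = Convert.FromBase64String(columns[6]);
    File.WriteAllBytes($"wave{index++}.wav", waveBytes);
}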
I am an audio noob
I am looking to embed audio in an HTML page by passing the data as a string, such as:
<audio src="data:audio/wav;base64,AA....." />
Doing that works, but I need to raise the volume. I tried working with NAudio, but it seems to do some conversion after which the audio will no longer play. This is the code I use to raise the volume:
public string ConvertToString(Stream audioStream)
{
    audioStream.Seek(0, SeekOrigin.Begin);
    byte[] bytes = new byte[audioStream.Length];
    audioStream.Read(bytes, 0, (int)audioStream.Length);
    audioStream.Seek(0, SeekOrigin.Begin);
    return Convert.ToBase64String(bytes);
}
var fReader = new WaveFileReader(strm);
var chan32 = new WaveChannel32(fReader, 50.0F, 0F);
var outputString = "data:audio/wav;base64," + ConvertToString(chan32);
but when I put outputString into an audio tag it fails to play. What type of transformation does NAudio do, and how can I get it to give me the audio stream in such a way that I can serialize it and the browser will be able to play it?
Or, alternatively: if NAudio is too heavyweight for something as simple as raising the volume, what's a better option for me?
I'm no expert in embedding WAV files in web pages (and to be honest it doesn't seem like a good idea - WAV is one of the most inefficient ways of delivering sound to a web page), but I'd expect that the entire WAV file, including headers, needs to be encoded. You are just writing out the sample data. So with NAudio you'd need to use a WaveFileWriter writing into a MemoryStream or a temporary file to create a volume-adjusted WAV file that can be written to your page.
There are two additional problems with your code. One is that you have gone to 32-bit floating point, making the WAV format even more inefficient (doubling the size of the original file). You need to use Wave32To16Stream to go back to 16 bit before creating the WAV file. The second is that you are multiplying each sample by 50. This will almost certainly horribly distort the signal. Clipping can very easily occur when amplifying a WAV file, and it depends on how much headroom there is in the original recording. Often dynamic range compression is a better option than simply increasing the volume.
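To illustrate, here is a minimal sketch of that pipeline, assuming NAudio's classic WaveStream classes (WaveFileReader, WaveChannel32, Wave32To16Stream, WaveFileWriter); the helper name and the buffer size are my own choices:
using System;
using System.IO;
using NAudio.Wave;

static byte[] CreateVolumeAdjustedWav(Stream inputWav, float volume)
{
    using (var reader = new WaveFileReader(inputWav))
    using (var chan32 = new WaveChannel32(reader) { Volume = volume })
    using (var pcm16 = new Wave32To16Stream(chan32))   // back to 16-bit PCM
    {
        var output = new MemoryStream();
        using (var writer = new WaveFileWriter(output, pcm16.WaveFormat))
        {
            byte[] buffer = new byte[4096];
            int read;
            // older NAudio versions may call this method WriteData instead of Write
            while ((read = pcm16.Read(buffer, 0, buffer.Length)) > 0)
                writer.Write(buffer, 0, read);
        }
        // ToArray is still valid after the writer closes the MemoryStream
        return output.ToArray();
    }
}

// usage: "data:audio/wav;base64," + Convert.ToBase64String(CreateVolumeAdjustedWav(strm, 1.5f))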
I have a byte[] array and want to write it to stdout: Console.Out.Write(arr2str(arr)). How to convert byte[] to string, so that app.exe > arr.txt does the expected thing? I just want to save the array to a file using a pipe, but encodings mess things up.
I'd later want to read that byte array from stdin: app.exe < arr.txt and get the same thing.
How can I do these two things: write and read byte arrays to/from stdin/stdout?
EDIT:
I'm reading with string s = Console.In.ReadToEnd(), and then System.Text.Encoding.Default.GetBytes(s). I'm converting from array to string with System.Text.Encoding.Default.GetString(bytes), but this doesn't work when used with < and >. By "doesn't work" I mean that writing and then reading over a pipe does not return the same bytes.
To work with binary files you want Console.OpenStandardInput() to retrieve a Stream that you can read from. This has been covered in other threads here at SO, this one for example: Read binary data from Console.In
If you are writing with Console.WriteLine you need to encode the data in a printable format. If you want to output binary to a file, you can't use Console.WriteLine.
If you still need to output to the console, you either need to open the raw stream with Console.OpenStandardOutput() or call Convert.ToBase64String to turn the byte array into a string. There is also Convert.FromBase64String to come back from base64 to a byte array.
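A minimal sketch of the raw-stream round trip, assuming you simply want the bytes echoed through unchanged:
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // read all bytes from stdin (app.exe < arr.txt)
        byte[] data;
        using (var stdin = Console.OpenStandardInput())
        using (var ms = new MemoryStream())
        {
            stdin.CopyTo(ms);
            data = ms.ToArray();
        }

        // write the bytes to stdout unchanged (app.exe > arr.txt)
        using (var stdout = Console.OpenStandardOutput())
        {
            stdout.Write(data, 0, data.Length);
        }
    }
}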
I need to get the sample values of the sound data in a WAV file, so that from those sample values I can compute the amplitude of the sound in every second.
Important: Is there any way to get audio sample values using the NAudio library or the wmp library?
I am getting the sample values in this way:
byte[] data = File.ReadAllBytes(File_textBox.Text);
var samples = new int[data.Length];
int x = 0;
for (int i = 44; i < data.Length; i += 2)
{
    samples[x] = BitConverter.ToInt16(data, i);
    x++;
}
But I am getting negative values, some large (like -326260). So is this right or wrong?
I mean, can a sample value be negative or not, and if it is correct, what does it mean: sound or silence?
NAudio can do this for you, it's in the library (I think it's the WaveStream or WaveReader class, or something similar). I can recommend its use, if it's not too much overhead.
If you want to roll-your-own, and want to deal with arbitrary wave files, you'll have to read up on the WAV file format, and analyze the header yourself.
Although in general a WAV file contains 16-bit samples, it doesn't have to, and depending on the exact format, they might be stored in little endian or big endian.
The header contains information about sample rate, number of channels, bits per sample, bytes per sample and such like, which allow you to do the actual math to get exactly one sample.
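For the NAudio route, a minimal sketch, assuming a 16-bit PCM input file (the file name is hypothetical):
using System;
using NAudio.Wave;

using (var reader = new WaveFileReader("input.wav"))
{
    // WaveFormat exposes the header fields: sample rate, channels, bits per sample
    int bytesPerSample = reader.WaveFormat.BitsPerSample / 8; // 2 for 16-bit audio
    byte[] buffer = new byte[reader.WaveFormat.AverageBytesPerSecond]; // one second
    int read;
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        int peak = 0;
        for (int i = 0; i + 1 < read; i += bytesPerSample)
        {
            // samples are signed 16-bit values, so negative values are perfectly normal
            short sample = BitConverter.ToInt16(buffer, i);
            peak = Math.Max(peak, Math.Abs((int)sample));
        }
        Console.WriteLine(peak); // simple peak-amplitude measure for this second
    }
}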
If I have a single MemoryStream to which I know I sent multiple files (say, 5 files), is it possible to read from this MemoryStream and break it apart file by file?
My gut tells me no, since when we Read we are reading byte by byte... Any help and a possible snippet would be great. I haven't been able to find anything on Google or here :(
You can't directly, not if you don't delimit the files in some way or know the exact size of each file as it was put into the buffer.
You can use a compressed file such as a zip file to transfer multiple files instead.
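For the zip route, a minimal sketch using System.IO.Compression (the file names are hypothetical):
using System.IO;
using System.IO.Compression;

var ms = new MemoryStream();
using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
{
    zip.CreateEntryFromFile("first.wav", "first.wav");
    zip.CreateEntryFromFile("second.wav", "second.wav");
}
ms.Position = 0;

// reading back: each entry is one file, with its name and length preserved
using (var zip = new ZipArchive(ms, ZipArchiveMode.Read))
{
    foreach (var entry in zip.Entries)
    {
        using (var entryStream = entry.Open())
        using (var output = File.Create(entry.Name))
        {
            entryStream.CopyTo(output);
        }
    }
}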
A stream is just a line of bytes. If you put the files next to each other in the stream, you need to know how to separate them. That means you must know the length of each file, or you should have used some separator. Some (most) file types have a kind of header, but searching for headers in an entire stream may not be foolproof either, since the header of one file could just as well be data inside another file.
So, if you need to write files to such a stream, it is wise to add some extra information. For instance, start with a version number, then write the size of the first file, write the file itself, then write the size of the next file, and so on.
By starting with a version number, you can make alterations to this format. In the future you may decide you need to store the file name as well. In that case, you can increase the version number, make up a new format, and still be able to read streams that you created earlier.
This is of course especially useful if you store these streams too.
Since you're sending them, you'll have to send them into the stream in such a way that you'll know how to pull them out. The most common way of doing this is to use a length specification (see the sketch after this list). For example, to write the files to the stream:
write an integer to the stream to indicate the number of files
Then for each file,
write an integer (or a long if the files are large) to indicate the number of bytes in the file
write the file
To read the files back,
read an integer (n) to determine the number of files in the stream
Then, iterating n times,
read an integer (or long if that's what you chose) to determine the number of bytes in the file
read the file
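A minimal sketch of that length-prefixed layout, using BinaryWriter and BinaryReader (the helper names are my own):
using System.Collections.Generic;
using System.IO;

static void WriteFiles(Stream output, IList<byte[]> files)
{
    var writer = new BinaryWriter(output);
    writer.Write(files.Count);              // number of files
    foreach (byte[] file in files)
    {
        writer.Write(file.Length);          // byte count of this file
        writer.Write(file);                 // the file itself
    }
    writer.Flush();
}

static List<byte[]> ReadFiles(Stream input)
{
    var reader = new BinaryReader(input);
    int count = reader.ReadInt32();         // number of files
    var files = new List<byte[]>(count);
    for (int i = 0; i < count; i++)
    {
        int length = reader.ReadInt32();         // byte count
        files.Add(reader.ReadBytes(length));     // the file itself
    }
    return files;
}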
You could use an IEnumerable<Stream> instead.
You need to implement this yourself. What you would want to do is write some sort of delimiter into the stream; as you're reading, look for that delimiter, and you'll know when you have hit a new file.
Here's a quick and dirty example:
byte[] delimiter = System.Text.Encoding.Default.GetBytes("++MyDelimiter++");

// writing: separate the files with the delimiter
ms.Write(myFirstFile, 0, myFirstFile.Length);
ms.Write(delimiter, 0, delimiter.Length);
ms.Write(mySecondFile, 0, mySecondFile.Length);
// ...

// reading: scan for the delimiter (quick and dirty - assumes the delimiter
// never straddles two reads and never occurs inside a file's data)
// SequenceEqual requires using System.Linq
byte[] buffer = new byte[delimiter.Length];
int len;
do
{
    len = ms.Read(buffer, 0, delimiter.Length);
    if (len == delimiter.Length && buffer.SequenceEqual(delimiter))
    {
        // close the current output file and open a new one
    }
    else
    {
        // write buffer (len bytes) to the current output stream
    }
} while (len > 0);
I have recorded audio to a CaptureBuffer, but I can't figure out how to save it as a WAV file. I tried this (http://www.tek-tips.com/faqs.cfm?fid=4782), but it didn't work, or I didn't use it properly. Does anybody know how to solve this? Sample code would be much appreciated.
A WAV file is a RIFF file consisting of two main "chunks". The first is a format chunk, describing the format of the audio. This will be what sample rate you recorded at (e.g. 44.1kHz), what bit depth (e.g. 16 bits) and how many channels (mono or stereo). WAV also supports compressed formats but it is unlikely you recorded the audio compressed, so your record buffer will contain PCM audio.
Then there is the data chunk. This is the part of the WAV file that contains the actual audio data in your capture buffer. This must be in the format described in the format chunk.
As part of the NAudio project I created a WaveFileWriter class to simplify creating WAV files. You pass in a WaveFormat that describes the format of your captured audio. Then you can simply write the raw captured data in.
Here's some simple example code for how you might use WaveFileWriter:
WaveFormat format = new WaveFormat(16000, 16, 1); // mono 16-bit audio captured at 16kHz
using (var writer = new WaveFileWriter("out.wav", format))
{
    // captureBuffer here is the byte[] you read out of your capture buffer
    writer.WriteData(captureBuffer, 0, captureBuffer.Length);
}