Edit WAV File Samples for Increasing Amplitude using C#

I have the following C# code, which reads a WAV file into a byte array and then writes it into another array, creating a new WAV file. I found it online and don't have much knowledge about it.
I want to change the values being written to the new array so that the amplitude is increased by some value 'x' in the generated WAV file. Where and what changes should be made?
private void ReadWavfiles(string fileName)
{
    byte[] fa = File.ReadAllBytes(fileName);
    int startByte = 0;
    // look for the "data" chunk header
    for (var x = 0; x < fa.Length; x++)
        if (fa[x] == 'd' && fa[x + 1] == 'a' && fa[x + 2] == 't' && fa[x + 3] == 'a')
        {
            startByte = x + 8;
            break;
        }
    var buff = new byte[fa.Length / 2];
    var y = 0;
    var length = fa.Length;
    for (int s = startByte; s < length; s = s + 2)
        buff[y++] = (byte)(fa[s + 1] * 0x100 + fa[s]);
    write(buff, "D:\\as1.wav", "D:\\us1.wav");
    WaveOut obj = new WaveOut();
    obj.Init(new WaveFileReader("D:\\as1.wav"));
    obj.Play();
}

private void write(byte[] buffer, string fileOut, string fileIn)
{
    WaveFileReader reader = new WaveFileReader(fileIn);
    int numChannels = reader.WaveFormat.Channels, sampleRate = reader.WaveFormat.SampleRate;
    int BUFFER_SIZE = 1024 * 16;
    using (WaveFileWriter writer = new WaveFileWriter(fileOut, new WaveFormat(sampleRate, 16, numChannels)))
    {
        int bytesRead;
        while (true)
        {
            bytesRead = reader.Read(buffer, 0, BUFFER_SIZE);
            if (bytesRead != 0)
                writer.Write(buffer, 0, bytesRead);
            else break;
        }
        writer.Close();
    }
}
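For reference on where the amplitude change would go: each pair of bytes after the data header is one little-endian 16-bit sample, so a gain is applied by decoding the sample, scaling it, and clamping to the short range before re-encoding the two bytes. A minimal sketch (the class and method names are illustrative, not part of the code above):

```csharp
using System;

static class GainSketch
{
    // Scale 16-bit little-endian PCM samples in place by 'gain',
    // clamping to the short range to avoid wrap-around distortion.
    public static void ApplyGain(byte[] pcm, int dataOffset, double gain)
    {
        for (int i = dataOffset; i + 1 < pcm.Length; i += 2)
        {
            short sample = (short)(pcm[i] | (pcm[i + 1] << 8));
            int scaled = (int)(sample * gain);
            if (scaled > short.MaxValue) scaled = short.MaxValue;
            if (scaled < short.MinValue) scaled = short.MinValue;
            pcm[i] = (byte)(scaled & 0xFF);
            pcm[i + 1] = (byte)((scaled >> 8) & 0xFF);
        }
    }
}
```

Without the clamp, samples that overflow the short range wrap around and produce loud clicks, which is why the saturation step matters more than the multiplication itself.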

Related

opus and NAudio streaming out of sync

I am adding VoIP to the game. Unity's Microphone class is not supported in WebGL, is already slow, and gives floats instead of bytes. Some people suggested I use a codec, i.e. Opus, and I found a wrapper for it along with a demo that used NAudio. I was fairly happy with it; it used some extra loops which, after removing, gave the same result. It produced 4000 bytes at a 48k sample rate, which I reduced to 8k with a max buffer size of 350. Here's the code for that script:
private void Start()
{
    //StartEncoding();
    UnityEditor.EditorApplication.playmodeStateChanged = PlayModeStateChangedHandler;
}

private void PlayModeStateChangedHandler()
{
    if (UnityEditor.EditorApplication.isPaused)
    {
        StopEncoding();
    }
}

public void StartGame()
{
    StartEncoding();
}
private void StartEncoding()
{
    _client = FindObjectOfType<Client>();
    _client.AudioReceivers += UpdateAudioOutput;
    _startTime = DateTime.Now;
    _bytesSent = 0;
    _segmentFrames = 160;
    _encoder = OpusEncoder.Create(8000, 1, FragLabs.Audio.Codecs.Opus.Application.Voip);
    _encoder.MaxDataBytes = 350;
    _encoder.Bitrate = 4000;
    _decoder = OpusDecoder.Create(8000, 1);
    _decoder.MaxDataBytes = 175;
    _bytesPerSegment = _encoder.FrameByteCount(_segmentFrames);
    _waveIn = new WaveIn(WaveCallbackInfo.FunctionCallback());
    _waveIn.BufferMilliseconds = 50;
    _waveIn.DeviceNumber = 0;
    _waveIn.DataAvailable += _waveIn_DataAvailable;
    _waveIn.WaveFormat = new WaveFormat(8000, 16, 1);
    _playBuffer = new BufferedWaveProvider(new WaveFormat(8000, 16, 1));
    _playBuffer.DiscardOnBufferOverflow = true;
    _waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback());
    _waveOut.DeviceNumber = 0;
    _waveOut.Init(_playBuffer);
    _waveOut.Play();
    _waveIn.StartRecording();
    if (_timer == null)
    {
        _timer = new Timer();
        _timer.Interval = 1000;
        _timer.Elapsed += _timer_Tick;
    }
    _timer.Start();
}
private void _timer_Tick(object sender, EventArgs e)
{
    var timeDiff = DateTime.Now - _startTime;
    var bytesPerSecond = _bytesSent / timeDiff.TotalSeconds;
}

byte[] _notEncodedBuffer = new byte[0];

private void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] soundBuffer = new byte[e.BytesRecorded + _notEncodedBuffer.Length];
    for (int i = 0; i < _notEncodedBuffer.Length; i++)
        soundBuffer[i] = _notEncodedBuffer[i];
    for (int i = 0; i < e.BytesRecorded; i++)
        soundBuffer[i + _notEncodedBuffer.Length] = e.Buffer[i];
    int byteCap = _bytesPerSegment;
    int segmentCount = (int)Math.Floor((decimal)soundBuffer.Length / byteCap);
    int segmentsEnd = segmentCount * byteCap;
    int notEncodedCount = soundBuffer.Length - segmentsEnd;
    _notEncodedBuffer = new byte[notEncodedCount];
    for (int i = 0; i < notEncodedCount; i++)
    {
        _notEncodedBuffer[i] = soundBuffer[segmentsEnd + i];
    }
    for (int i = 0; i < segmentCount; i++)
    {
        byte[] segment = new byte[byteCap];
        for (int j = 0; j < segment.Length; j++)
            segment[j] = soundBuffer[(i * byteCap) + j];
        int len;
        byte[] buff = _encoder.Encode(segment, segment.Length, out len);
        SendToServer(buff, len);
    }
}
public void UpdateAudioOutput(byte[] ba, int len)
{
    int outlen = len;
    byte[] buff = new byte[len];
    buff = _decoder.Decode(ba, outlen, out outlen);
    _playBuffer.AddSamples(buff, 0, outlen);
}

private void SendToServer(byte[] EncodedAudio, int Length)
{
    print("SENDING AUDIO");
    //print("audio length : " + EncodedAudio.Length);
    _client.Send(EncodedAudio, Length);
    //UpdateAudioOutput(EncodedAudio, Length);
}

private void StopEncoding()
{
    _timer.Stop();
    _waveIn.StopRecording();
    _waveIn.Dispose();
    _waveIn = null;
    _waveOut.Stop();
    _waveOut.Dispose();
    _waveOut = null;
    _playBuffer = null;
    _encoder.Dispose();
    _encoder = null;
    _decoder.Dispose();
    _decoder = null;
}

private void OnApplicationQuit()
{
    StopEncoding();
}
Now here is the TCP send and receive; they are pretty much the same for the client and the server.
public void Send(byte[] data, int customParamLen = 0)
{
    if (!socketReady)
    {
        return;
    }
    byte messageType = (1 << 3); // assume that 0000 1000 would be the message type
    byte[] message = data;
    byte[] length = BitConverter.GetBytes(message.Length);
    byte[] customParam = BitConverter.GetBytes(customParamLen); // length is also 4 / sizeof(int)
    byte[] buffer = new byte[sizeof(int) + message.Length + 1 + customParam.Length];
    buffer[0] = messageType;
    // Enter the length in the buffer
    for (int i = 0; i < sizeof(int); i++)
    {
        buffer[i + 1] = length[i];
    }
    // Enter the data in the buffer
    for (int i = 0; i < message.Length; i++)
    {
        buffer[i + 1 + sizeof(int)] = message[i];
    }
    // Enter the custom param in the buffer
    for (int i = 0; i < sizeof(int); i++)
    {
        buffer[i + 1 + sizeof(int) + message.Length] = customParam[i];
    }
    heavyStream.Write(buffer, 0, buffer.Length);
    print("Written bytes");
}
if (heavyStream.DataAvailable)
{
    print("Data Receiving YAY!");
    // Get the message type
    byte messageType = (byte)heavyStream.ReadByte();
    // Get the length of the data
    byte[] lengthBuffer = new byte[sizeof(int)];
    int recv = heavyStream.Read(lengthBuffer, 0, lengthBuffer.Length);
    if (recv == sizeof(int))
    {
        int messageLen = BitConverter.ToInt32(lengthBuffer, 0);
        // Get the data
        byte[] messageBuffer = new byte[messageLen];
        recv = heavyStream.Read(messageBuffer, 0, messageBuffer.Length);
        if (recv == messageLen)
        {
            // messageBuffer contains the whole message ...
            // Get the length parameter needed for Opus to decode
            byte[] customParamAudioLen = new byte[sizeof(int)];
            recv = heavyStream.Read(customParamAudioLen, 0, customParamAudioLen.Length);
            if (recv == sizeof(int))
            {
                AudioReceivers(messageBuffer, BitConverter.ToInt32(customParamAudioLen, 0) - 5);
                print("Done! Everything went straight as planned");
            }
        }
    }
}
Now the problem is that the audio is choppy and has gaps in it, and the more time passes, the more out of sync it becomes.
UPDATE
Still not fixed.
It looks like you're just sending audio straight out with no jitter buffer on the receiving end. This means if you have any variability in latency you'll start to hear gaps.
What you need to do is buffer audio on the client side - until you have a good amount, say 400ms, then start playing. That gives you a buffer of extra time to account for jitter.
This is a very naive approach, but it gives you something to play with. You'll probably want to look at adaptive jitter buffers, and probably switch from TCP to UDP for better performance. With UDP you will need to deal with lost packets, out-of-order delivery, etc.
Have a look at Speex which has a Jitter Buffer https://github.com/xiph/speex or Mumble which uses Speex for VOIP https://github.com/mumble-voip/mumble
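The buffering idea described above can be sketched as a simple gate that holds chunks until a threshold is reached. Names and the threshold are illustrative; at 8 kHz, 16-bit mono, 400 ms works out to 8000 × 2 × 0.4 = 6400 bytes:

```csharp
using System;
using System.Collections.Generic;

// Naive prebuffer sketch: accumulate decoded audio chunks until
// 'thresholdBytes' have arrived, then start releasing them to playback.
class PrebufferGate
{
    private readonly Queue<byte[]> _pending = new Queue<byte[]>();
    private readonly int _thresholdBytes;
    private int _pendingBytes;
    private bool _playing;

    public PrebufferGate(int thresholdBytes) { _thresholdBytes = thresholdBytes; }

    // Returns the chunks ready to be played now (empty while still buffering).
    public List<byte[]> Push(byte[] chunk)
    {
        var ready = new List<byte[]>();
        _pending.Enqueue(chunk);
        _pendingBytes += chunk.Length;
        if (!_playing && _pendingBytes >= _thresholdBytes)
            _playing = true;
        if (_playing)
        {
            while (_pending.Count > 0)
            {
                var c = _pending.Dequeue();
                _pendingBytes -= c.Length;
                ready.Add(c);
            }
        }
        return ready;
    }
}
```

In `UpdateAudioOutput`, each chunk the gate returns would then go to `_playBuffer.AddSamples(...)` instead of adding the decoded bytes directly.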

How to get a string by FileStream.Position and a specified length in C#

I have a function that searches for a string in a large binary file and gives me its position.
How can I implement a function that reads from that position and gives me a string of a specific length, as String.Substring() does?
Here is the code I have so far.
public void example()
{
    string match = "400000002532"; // These are the 12 hex chars of the string to search for
    byte[] matchBytes = StringToByteArray(match);
    foreach (var jsFile in jsscan)
    {
        using (var fs = new FileStream(jsFile, FileMode.Open))
        {
            int i = 0;
            int readByte;
            while ((readByte = fs.ReadByte()) != -1)
            {
                if (matchBytes[i] == readByte)
                {
                    i++;
                }
                else
                {
                    i = 0;
                }
                if (i == matchBytes.Length)
                {
                    Console.WriteLine("It found between {0} and {1}.",
                        fs.Position - matchBytes.Length, fs.Position);
                    break;
                }
            }
        }
    }
}

public static byte[] StringToByteArray(String hex)
{
    int NumberChars = hex.Length;
    byte[] bytes = new byte[NumberChars / 2];
    for (int i = 0; i < NumberChars; i += 2)
        bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
    return bytes;
}
An example of the data I am searching is below:
400000002532010953667A51E44BE5B6A59417B71F4B91BBE590B6AF6E84EF570C32C56400E05123B0A44AF389331E4B91B02E8980B85157F910D7238918A73012B6243F772F7B60E5A7CF6E8CB25374B8FF96311130AABD9F71A860C904C9F6AE9706E570CC0E881E997762710EDE8818CCC551BA05579D30C0D53CEBD9BAF0C2E557D7B9D37A9C94A8A9B5FA7FCF7973B0BDA88A06DE1AE357130E4A06018ABB0A1ABD818DABEB518649CF885953EE05564FD69F0E2F860175667C5FC84F1C97727CEA1C841BFA86A26BABA942E0275FAB2A8F78132E3A05404F0DCD02FD4E7CAD08B1FFD4C2184400F22F6EBC14857BCC2E2AF858BE20CBB807C3467A91E38F31901FD452B5F87F296174631980E039CAB58D97E8F91E3255DD7DEF3177D68A4943F629A70B421B1D6E53DC0D26A1B5EF7C6912F48E0842037FA72B17C18E11B93AEE4DDA0FFE6F217BD5DEB957B1C26169029DE4396103D1F89FA0856489B1958DE5C896DB8F27A24C21AC66BF2095E383DA5EC6DA7138FE82C62FDE9BEFF0308F507736F1B35B1CA083F6C96A6860889BDCCBC989E86F4FB1C483E71557369E7308450330AEF8C9A13A115E8A97642E4A0A4098F5BC04A096A22E5F97116B59AE17BCAEFD2A8B0BCB5341EC64CA3E474900D5A8A620448A6C97827C42332C4DD326572A3C5DB4DA1362F3C0012E1AA1B70C812DCCAEF74F67E94E907518CA31945DD56A61A7
If performance is not a huge concern, you could do the following, which is easier and more readable:
using (var fs = new StreamReader(fileName))
{
    var content = await fs.ReadToEndAsync();
    // Search for the hex string itself; string.IndexOf takes a string, not a byte[]
    var pos = content.IndexOf(match);
    if (pos != -1)
    {
        Console.WriteLine($"Found # {pos}, {pos + match.Length}");
    }
}
Assuming you know which Encoding is used to store characters in the Stream, try this function:
static string GetString(Stream stream, long position, int stringLength, Encoding encoding)
{
    int offset = 0;
    int readByte;
    byte[] buffer = new byte[stream.Length - position];
    stream.Seek(position, SeekOrigin.Begin);
    while ((readByte = stream.ReadByte()) != -1)
    {
        buffer[offset++] = (byte)readByte;
        if (encoding.GetCharCount(buffer, 0, offset) == stringLength + 1)
        {
            return encoding.GetString(buffer, 0, offset - 1);
        }
    }
    if (encoding.GetCharCount(buffer, 0, offset) == stringLength)
    {
        return encoding.GetString(buffer, 0, offset);
    }
    throw new Exception(string.Format("Stream doesn't contain {0} characters", stringLength));
}
For example, with your code and utf-16:
using (var fs = new FileStream(jsFile, FileMode.Open))
{
    int i = 0;
    int readByte;
    while ((readByte = fs.ReadByte()) != -1)
    {
        if (matchBytes[i] == readByte)
        {
            i++;
        }
        else
        {
            i = 0;
        }
        if (i == matchBytes.Length)
        {
            Console.WriteLine("It found between {0} and {1}.",
                fs.Position - matchBytes.Length, fs.Position);
            // Desired string length in characters
            const int DESIRED_STRING_LENGTH = 5;
            Console.WriteLine(GetString(fs, fs.Position, DESIRED_STRING_LENGTH, Encoding.Unicode));
            break;
        }
    }
}

Convert 2 A-law streams to 1 stereo WAV file

I have 2 A-law streams that I'm trying to convert to a stereo WAV file.
The result sounds bad: I can hear the content, but there is an annoying squeaking sound that doesn't exist in the original files.
Here is the code I'm using.
For the WAV header:
public static byte[] GetHeader(int fileLength, int bitsPerSample, int numberOfChannels, int sampleRate) // bitsPerSample = 8, numberOfChannels = 1 or 2 (mono or stereo), sampleRate = 8000
{
    MemoryStream memStream = new MemoryStream(WAVE_HEADER_SIZE);
    BinaryWriter writer = new BinaryWriter(memStream);
    writer.Write(Encoding.ASCII.GetBytes("RIFF"), 0, 4); // 4 bytes of RIFF description
    writer.Write((fileLength - 8) / 2);
    writer.Write(Encoding.ASCII.GetBytes("WAVE"), 0, 4);
    writer.Write(Encoding.ASCII.GetBytes("fmt "), 0, 4);
    writer.Write(16); // Chunk size
    writer.Write((short)6); // wFormatTag (1 == uncompressed PCM, 6 == A-law compression)
    writer.Write((short)numberOfChannels); // nChannels
    writer.Write(sampleRate); // nSamplesPerSec
    int blockAlign = bitsPerSample / 8 * numberOfChannels;
    writer.Write(sampleRate * blockAlign); // nAvgBytesPerSec
    writer.Write((short)blockAlign); // nBlockAlign
    writer.Write((short)bitsPerSample); // wBitsPerSample
    writer.Write(Encoding.ASCII.GetBytes("data"), 0, 4);
    writer.Write(fileLength / 2 - WAVE_HEADER_SIZE); // data length
    writer.Flush();
    byte[] buffer = new byte[memStream.Length];
    memStream.Position = 0;
    memStream.Read(buffer, 0, buffer.Length);
    writer.Close();
    return buffer;
}
The data (the original streams are read from A-law files):
do
{
    if (channels == 2)
    {
        if (dlRead == 0)
        {
            dlRead = dlStream.Read(dlBuffer, 0, dlBuffer.Length);
        }
        if (ulRead == 0)
        {
            ulRead = ulStream.Read(ulBuffer, 0, ulBuffer.Length);
        }
        if ((dlRead != 0) && (ulRead != 0))
        {
            // Create the stereo wave buffer
            Array.Clear(waveBuffer, 0, waveBuffer.Length);
            for (int i = 0; i < dlRead / 2; ++i)
            {
                waveBuffer[i * 4 + 0] = dlBuffer[i * 2];
                waveBuffer[i * 4 + 1] = dlBuffer[i * 2 + 1];
            }
            for (int i = 0; i < ulRead / 2; ++i)
            {
                waveBuffer[i * 4 + 2] = ulBuffer[i * 2];
                waveBuffer[i * 4 + 3] = ulBuffer[i * 2 + 1];
            }
            bytesToWrite = Math.Max(ulRead * 2, dlRead * 2);
            dlRead = 0;
            ulRead = 0;
        }
        else
            bytesToWrite = 0;
    }
    else
    {
        // Create the mono wave buffer
        if (metadata.ULExists)
        {
            bytesToWrite = ulStream.Read(ulBuffer, 0, ulBuffer.Length);
            Buffer.BlockCopy(ulBuffer, 0, waveBuffer, 0, ulBuffer.Length);
        }
        else if (metadata.DLExists)
        {
            bytesToWrite = dlStream.Read(dlBuffer, 0, dlBuffer.Length);
            Buffer.BlockCopy(dlBuffer, 0, waveBuffer, 0, dlBuffer.Length);
        }
        else
        {
            bytesToWrite = 0;
        }
    }
    if (bytesToWrite > 0)
        fileStream.Write(waveBuffer, 0, bytesToWrite);
} while (bytesToWrite > 0);
What am I doing wrong? Any ideas?
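One detail worth checking: A-law stores one byte per sample, so interleaving two mono A-law streams into stereo alternates single bytes, not byte pairs as the loops above do for 16-bit samples. A minimal sketch of byte-wise interleaving (names are illustrative; truncates to the shorter buffer):

```csharp
using System;

static class AlawInterleave
{
    // Interleave two 8-bit A-law mono buffers into an LRLR stereo buffer.
    // A-law is one byte per sample, so each frame is one byte per channel.
    public static byte[] Interleave(byte[] left, byte[] right)
    {
        int frames = Math.Min(left.Length, right.Length);
        byte[] stereo = new byte[frames * 2];
        for (int i = 0; i < frames; i++)
        {
            stereo[i * 2] = left[i];
            stereo[i * 2 + 1] = right[i];
        }
        return stereo;
    }
}
```

Swapping adjacent bytes of an 8-bit stream (as two-byte copies do) scrambles every other sample between channels, which is one plausible source of a periodic squeak.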

Reading from two mono files and converting to one stereo wav file - without the ability to use NAudio

I'm trying to read from two A-law/PCM files in the following way:
byte[] dlBuffer = null;
if (dlStream != null)
{
    dlStream.Position = 0;
    dlBuffer = new byte[dlStream.Length + 1];
    int readDL = dlStream.Read(dlBuffer, 0, dlBuffer.Length);
}
byte[] ulBuffer = null;
if (ulStream != null)
{
    ulStream.Position = 0;
    ulBuffer = new byte[ulStream.Length + 1];
    int readUL = ulStream.Read(ulBuffer, 0, ulBuffer.Length);
}
And then to save the buffers to one wav file:
private const int WAVE_HEADER_SIZE = 44;
private const int WAVE_BUFFER_SIZE = 2 * 1024 * 1024; // = 2,097,152
private bool SaveBufferToWave(string wavFileName, VoiceMetadata metadata, byte[] dlBuffer, byte[] ulBuffer)
{
    if ((wavFileName == null) || (wavFileName.Length == 0) || (metadata == null))
        return false;
    FileStream fileStream = null;
    bool success = true;
    try
    {
        byte[] waveBuffer = new byte[WAVE_BUFFER_SIZE];
        int bytesToWrite = 0;
        int dlRead = 0, ulRead = 0;
        // If we have DL & UL, write a stereo wav; else write a mono wav
        int channels = (metadata.DLExists && metadata.ULExists) ? 2 : 1;
        int samples = (int)metadata.TotalTimeMS * (VoiceMetadata.SAMPLES_PER_SECOND / 1000);
        fileStream = new FileStream(wavFileName, FileMode.Create, FileAccess.ReadWrite, FileShare.None);
        BinaryWriter wrt = new BinaryWriter(fileStream);
        wrt.Write(GetHeader(
            WAVE_HEADER_SIZE + (samples) * (channels) * sizeof(short), sizeof(short) * 8,
            channels,
            8000));
        if (channels == 2)
        {
            if (dlRead == 0)
            {
                dlRead = dlBuffer.Length;
            }
            if (ulRead == 0)
            {
                ulRead = ulBuffer.Length;
            }
            if ((dlRead != 0) && (ulRead != 0))
            {
                // Create the stereo wave buffer
                Array.Clear(waveBuffer, 0, waveBuffer.Length);
                for (int i = 0; i < dlRead / 2; ++i)
                {
                    waveBuffer[i * 4 + 0] = dlBuffer[i * 2];
                    waveBuffer[i * 4 + 1] = dlBuffer[i * 2 + 1];
                }
                for (int i = 0; i < ulRead / 2; ++i)
                {
                    waveBuffer[i * 4 + 2] = ulBuffer[i * 2];
                    waveBuffer[i * 4 + 3] = ulBuffer[i * 2 + 1];
                }
                bytesToWrite = Math.Max(ulRead * 2, dlRead * 2);
                dlRead = 0;
                ulRead = 0;
            }
            else
                bytesToWrite = 0;
        }
        else
        {
            // Create the mono wave buffer
            if (metadata.ULExists)
            {
                Buffer.BlockCopy(ulBuffer, 0, waveBuffer, 0, ulBuffer.Length);
                bytesToWrite = ulBuffer.Length;
            }
            else if (metadata.DLExists)
            {
                Buffer.BlockCopy(dlBuffer, 0, waveBuffer, 0, dlBuffer.Length);
                bytesToWrite = dlBuffer.Length;
            }
            else
            {
                bytesToWrite = 0;
            }
        }
        if (bytesToWrite > 0)
            fileStream.Write(waveBuffer, 0, bytesToWrite);
        Logger.Debug("File {0} was saved successfully in wav format.", wavFileName);
    }
    catch (Exception e)
    {
        Logger.Error("Failed saving file {0} in wav format, Exception: {1}.", wavFileName, e);
        success = false;
    }
    finally
    {
        if (fileStream != null)
            fileStream.Close();
    }
    return success;
}
Most of the time it works fine, but sometimes I get one of these two exceptions:
Failed saving file c:\Samples\WAV\24112014-095948.283.wav in wav format, Exception: System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
Failed saving file c:\Samples\WAV\24112014-100742.409.wav in wav format, Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
What can be the problem?
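Both exceptions could come from a source buffer larger than the fixed 2 MB waveBuffer: the stereo path writes roughly twice the larger input, and the mono path copies the full input, so either can run past the destination. A defensive sketch that computes the required destination size from the inputs before allocating (illustrative only, not part of the original code):

```csharp
using System;

static class WaveBufferSizing
{
    // Required destination size in bytes: stereo interleaving needs
    // 2x the larger input; mono just needs the input length.
    public static int RequiredBytes(int dlLength, int ulLength, int channels)
    {
        return channels == 2
            ? 2 * Math.Max(dlLength, ulLength)
            : Math.Max(dlLength, ulLength);
    }
}
```

Allocating `new byte[WaveBufferSizing.RequiredBytes(...)]` instead of a fixed `WAVE_BUFFER_SIZE` would rule out the out-of-bounds copies for long recordings.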

Remove range from byte array

I have a file in a byte[], and I need to remove 0x1000 bytes at every x offset (I have a list of offsets). I have some code to do it, but I'm wondering if there is a better way:
private byte[] RemoveHashBlocks(byte[] fileArray, STFSExplorer stfs)
{
    long startOffset = StartingOffset;
    long endOffset = StartingOffset + Size;
    List<long> hashBlockOffsets = new List<long>();
    foreach (xBlockEntry block in stfs._stfsBlockEntry)
        if (block.IsHashBlock && (block.BlockOffset >= startOffset && block.BlockOffset <= endOffset))
            hashBlockOffsets.Add(block.BlockOffset - (hashBlockOffsets.Count * 0x1000));
    byte[] newFileAray = new byte[fileArray.Length - (hashBlockOffsets.Count * 0x1000)];
    for (int offset = 0; offset < fileArray.Length; offset++)
        if (hashBlockOffsets[0] == offset)
        {
            offset += 0x1000;
            hashBlockOffsets.RemoveAt(0);
        }
        else
            newFileAray[offset] = fileArray[offset];
    return newFileAray;
}
private byte[] RemoveHashBlocks(byte[] fileArray, STFSExplorer stfs)
{
    long startOffset = StartingOffset;
    long size = Size;
    MemoryStream ms = new MemoryStream();
    long lastBlockEnd = 0;
    foreach (xBlockEntry block in stfs._stfsBlockEntry)
    {
        if (block.IsHashBlock)
        {
            long offset = block.BlockOffset - startOffset;
            if (offset + 0x1000 > 0 && offset < size)
            {
                if (offset > lastBlockEnd)
                {
                    ms.Write(fileArray, (int)lastBlockEnd,
                        (int)(offset - lastBlockEnd));
                }
                lastBlockEnd = offset + 0x1000;
            }
        }
    }
    if (lastBlockEnd < size)
    {
        ms.Write(fileArray, (int)lastBlockEnd, (int)(size - lastBlockEnd));
    }
    return ms.ToArray();
}
This assumes stfs._stfsBlockEntry is sorted by BlockOffset.
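The same idea can be written as a self-contained helper: copy the spans between the sorted block offsets and skip a fixed-size block at each one (names are illustrative; offsets are relative to the start of the array and assumed sorted):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class BlockRemover
{
    // Copy 'data' while skipping 'blockSize' bytes at each offset in
    // 'sortedOffsets'. Offsets past the end are clamped to the array length.
    public static byte[] RemoveBlocks(byte[] data, IList<long> sortedOffsets, int blockSize)
    {
        using (var ms = new MemoryStream())
        {
            long lastEnd = 0;
            foreach (long offset in sortedOffsets)
            {
                if (offset > lastEnd)
                    ms.Write(data, (int)lastEnd, (int)(offset - lastEnd));
                lastEnd = Math.Min(offset + blockSize, data.Length);
            }
            if (lastEnd < data.Length)
                ms.Write(data, (int)lastEnd, (int)(data.Length - lastEnd));
            return ms.ToArray();
        }
    }
}
```

Copying whole ranges with `MemoryStream.Write` avoids both the per-byte loop and the index-tracking bug in the first version, where the destination index drifts after each skipped block.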
