Error splitting an array into two - C#

So I need to cut off the first 16 bytes from my byte array. I followed another post I saw on Stack Overflow and used the following code:
//split message into iv and encrypted bytes
byte[] iv = new byte[16];
byte[] workingHash = new byte[rage.Length - 16];
//put first 16 bytes into iv
for (int i = 0; i < 16; i++)
{
iv[i] = rage[i];
}
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length);
What we are trying to do here is cut off the first 16 bytes from the byte[] rage and put the rest into byte[] workingHash.
The error occurs at Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length); with the message:
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
Any help will be much appreciated.

The problem is simple: Buffer.BlockCopy's last argument is the number of bytes to copy, which (taking the starting offset into account) must not run past the end of either array (see the docs).
Hence the code should look like this, avoiding any for loops:
Buffer.BlockCopy(rage, 0, iv, 0, 16);
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length - 16);
Notice the “- 16” on the second line, which fixes the original code. The first line replaces the for loop for the sake of consistency.
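In general, a call Buffer.BlockCopy(src, srcOffset, dst, dstOffset, count) must satisfy both srcOffset + count <= src.Length and dstOffset + count <= dst.Length, with all quantities measured in bytes. The original call violated both conditions: it asked for rage.Length bytes starting at offset 16 of rage, and tried to fit them into the shorter workingHash.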

Let's assume rage is a byte array of length 20:
var rage = new byte[20]
{
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
};
After byte[] iv = new byte[16];, iv will contain:
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
After byte[] workingHash = new byte[rage.Length - 16];, workingHash will contain:
{ 0, 0, 0, 0 }
After the for loop, iv is:
{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 }
You need:
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length - 16);
This copies rage.Length - 16 (here 4) bytes from rage, starting at index 16 (the element with value 17), into workingHash starting at index 0.
The result:
{ 17, 18, 19, 20 }
By the way, there is a very readable alternative, probably not as fast as copying arrays, but worth mentioning:
var firstSixteenElements = rage.Take(16).ToArray();
var remainingElements = rage.Skip(16).ToArray();
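On C# 8.0 with .NET Core 3.0 or later, range indexers on arrays are a similarly readable option without LINQ; a minimal sketch, assuming that language and runtime version:
var firstSixteenElements = rage[..16]; //copies bytes 0..15 into a new array
var remainingElements = rage[16..]; //copies everything from index 16 on
Like ToArray() above, each range expression allocates a new array.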

Fixed:
//split message into iv and encrypted bytes
byte[] iv = new byte[16];
byte[] workingHash = new byte[rage.Length - 16];
//put first 16 bytes into iv
for (int i = 0; i < 16; i++)
{
iv[i] = rage[i];
}
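//put the remaining bytes into workingHash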
for (int i = 0; i < rage.Length - 16; i++)
{
workingHash[i] = rage[i + 16];
}

Related

Why does filling the indices array from a file corrupt the buffer values in Silk.Net?

I'm reading the book Game Programming in C++ by Sanjay Madhav. I did the exercises in C++ and am trying to migrate them to C# with the Silk.NET library, an OpenGL wrapper for .NET: https://github.com/lehmamic/GameProgrammingExercises.
I'm seeing weird behavior while loading a mesh. When I fill the indices array manually, the indices are transferred to OpenGL correctly (I can see that in RenderDoc).
var indices = new uint[]
{
2, 1, 0,
3, 9, 8,
4, 11, 10,
5, 11, 12,
6, 14, 13,
7, 14, 15,
18, 17, 16,
19, 17, 18,
22, 21, 20,
23, 21, 22,
26, 25, 24,
27, 25, 26,
};
var vbo = new BufferObject<float>(game.Renderer.GL, vertices, BufferTargetARB.ArrayBuffer);
var ebo = new BufferObject<uint>(game.Renderer.GL, indices, BufferTargetARB.ElementArrayBuffer);
var vao = new VertexArrayObject(game.Renderer.GL, vbo, ebo);
But when I fill the indices array from the contents of a JSON file (the indices are stored as an array of arrays), the buffer gets corrupted somehow. Here is the file:
{
"version":1,
"vertexformat":"PosNormTex",
"shader":"BasicMesh",
"textures":[
"Assets/Cube.png"
],
"specularPower":100.0,
"vertices":[
[-0.5,-0.5,-0.5,0,0,-1,0,0],
[0.5,-0.5,-0.5,0,0,-1,1,0],
[-0.5,0.5,-0.5,0,0,-1,0,-1],
[0.5,0.5,-0.5,0,0,-1,1,-1],
[-0.5,0.5,0.5,0,1,0,0,-1],
[0.5,0.5,0.5,0,1,0,1,-1],
[-0.5,-0.5,0.5,0,0,1,0,0],
[0.5,-0.5,0.5,0,0,1,1,0],
[-0.5,0.5,-0.5,0,0,-1,0,-1],
[0.5,-0.5,-0.5,0,0,-1,1,0],
[-0.5,0.5,-0.5,0,1,0,0,-1],
[0.5,0.5,-0.5,0,1,0,1,-1],
[-0.5,0.5,0.5,0,1,0,0,-1],
[-0.5,0.5,0.5,0,0,1,0,-1],
[0.5,0.5,0.5,0,0,1,1,-1],
[-0.5,-0.5,0.5,0,0,1,0,0],
[-0.5,-0.5,0.5,0,-1,0,0,0],
[0.5,-0.5,0.5,0,-1,0,1,0],
[-0.5,-0.5,-0.5,0,-1,0,0,0],
[0.5,-0.5,-0.5,0,-1,0,1,0],
[0.5,-0.5,-0.5,1,0,0,1,0],
[0.5,-0.5,0.5,1,0,0,1,0],
[0.5,0.5,-0.5,1,0,0,1,-1],
[0.5,0.5,0.5,1,0,0,1,-1],
[-0.5,-0.5,0.5,-1,0,0,0,0],
[-0.5,-0.5,-0.5,-1,0,0,0,0],
[-0.5,0.5,0.5,-1,0,0,0,-1],
[-0.5,0.5,-0.5,-1,0,0,0,-1]
],
"indices":[
[2,1,0],
[3,9,8],
[4,11,10],
[5,11,12],
[6,14,13],
[7,14,15],
[18,17,16],
[19,17,18],
[22,21,20],
[23,21,22],
[26,25,24],
[27,25,26]
]
}
I don't understand why, because the content of the array ends up exactly the same (see the implementation in the Mesh.cs file):
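//flatten the jagged Indices array from the JSON into a flat uint[]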
var indices = new uint[raw.Indices.Length * 3];
for (int i = 0; i < raw.Indices.Length; i++)
{
var index = raw.Indices[i];
if (index is null || index.Length != 3)
{
throw new MeshException($"Invalid indices for {fileName}.");
}
var offset = i * 3;
for (int j = 0; j < index.Length; j++)
{
indices[offset + j] = index[j];
}
}
I have no clue what that could be.
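For what it's worth, the claim that the arrays end up identical can be checked directly with LINQ; a minimal sanity check, assuming the hand-written array from the first snippet is still available under a hypothetical name expected:
//hypothetical check, needs using System.Diagnostics; and using System.Linq;
Debug.Assert(indices.SequenceEqual(expected), "flattened indices differ from the hand-written ones");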

Reading byte array and converting into float in C#

I know there are many questions related to this, but they still don't solve my problem. My byte array is 28 bytes long, and each 4-byte group represents a single value; for example, my client machine sent me 2.4, and reading it gives me those bytes.
//serial port settings and opening it
var serialPort = new SerialPort("COM2", 9600, Parity.Even, 8, StopBits.One);
serialPort.Open();
var stream = new SerialStream(serialPort);
stream.ReadTimeout = 2000;
// send request and waiting for response
// the request needs: slaveId, dataAddress, registerCount
var responseBytes = stream.RequestFunc3(slaveId, dataAddress, registerCount);
// extract the content part (the most important in the response)
var data = responseBytes.ToResponseFunc3().Data;
What I want to do:
Convert each of the 28 bytes to hex one by one and save each in a separate variable, like:
hex1 = byte[0], hex2 = byte[1], hex3 = byte[2], hex4 = byte[3]
..... hex28 = byte[27]
Then combine each group of 4 hex values, convert it into a float, and assign it to a variable that holds the floating-point value, like:
v1 = Tofloat(hex1,hex2,hex3,hex4); // assuming ToFloat() is a function.
How can I achieve it?
Since you mentioned that the first value is 2.4 and each float is represented by 4 bytes:
byte[] data = { 64, 25, 153, 154, 66, 157, 20, 123, 66, 221, 174, 20, 65, 204, 0, 0, 65, 163, 51, 51, 66, 95, 51, 51, 69, 10, 232, 0 };
We can group the bytes into 4-byte blocks, reverse each block, and convert each one to a float:
int offset = 0;
float[] dataFloats =
data.GroupBy(x => offset++ / 4) // group by 4. 0/4 = 0, 1/4 = 0, 2/4 = 0, 3/4 = 0 and 4/4 = 1 etc.
// Need to reverse the bytes to make them evaluate to 2.4
.Select(x => BitConverter.ToSingle(x.ToArray().Reverse().ToArray(), 0))
.ToArray();
Now you have an array of 7 floats, the first of which comes out to 2.4, as expected.
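As an aside, on .NET 5 or later the same conversion can be done without the LINQ grouping by reading each block with BinaryPrimitives; a minimal sketch, assuming the wire format is big-endian as above:
using System;
using System.Buffers.Binary;

var dataFloats = new float[data.Length / 4];
for (int i = 0; i < dataFloats.Length; i++)
{
    //read each 4-byte block as a big-endian single
    dataFloats[i] = BinaryPrimitives.ReadSingleBigEndian(data.AsSpan(i * 4, 4));
}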

Convert int to different size of byte array

I have a byte array, result. I would like to write the fields of my type Info, which are all int, into the byte array, but each field occupies a different number of bytes:
a = 4 bytes
b = 3 bytes
c = 2 bytes
d = 1 byte
This is the code I've tried.
private byte[] getInfoByteArray(Info data)
{
byte[] result = new byte[10];
BitConverter.GetBytes((data.a)).CopyTo(result, 0);
BitConverter.GetBytes((data.b)).CopyTo(result, 4);
BitConverter.GetBytes((data.c)).CopyTo(result, 7);
result[9] = Convert.ToByte(data.d);
return result;
}
However, I found out that BitConverter.GetBytes always returns 4 bytes for an int.
Is there a general solution for getting different sizes of bytes into a byte array?
Use the Array.Copy(Array, Int32, Array, Int32, Int32) method:
byte[] result = new byte[10];
Array.Copy(BitConverter.GetBytes(data.a), 0, result, 0, 4);
Array.Copy(BitConverter.GetBytes(data.b), 0, result, 4, 3);
Array.Copy(BitConverter.GetBytes(data.c), 0, result, 7, 2);
Array.Copy(BitConverter.GetBytes(data.d), 0, result, 9, 1);
This assumes little-endian hardware. If your hardware is big-endian, use:
byte[] result = new byte[10];
Array.Copy(BitConverter.GetBytes(data.a), 0, result, 0, 4);
Array.Copy(BitConverter.GetBytes(data.b), 1, result, 4, 3);
Array.Copy(BitConverter.GetBytes(data.c), 2, result, 7, 2);
Array.Copy(BitConverter.GetBytes(data.d), 3, result, 9, 1);
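To read the values back, here is a hypothetical round trip, assuming the little-endian layout above and non-negative values (the 3- and 2-byte fields are not sign-extended here):
int a = BitConverter.ToInt32(result, 0);
int b = result[4] | result[5] << 8 | result[6] << 16; //3 bytes, little endian
int c = result[7] | result[8] << 8; //2 bytes
int d = result[9]; //1 byte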

6-byte timestamp to DateTime

I use a 3rd-party API. According to its specification, the following
byte[] timestamp = new byte[] { 185, 253, 177, 161, 51, 1 };
represents the number of milliseconds since Jan 1, 1970 at which the message was generated for transmission.
The issue is that I don't know how it could be translated into DateTime.
I've tried
DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
long milliseconds = BitConverter.ToUInt32(timestamp, 0);
var result = Epoch + TimeSpan.FromMilliseconds(milliseconds);
The result is {2/1/1970 12:00:00 AM}, but year 2012 is expected.
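Pad the array out to 8 bytes (the two extra high-order bytes are zero) and read it as a UInt64: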
byte[] timestamp = new byte[] { 185, 253, 177, 161, 51, 1, 0, 0, };
DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
ulong milliseconds = BitConverter.ToUInt64(timestamp, 0);
var result = Epoch + TimeSpan.FromMilliseconds(milliseconds);
The result is 11/14/2011.
Adding padding code, specially for CodeInChaos:
byte[] oldStamp = new byte[] { 185, 253, 177, 161, 51, 1 };
byte[] newStamp = new byte[sizeof(UInt64)];
Array.Copy(oldStamp, newStamp, oldStamp.Length);
For running on big-endian machines:
if (!BitConverter.IsLittleEndian)
{
newStamp = newStamp.Reverse().ToArray();
}
I assume the timestamp uses little-endian format. I also left out parameter validation.
long GetLongLE(byte[] buffer, int startIndex, int count)
{
    long result = 0;
    long multiplier = 1;
    for (int i = 0; i < count; i++)
    {
        //accumulate bytes least-significant first (little endian)
        result += buffer[startIndex + i] * multiplier;
        multiplier *= 256;
    }
    return result;
}
long milliseconds = GetLongLE(timestamp, 0, 6);
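Plugging that into the epoch arithmetic from the question, Epoch + TimeSpan.FromMilliseconds(milliseconds) yields the same 11/14/2011 as the other answer.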

Problems converting ADPCM to PCM in XNA

I'm looking to convert ADPCM data into PCM data from an XNA .xnb file. (So many abbreviations!)
I've used a couple of places for references, including:
http://www.wooji-juice.com/blog/iphone-openal-ima4-adpcm.html
http://www.cs.columbia.edu/~hgs/audio/dvi/p34.jpg and a couple of others.
I believe that I'm close, as I'm getting a sound that's somewhat similar, but there's a lot of static/corruption in the output sound and I can't seem to figure out why.
The conversion comes down to two functions.
private static byte[] convert(byte[] data)
{
byte[] convertedData = new byte[(data.Length) * 4];
stepSize = 7;
newSample = 0;
index = 0;
var writeCounter = 0;
for (var x = 4; x < data.Length; x++)
{
// First 4 bytes of a block contain initialization information
if ((x % blockSize) < 4)
{
if (x % blockSize == 0) // New block
{
// set predictor/NewSample and index from
// the preamble of the block.
newSample = (short)(data[x + 1] | data[x]);
index = data[x + 2];
}
continue;
}
// Get the first 4 bits from the byte array,
var convertedSample = calculateNewSample((byte)(data[x] >> 4)); // convert 4 bit ADPCM sample to 16 bit PCM sample
// Store 16 bit PCM sample into output byte array
convertedData[writeCounter++] = (byte)(convertedSample >> 8);
convertedData[writeCounter++] = (byte)(convertedSample & 0x0ff);
// Convert the next 4 bits of the 8 bit array.
convertedSample = calculateNewSample((byte)(data[x] & 0x0f)); // convert 4 bit ADPCM sample to 16 bit PCM sample.
// Store 16 bit PCM sample into output byte array
convertedData[writeCounter++] = (byte)(convertedSample >> 8);
convertedData[writeCounter++] = (byte)(convertedSample & 0x0ff);
}
// Conversion complete, return data
return convertedData;
}
private static short calculateNewSample(byte sample)
{
Debug.Assert(sample < 16, "Bad sample!");
var indexTable = new int[16] { -1, -1, -1, -1, 2, 4, 6, 8, -1, -1, -1, -1, 2, 4, 6, 8 };
var stepSizeTable = new int[89] { 7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767};
var sign = sample & 8;
var delta = sample & 7;
var difference = stepSize >> 3;
// originalsample + 0.5 * stepSize / 4 + stepSize / 8 optimization.
//http://www.cs.columbia.edu/~hgs/audio/dvi/p34.jpg
if ((delta & 4) != 0)
difference += stepSize;
if ((delta & 2) != 0)
difference += stepSize >> 1;
if ((delta & 1) != 0)
difference += stepSize >> 2;
if (sign != 0)
newSample -= (short)difference;
else
newSample += (short)difference;
// Increment index
index += indexTable[sample];
index = (int)MathHelper.Clamp(index, 0, 88);
newSample = (short)MathHelper.Clamp(newSample, -32768, 32767); // clamp between appropriate ranges
// compute new stepSize.
stepSize = stepSizeTable[index];
return newSample;
}
I don't believe the calculateNewSample() function itself is incorrect, as I've passed it the input values from http://www.cs.columbia.edu/~hgs/audio/dvi/p35.jpg and received the same output they show. I've also tried flipping the high/low order bytes in case I had them backwards, to no avail. I feel like there's something fundamental I'm missing, but I'm having trouble finding it.
Any help would be seriously appreciated.
