Problems converting ADPCM to PCM in XNA - c#

I'm looking to convert ADPCM data into PCM data from an XNA .xnb file. (So many abbreviations!)
I've used a couple of places for references, including:
http://www.wooji-juice.com/blog/iphone-openal-ima4-adpcm.html
http://www.cs.columbia.edu/~hgs/audio/dvi/p34.jpg and a couple of others.
I believe that I'm close, as I'm getting a sound that's somewhat similar, but there's a lot of static/corruption in the output sound and I can't seem to figure out why.
The conversion comes down to two functions.
private static byte[] convert(byte[] data)
{
byte[] convertedData = new byte[(data.Length) * 4];
stepSize = 7;
newSample = 0;
index = 0;
var writeCounter = 0;
for (var x = 4; x < data.Length; x++)
{
// First 4 bytes of a block contain initialization information
if ((x % blockSize) < 4)
{
if (x % blockSize == 0) // New block
{
// set predictor/NewSample and index from
// the preamble of the block.
newSample = (short)(data[x + 1] | data[x]);
index = data[x + 2];
}
continue;
}
// Get the first 4 bits from the byte array,
var convertedSample = calculateNewSample((byte)(data[x] >> 4)); // convert 4 bit ADPCM sample to 16 bit PCM sample
// Store 16 bit PCM sample into output byte array
convertedData[writeCounter++] = (byte)(convertedSample >> 8);
convertedData[writeCounter++] = (byte)(convertedSample & 0x0ff);
// Convert the next 4 bits of the 8 bit array.
convertedSample = calculateNewSample((byte)(data[x] & 0x0f)); // convert 4 bit ADPCM sample to 16 bit PCM sample.
// Store 16 bit PCM sample into output byte array
convertedData[writeCounter++] = (byte)(convertedSample >> 8);
convertedData[writeCounter++] = (byte)(convertedSample & 0x0ff);
}
// Conversion complete, return data
return convertedData;
}
private static short calculateNewSample(byte sample)
{
Debug.Assert(sample < 16, "Bad sample!");
var indexTable = new int[16] { -1, -1, -1, -1, 2, 4, 6, 8, -1, -1, -1, -1, 2, 4, 6, 8 };
var stepSizeTable = new int[89] { 7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767};
var sign = sample & 8;
var delta = sample & 7;
var difference = stepSize >> 3;
// originalsample + 0.5 * stepSize / 4 + stepSize / 8 optimization.
//http://www.cs.columbia.edu/~hgs/audio/dvi/p34.jpg
if ((delta & 4) != 0)
difference += stepSize;
if ((delta & 2) != 0)
difference += stepSize >> 1;
if ((delta & 1) != 0)
difference += stepSize >> 2;
if (sign != 0)
newSample -= (short)difference;
else
newSample += (short)difference;
// Increment index
index += indexTable[sample];
index = (int)MathHelper.Clamp(index, 0, 88);
newSample = (short)MathHelper.Clamp(newSample, -32768, 32767); // clamp between appropriate ranges
// compute new stepSize.
stepSize = stepSizeTable[index];
return newSample;
}
I don't believe the actual calculateNewSample() function is incorrect, as I've passed it the input values from http://www.cs.columbia.edu/~hgs/audio/dvi/p35.jpg and received the same output they have. I've also tried flipping the high/low order bytes to see if I've got that backwards, to no avail. I feel like there's possibly something fundamental that I'm missing, but am having trouble finding it.
Any help would be seriously appreciated.
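One thing worth double-checking, as a hedged note: in standard IMA/DVI ADPCM (the WAV layout; I have not verified that XNB uses the same one), the 4-byte block preamble is a little-endian 16-bit predictor, then the step-table index, then a reserved byte, and the step size is seeded from that index. A sketch of how that preamble is typically consumed:
// Sketch only: assumes the standard IMA ADPCM block preamble
// (little-endian predictor, step index, reserved byte); not verified for XNB.
if (x % blockSize == 0) // new block
{
    newSample = (short)(data[x] | (data[x + 1] << 8)); // predictor, low byte first
    index = data[x + 2];                               // initial step-table index (0..88)
    stepSize = stepSizeTable[index];                   // stepSizeTable as in calculateNewSample
    // data[x + 3] is reserved/padding
}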

Related

Reading byte array and converting into float in C#

I know there are many questions related to this, but they still don't solve my problem. Below is my byte array:
As you can see, the array is 28 bytes and each 4-byte group represents a single value; e.g. my client machine sent 2.4, and when I read it, it comes back converted into bytes.
//serial port settings and opening it
var serialPort = new SerialPort("COM2", 9600, Parity.Even, 8, StopBits.One);
serialPort.Open();
var stream = new SerialStream(serialPort);
stream.ReadTimeout = 2000;
// send request and waiting for response
// the request needs: slaveId, dataAddress, registerCount
var responseBytes = stream.RequestFunc3(slaveId, dataAddress, registerCount);
// extract the content part (the most important in the response)
var data = responseBytes.ToResponseFunc3().Data;
What I want to do?
Convert each of the 28 bytes to hex one by one and save them in separate variables, like
hex1 = byte[0], hex2 = byte[1], hex3 = byte[2], hex4 = byte[3]
..... hex28 = byte[27]
Combine each group of 4 hex bytes, convert it into a float, and assign it to a variable that holds the floating-point value, like
v1 = Tofloat(hex1,hex2,hex3,hex4); // assuming ToFloat() is a function.
How can I achieve it?
Since you mentioned that the first value is 2.4 and each float is represented by 4 bytes:
byte[] data = { 64, 25, 153, 154, 66, 157, 20, 123, 66, 221, 174, 20, 65, 204, 0, 0, 65, 163, 51, 51, 66, 95, 51, 51, 69, 10, 232, 0 };
We can group the bytes into 4-byte blocks, reverse each block (to fix the byte order), and convert each part to a float:
int offset = 0;
float[] dataFloats =
data.GroupBy(x => offset++ / 4) // group by 4. 0/4 = 0, 1/4 = 0, 2/4 = 0, 3/4 = 0 and 4/4 = 1 etc.
// Need to reverse the bytes to make them evaluate to 2.4
.Select(x => BitConverter.ToSingle(x.ToArray().Reverse().ToArray(), 0))
.ToArray();
Now you have an array of 7 floats, the first of which is 2.4.
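For reference, here is a non-LINQ sketch of the same idea (copy each 4-byte group, reverse it to undo the big-endian ordering, and convert); it assumes the input length is a multiple of 4 and a little-endian host:
// Sketch: same conversion without LINQ. Assumes data.Length is a multiple of 4,
// big-endian floats on the wire, and a little-endian host (the usual case).
static float[] ToFloats(byte[] data)
{
    var result = new float[data.Length / 4];
    for (int i = 0; i < result.Length; i++)
    {
        var chunk = new byte[4];
        Array.Copy(data, i * 4, chunk, 0, 4);
        Array.Reverse(chunk);                       // big-endian -> little-endian
        result[i] = BitConverter.ToSingle(chunk, 0);
    }
    return result;
}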

How to revert array to original form after conditional permute

My code:
var unpermuted = new byte[]{137, 208, 135, 4, 191, 255, 132, 99, 85, 54, 58, 137, 208, 37, 151, 30};
var longKey = new byte[] {75, 79, 84, 69, 197, 129, 75, 65, 74, 65, 75, 75, 79, 84, 69, 197, 129, 75, 65, 74, 65};
var permuted = (byte[])unpermuted.Clone();
for(var i = 0; i < permuted.Length;i++)
{
if (i > 1 && (permuted[i] < longKey[i]))
{
var swapCont = permuted[i - 1];
permuted[i - 1] = permuted[i];
permuted[i] = swapCont;
}
}
printArr(unpermuted);
Console.WriteLine();
printArr(permuted);
// How do I reverse permuted array to unpermuted?
Console.WriteLine();
printArr(permuted);
}
public static void printArr(byte[] arr)
{
for(var i = 0; i < arr.Length;i++)
{
Console.Write(arr[i]);
Console.Write(" ");
}
}
I have an unpermuted array and make a deep copy of it; then, whenever the key value is higher than the element value, I swap the element with the previous one.
And the question is:
How to revert permuted array to unpermuted form having only LongKey Array and Permuted array?
It's not possible to "unpermute" the array, given the information that you have.
Imagine that you have the following:
longkey = [1,2,3,9,5]
array = [3,4,9,5,6]
Running your code, the result will be [3,4,5,9,6].
But if the original array is [3,4,5,9,6], the result is the same.
As you can see, there are multiple permutations of the original array that give the same output. And there's not enough information in the result and the longkey array to tell you what the original array was.
In general, if you have a 3-item sequence anywhere in which the following is true, then it's not possible to reliably reverse the operation.
longkey = [b, c, d]
array = [x, y, z]
Where:
b <= x, b <= y
c < x, c > y
d <= z
For example:
longkey = [...,3,9,5,...]
array = [...,9,5,6,...]
The key here is that the 9 can never swap with the thing to its left, and the 6 will never swap with the 5. So the 5 and 9 cannot move except to swap places with each other. If the original order is [9,5,6], the final order will be [5,9,6]. And if the original order is [5,9,6], the result is, again, [5,9,6].
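To make that concrete, here is a minimal sketch (reusing the question's swap logic with the 5-element example above) showing two different inputs that permute to the same output:
static byte[] Permute(byte[] input, byte[] longKey)
{
    var output = (byte[])input.Clone();
    for (var i = 0; i < output.Length; i++)
    {
        if (i > 1 && output[i] < longKey[i])
        {
            var swapCont = output[i - 1]; // same swap as in the question
            output[i - 1] = output[i];
            output[i] = swapCont;
        }
    }
    return output;
}
// Both calls return { 3, 4, 5, 9, 6 }, so the original cannot be recovered:
// Permute(new byte[] { 3, 4, 9, 5, 6 }, new byte[] { 1, 2, 3, 9, 5 })
// Permute(new byte[] { 3, 4, 5, 9, 6 }, new byte[] { 1, 2, 3, 9, 5 })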

Error splitting an array into two

So I need to cut off the first 16 bytes from my byte array. I followed another post I saw on Stack Overflow to use the following code:
//split message into iv and encrypted bytes
byte[] iv = new byte[16];
byte[] workingHash = new byte[rage.Length - 16];
//put first 16 bytes into iv
for (int i = 0; i < 16; i++)
{
iv[i] = rage[i];
}
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length);
What we are trying here is to cut off the first 16 bytes from the byte[] rage and put the rest into byte[] workingHash
The error occurs at Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length);
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
Any help will be much appreciated.
The problem is trivial: Buffer.BlockCopy's last argument requires the correct number of bytes to be copied, which (taking the starting index into account) must not exceed the array's bounds (docs).
Hence the code should look like this, avoiding any for loops:
Buffer.BlockCopy(rage, 0, iv, 0, 16);
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length - 16);
Notice the "- 16" on the second line, which fixes the original code. The first line replaces the for loop for the sake of consistency.
Let's assume rage is a byte array of length 20:
var rage = new byte[20]
{
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
};
After byte[] iv = new byte[16];, iv will contain:
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
After byte[] workingHash = new byte[rage.Length - 16];, workingHash will contain:
{ 0, 0, 0, 0 }
After the for loop iv is:
{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 }
You need:
Buffer.BlockCopy(rage, 16, workingHash, 0, rage.Length - 16);
Copy rage.Length - 16 (here 4) bytes from rage, starting at index 16 (the element with value 17), into workingHash starting at index 0.
The result:
{ 17, 18, 19, 20 }
By the way, there is a very readable alternative, probably not as fast as copying arrays, but worth mentioning:
var firstSixteenElements = rage.Take(16).ToArray();
var remainingElements = rage.Skip(16).ToArray();
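If you're on C# 8 or later, range indexing on arrays gives the same result (each expression also allocates a new array under the hood, like Take/Skip + ToArray):
// C# 8+ range syntax on arrays:
var firstSixteen = rage[..16];  // first 16 bytes
var remaining = rage[16..];     // everything after the first 16 bytes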
Fixed:
//split message into iv and encrypted bytes
byte[] iv = new byte[16];
byte[] workingHash = new byte[rage.Length - 16];
//put first 16 bytes into iv
for (int i = 0; i < 16; i++)
{
iv[i] = rage[i];
}
for (int i = 0; i < rage.Length - 16; i++)
{
workingHash[i] = rage[i + 16];
}

De Bruijn algorithm binary digit count 64bits C#

I'm using the "De Bruijn" algorithm to find the number of binary digits that a big number (up to 64 bits) has.
For example:
1022 has 10 digits in binary.
130 has 8 digits in binary.
I found that using a table lookup based on De Bruijn lets me calculate this about 100 times faster than conventional ways (powers, squares, ...).
According to this website, there is a 2^6 table for calculating 64-bit numbers. This would be the table expressed in C#:
static readonly int[] MultiplyDeBruijnBitPosition2 = new int[64]
{
0,1,2,4,8,17,34,5,11,23,47,31,63,62,61,59,
55,46,29,58,53,43,22,44,24,49,35,7,15,30,60,57,
51,38,12,25,50,36,9,18,37,10,21,42,20,41,19,39,
14,28,56,48,33,3,6,13,27,54,45,26,52,40,16,32
};
(I don't know if I copied the table from that website correctly.)
Then, based on R..'s comment here, I should use this to apply the table to the input uint64 number:
public static int GetLog2_DeBruijn(ulong v)
{
return MultiplyDeBruijnBitPosition2[(ulong)(v * 0x022fdd63cc95386dull) >> 58];
}
But the C# compiler doesn't allow me to use "0x022fdd63cc95386dull" because it overflows 64 bits, so I have to use "0x022fdd63cc95386d" instead.
Using that code, the problem is that I am not getting the correct result for the given input.
For example, doing 1,000,000 calculations of the number
17012389719861204799 (all 64 bits used), this is the result:
Using the pow2 method I get the result 64, one million times, in 1380ms.
Using the DeBruijn method I get the result 40, one million times, in 32ms. (Don't know why 40.)
I'm trying to understand how "De Bruijn" works, how I can fix this, and how to create final C# code that handles numbers up to 64 bits.
UPDATE and benchmarks of different solutions
I was looking for the fastest algorithm to get the number of binary digits that a given unsigned 64-bit number (known as ulong in C#) has.
For example:
1024 has 11 binary digits. (2^10+1) or (log2[1024]+1)
9223372036854775808 has 64 binary digits. (2^63+1) or (log2[2^63]+1)
The conventional power-of-2-and-square approach is extremely slow: just 10,000 calculations need 1500ms to get the answer (100M calculations would need hours).
Here, Niklas B., Jim Mischel, and Spender contributed different methods to make this faster.
SIMD and SWAR Techniques //Provided by Spender (Answer here)
De_Bruijn Splited 32bits //Provided by Jim Mischel (Answer here)
De_Bruijn 64bits version //Provided by Niklas B. (Answer here)
De_Bruijn 128bits version //Also provided by Niklas B. (Answer here)
Testing these methods on a Q6600 CPU overclocked to 3GHz under Windows 7 (64-bit) gives the following results.
It takes just a few seconds to correctly answer 100,000,000 requests, with the De_Bruijn 128-bit version being the fastest.
Thanks a lot to all of you; you helped me a lot with this. I hope this helps you too.
You should check R..'s answer and his resource again. The question that he responded to was how to find the log2 for powers of two.
The bit twiddling website says that the simple multiplication + shift only works "If you know that v is a power of 2". Otherwise you need to round up to the next power of two first:
static readonly int[] bitPatternToLog2 = new int[64] {
0, // change to 1 if you want bitSize(0) = 1
1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28,
62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11,
63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10,
51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12
}; // table taken from http://chessprogramming.wikispaces.com/De+Bruijn+Sequence+Generator
static readonly ulong multiplicator = 0x022fdd63cc95386dUL;
public static int bitSize(ulong v) {
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v |= v >> 32;
// at this point you could also use popcount to find the number of set bits.
// That might well be faster than a lookup table because you prevent a
// potential cache miss
if (v == ulong.MaxValue) return 64; // all 64 bits set; avoids overflow on v++
v++;
return bitPatternToLog2[(ulong)(v * multiplicator) >> 58];
}
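A quick sanity check against the numbers from the question (a sketch; it assumes the table, multiplicator, and method above are pasted into one class):
// The question's examples: 1022 has 10 binary digits, 130 has 8.
Console.WriteLine(bitSize(1022)); // 10
Console.WriteLine(bitSize(130));  // 8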
Here is a version with a larger lookup table that avoids the branch and one addition. I found the magic number using random search.
static readonly int[] bitPatternToLog2 = new int[128] {
0, // change to 1 if you want bitSize(0) = 1
48, -1, -1, 31, -1, 15, 51, -1, 63, 5, -1, -1, -1, 19, -1,
23, 28, -1, -1, -1, 40, 36, 46, -1, 13, -1, -1, -1, 34, -1, 58,
-1, 60, 2, 43, 55, -1, -1, -1, 50, 62, 4, -1, 18, 27, -1, 39,
45, -1, -1, 33, 57, -1, 1, 54, -1, 49, -1, 17, -1, -1, 32, -1,
53, -1, 16, -1, -1, 52, -1, -1, -1, 64, 6, 7, 8, -1, 9, -1,
-1, -1, 20, 10, -1, -1, 24, -1, 29, -1, -1, 21, -1, 11, -1, -1,
41, -1, 25, 37, -1, 47, -1, 30, 14, -1, -1, -1, -1, 22, -1, -1,
35, 12, -1, -1, -1, 59, 42, -1, -1, 61, 3, 26, 38, 44, -1, 56
};
static readonly ulong multiplicator = 0x6c04f118e9966f6bUL;
public static int bitSize(ulong v) {
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v |= v >> 32;
return bitPatternToLog2[(ulong)(v * multiplicator) >> 57];
}
You should definitely check other tricks to compute the log2 and consider using the BSR assembly instruction if you are on x86(_64). It gives you the index of the most significant set bit, which is exactly what you need.
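As a hedged side note, if you can use a newer runtime (.NET Core 3.0 or later), System.Numerics.BitOperations exposes this as an intrinsic (usually a single LZCNT/BSR instruction), which is worth benchmarking against the table approaches:
using System.Numerics;

static class BitLength
{
    // Bit length via the built-in intrinsic; BitOperations.Log2(0) is defined as 0,
    // so zero is reported here as having 1 binary digit.
    public static int BitSize(ulong v) => v == 0 ? 1 : BitOperations.Log2(v) + 1;
}
// BitLength.BitSize(1022) -> 10, BitLength.BitSize(130) -> 8 (the question's examples)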
After perusing various bit-twiddling info, this is how I'd do it... don't know how this stacks up next to DeBruijn, but should be considerably faster than using powers.
ulong NumBits64(ulong x)
{
return (Ones64(Msb64(x) - 1ul) + 1ul);
}
ulong Msb64(ulong x)
{
//http://aggregate.org/MAGIC/
x |= (x >> 1);
x |= (x >> 2);
x |= (x >> 4);
x |= (x >> 8);
x |= (x >> 16);
x |= (x >> 32);
return(x & ~(x >> 1));
}
ulong Ones64(ulong x)
{
//https://chessprogramming.wikispaces.com/SIMD+and+SWAR+Techniques
const ulong k1 = 0x5555555555555555ul;
const ulong k2 = 0x3333333333333333ul;
const ulong k4 = 0x0f0f0f0f0f0f0f0ful;
x = x - ((x >> 1) & k1);
x = (x & k2) + ((x >> 2) & k2);
x = (x + (x >> 4)) & k4;
x = (x * 0x0101010101010101ul) >> 56;
return x;
}
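A quick check against the question's examples (assuming the three methods above live together in one class):
Console.WriteLine(NumBits64(1022)); // 10 binary digits
Console.WriteLine(NumBits64(130));  // 8 binary digits
// Caveat: NumBits64(0) returns 65, because Msb64(0) is 0 and the popcount then
// runs on 0 - 1 = ulong.MaxValue; guard for zero if that input can occur.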
When I looked into this a while back for 32 bits, the DeBruijn sequence method was by far the fastest. See https://stackoverflow.com/a/10150991/56778
What you could do for 64 bits is split the number into two 32-bit values. If the high 32 bits are non-zero, run the DeBruijn calculation on them and add 32. If the high 32 bits are zero, run the DeBruijn calculation on the low 32 bits.
Something like this:
int NumBits64(ulong val)
{
if (val > 0x00000000FFFFFFFFul)
{
// Value is greater than largest 32 bit number,
// so calculate the number of bits in the top half
// and add 32.
return 32 + GetLog2_DeBruijn((int)(val >> 32));
}
// Number is no more than 32 bits,
// so calculate number of bits in the bottom half.
return GetLog2_DeBruijn((int)(val & 0xFFFFFFFF));
}
static readonly int[] MultiplyDeBruijnBitPosition = new int[32]
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
int GetLog2_DeBruijn(int val)
{
uint v = (uint)val;
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
// returns the 0-based index of the highest set bit (i.e. floor(log2))
return MultiplyDeBruijnBitPosition[(v * 0x07C4ACDDU) >> 27];
}
Edit: This solution is not recommended, as it requires branching for zero.
After reading Niklas B's answer I spent a few hours researching this, and realized the magic multiplicator has to be in the last 16th of all possible values in order to suit a 64-element lookup table (I don't have the necessary knowledge to explain why).
So I used exactly the same generator mentioned by that answer to find the last sequence, here is the C# code:
// used generator from http://chessprogramming.wikispaces.com/De+Bruijn+Sequence+Generator
static readonly byte[] DeBruijnMSB64table = new byte[]
{
0 , 47, 1 , 56, 48, 27, 2 , 60,
57, 49, 41, 37, 28, 16, 3 , 61,
54, 58, 35, 52, 50, 42, 21, 44,
38, 32, 29, 23, 17, 11, 4 , 62,
46, 55, 26, 59, 40, 36, 15, 53,
34, 51, 20, 43, 31, 22, 10, 45,
25, 39, 14, 33, 19, 30, 9 , 24,
13, 18, 8 , 12, 7 , 6 , 5 , 63,
};
// the cyclic number has to be in the last 16th of all possible values
// any beyond the 62914560th (0x03C0_0000) should work for this purpose
const ulong DeBruijnMSB64multi = 0x03F79D71B4CB0A89uL; // the last one
public static byte GetMostSignificantBit(this ulong value)
{
value |= value >> 1;
value |= value >> 2;
value |= value >> 4;
value |= value >> 8;
value |= value >> 16;
value |= value >> 32;
return DeBruijnMSB64table[value * DeBruijnMSB64multi >> 58];
}
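A usage note, hedged: GetMostSignificantBit is written as an extension method, so it has to live in a static class; it returns the 0-based index of the highest set bit, so the binary digit count asked about in the question is that index plus one:
ulong value = 1022;
byte msbIndex = value.GetMostSignificantBit(); // 9
int digitCount = msbIndex + 1;                 // 10 binary digits, as in the question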

Hex to Byte Array in C# and Java Gives Different Results

First of all, sorry for the long post; I want to include all my thoughts so it's easier for you guys to find what's wrong with my code.
I want to transfer a hex string from a C# application to a Java application. But when I convert the same hex value to a byte array in both languages, the output is different.
For instance, the same Hex value gives
[101, 247, 11, 173, 46, 74, 56, 137, 185, 38, 40, 191, 204, 104, 83, 154]
in C# and
[101, -9, 11, -83, 46, 74, 56, -119, -71, 38, 40, -65, -52, 104, 83, -102]
in Java
Here are the methods I use in C#:
public static string ByteArrayToHexString(byte[] byteArray)
{
return BitConverter.ToString(byteArray).Replace("-",""); //To convert the whole array
}
public static byte[] HexStringToByteArray(string hexString)
{
byte[] HexAsBytes = new byte[hexString.Length / 2];
for (int index = 0; index < HexAsBytes.Length; index++)
{
string byteValue = hexString.Substring(index * 2, 2);
HexAsBytes[index] = byte.Parse(byteValue, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
}
return HexAsBytes;
}
And the ones in Java:
public static String ByteArrayToHexString(byte[] bytes) {
StringBuilder builder = new StringBuilder();
for (byte b: bytes) {
builder.append(String.format("%02x", b));
}
return builder.toString().toUpperCase();
}
public static byte[] HexStringToByteArray(String s) {
int len = s.length();
byte[] data = new byte[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
+ Character.digit(s.charAt(i+1), 16));
}
return data;
}
Here's an example in C#:
String hexString = "65F70BAD2E4A3889B92628BFCC68539A";
byte[] byteArray = HexBytes.HexStringToByteArray(hexString);
//Using the debugger, byteArray = [101, 247, 11, 173, 46, 74, 56, 137, 185, 38, 40, 191, 204, 104, 83, 154]
String hexString2 = HexBytes.ByteArrayToHexString(byteArray);
Console.Write("HEX: " + hexString2);
//Outputs 65F70BAD2E4A3889B92628BFCC68539A
And an example in Java:
String hexString = "65F70BAD2E4A3889B92628BFCC68539A";
byte[] byteArray = HexBytes.HexStringToByteArray(hexString);
//Using the debugger, byteArray = [101, -9, 11, -83, 46, 74, 56, -119, -71, 38, 40, -65, -52, 104, 83, -102]
String hexString2 = HexBytes.ByteArrayToHexString(byteArray);
System.out.println("HEX: " + hexString2);
//Outputs 65F70BAD2E4A3889B92628BFCC68539A
As you can see, when I do the opposite operation, the final hex value is equal to the first one, which means the conversion is presumably correct in each language individually. But I don't understand why the conversion from hex to a byte array differs between the two languages. I thought hexadecimal was simply a number in another base.
Thanks for the help
Cydrick
Update
I fixed this issue by replacing the C# code with the following code:
public static string ByteArrayToHexString(sbyte[] byteArray)
{
return BitConverter.ToString(convert(byteArray)).Replace("-", ""); //To convert the whole array
}
public static sbyte[] HexStringToByteArray(string hexString)
{
byte[] HexAsBytes = new byte[hexString.Length / 2];
for (int index = 0; index < HexAsBytes.Length; index++)
{
string byteValue = hexString.Substring(index * 2, 2);
HexAsBytes[index] = byte.Parse(byteValue, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
}
return convert(HexAsBytes);
}
private static sbyte[] convert(byte[] byteArray)
{
sbyte[] sbyteArray = new sbyte[byteArray.Length];
for (int i = 0; i < sbyteArray.Length; i++)
{
sbyteArray[i] = unchecked((sbyte) byteArray[i]);
}
return sbyteArray;
}
private static byte[] convert(sbyte[] sbyteArray)
{
byte[] byteArray = new byte[sbyteArray.Length];
for (int i = 0; i < byteArray.Length; i++)
{
byteArray[i] = (byte) sbyteArray[i];
}
return byteArray;
}
But, when I convert the same Hex value to a Byte Array on both languages, the output is different.
All you're seeing is that bytes are signed in Java and unsigned in C#. So if you add 256 to any negative value in Java, you'll get the value shown in C#. The actual bits in the values are the same - it's just a matter of whether the top bit is treated as a sign bit or not.
EDIT: As noted in comments, if you're ever using the byte as an integer outside the debugger output, you can always use:
int someInt = someByte & 0xff;
to get the unsigned value.
Looks like this is a debugger issue only. Those negative values you're seeing in the Java debugger are the signed equivalents to the unsigned values you're seeing in the C# debugger. For example, signed byte -9 == unsigned byte 247 (notice that they always differ by 256). The data is fine.
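To see the sign-bit effect directly from the C# side, a minimal sketch (the Java -9 is just the C# 247 reinterpreted as a signed byte):
byte unsignedView = 0xF7;                            // 247, as shown in the C# debugger
sbyte signedView = unchecked((sbyte)unsignedView);   // -9, as shown in the Java debugger
Console.WriteLine($"{unsignedView} / {signedView}"); // 247 / -9 -- same bits, differ by 256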
