Are bytes handled in the same way in C# and Java?

I'm moving a Java Android app to Windows Metro. The app makes heavy use of blobs and decoding (the blobs are encoded to take less space in the DB).
After copying the entire decoding code over, the result is slightly different.
There are some parts where the code checks whether the byte value is lower than 0. As I understand it, bytes in C# are always unsigned, so I don't understand why the result is not the same as in the Android app.
Here is a snippet.
for (int i = 0; i < length; i++) {
    s[six] = (byte) (blob[i] ^ pronpassword[ix]); // pronpassword is a string password
    if (s[six] == 0) {
        s[six + 1] = (byte) '-';
        s[six] ^= 128;
        s[six] = (byte) PRON_MAP[(byte) s[six]];
        six++;
    } else {
        s[six] = (byte) PRON_MAP[(byte) s[six]];
    }
    six++;
    ix++;
    if (ix == plen)
        ix = 0;
}
Thanks!

In Java, byte is signed. There is actually no such thing as an unsigned byte in Java. It's equivalent to C#'s sbyte, so that's the type you should port it to.
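If you keep the ported type as byte, the "is it negative?" branch can never fire, because a C# byte ranges from 0 to 255. A minimal sketch of the difference (values chosen purely for illustration):

byte u = 0xC8;            // 200: C#'s byte is unsigned
sbyte s = (sbyte)u;       // -56: the same bit pattern interpreted as signed, like Java's byte
Console.WriteLine(u < 0); // False: an unsigned byte is never below zero
Console.WriteLine(s < 0); // True: this is the branch the Java code takes

So porting the buffers to sbyte[] (or explicitly re-creating the signed comparisons) should make the two decoders agree.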

Related

Converting IntPtr to Audio Data Array

I've been trying to implement WebRTC audio/video communication in Unity. So far, with the help of this blog post (and Google Translate) and the very few examples on the internet, I've managed to get quite a lot working in Unity using the WebRTC Unity Plugin.
But now I'm stuck. The great Unity sample project by mhama sadly doesn't have an example of how to convert the data I get from the native code to something that can be used as audio data in Unity.
The info I get from the callback is this
(IntPtr data, int bitsPerSample, int sampleRate, int numberOfChannels, int numberOfFrames)
That data in the native code is declared as
const void* audio_data
I know that to create an AudioClip that Unity can play, I need a float array with sample values from -1 to 1. How to go from that IntPtr data and all that extra info to such a float array is something I have no idea how to do.
Here's the sample I'm using as a base
I'm not sure you can do this without unsafe code. You'll need to make sure your project has unsafe code allowed.
// Allocate the float arrays for the output:
// numberOfChannels x numberOfFrames.
float[][] output = new float[numberOfChannels][];
for (int ch = 0; ch < numberOfChannels; ++ch)
    output[ch] = new float[numberOfFrames];

// Scaling factor for converting signed PCM into float (-1.0 to 1.0).
const float scaleIntToFloat = 1.0f / 0x7fffffff;

unsafe
{
    // Obtain a pointer to the raw PCM audio data.
    byte* ptr = (byte*)data.ToPointer();
    for (int frame = 0; frame < numberOfFrames; ++frame)
    {
        for (int ch = 0; ch < numberOfChannels; ++ch)
        {
            int intValue;
            switch (bitsPerSample)
            {
                case 32:
                    // Combine 4 bytes into the integer value (OR, not AND).
                    intValue = *ptr++ << 24 | *ptr++ << 16 |
                               *ptr++ << 8  | *ptr++;
                    // Scale the int to float and store.
                    output[ch][frame] = scaleIntToFloat * intValue;
                    break;
                case 16:
                    // Combine 2 bytes into the integer value. Note:
                    // shifting into the upper 16 bits to simplify things,
                    // i.e. multiply by the same scaling factor.
                    intValue = *ptr++ << 24 | *ptr++ << 16;
                    output[ch][frame] = scaleIntToFloat * intValue;
                    break;
                case 24:
                    ...
                case 8:
                    // Note: 8-bit PCM is typically unsigned. Google it if
                    // you need to.
            }
        }
    }
}
You can find the other conversions by searching this site for conversion of PCM to float. Also, depending on your circumstances, you might need a different endianness. If so, shift into the intValue in a different byte order.
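If enabling unsafe code is not an option for you, a rough managed alternative is to copy the buffer into a byte[] first with Marshal.Copy and decode from there. This sketch assumes 16-bit interleaved little-endian PCM, which is common for WebRTC audio but not guaranteed by the callback:

int sampleCount = numberOfFrames * numberOfChannels;
byte[] raw = new byte[sampleCount * 2]; // 2 bytes per sample at 16 bits
System.Runtime.InteropServices.Marshal.Copy(data, raw, 0, raw.Length);

float[][] output = new float[numberOfChannels][];
for (int ch = 0; ch < numberOfChannels; ++ch)
    output[ch] = new float[numberOfFrames];

const float scaleShortToFloat = 1.0f / 32768.0f;
for (int frame = 0; frame < numberOfFrames; ++frame)
{
    for (int ch = 0; ch < numberOfChannels; ++ch)
    {
        // BitConverter follows the machine's endianness (little-endian on
        // typical Unity targets), so the samples come out in native order.
        short sample = BitConverter.ToInt16(raw, (frame * numberOfChannels + ch) * 2);
        output[ch][frame] = sample * scaleShortToFloat;
    }
}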

System.OverflowException computing CRC-16 on C#

I'm trying to use the code provided in this answer but I get a System.OverflowException on the line byte index = (byte)(crc ^ bytes[i]); This happens on the next iteration after the first non-zero byte. I'm not sure what to check.
Thanks in advance.
SharpDevelop Version : 5.1.0.5134-RC-d5052dc5
.NET Version : 4.6.00079
OS Version : Microsoft Windows NT 6.3.9600.0
It may be that you're building with arithmetic overflow checking enabled, while the answer assumes that it is not. By default, checking is disabled, so it's not uncommon to see this assumption made.
In the code in question:
public static ushort ComputeChecksum(byte[] bytes)
{
    ushort crc = 0;
    for (int i = 0; i < bytes.Length; ++i)
    {
        byte index = (byte)(crc ^ bytes[i]);
        crc = (ushort)((crc >> 8) ^ table[index]);
    }
    return crc;
}
crc is an unsigned short while index is a byte, so (crc ^ bytes[i]) could clearly be larger than 255 and make the conversion to byte overflow in a checked environment.
If I change the line to be explicitly unchecked:
byte index = unchecked((byte)(crc ^ bytes[i]));
Then the overflow no longer occurs.
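You can see the two behaviours side by side with a minimal sketch (values picked only to force the high byte to be non-zero):

ushort crc = 0x1234;
byte b = 0xFF;
byte ok = unchecked((byte)(crc ^ b)); // 0xCB: the extra bits are silently discarded
byte boom = checked((byte)(crc ^ b)); // throws System.OverflowException, since 0x12CB > 255

Alternatively, you can turn off "Check for arithmetic overflow/underflow" in your project's build settings so the whole project behaves the way the original answer expects.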

8 byte array back to long (C# to c++)

I'm converting a long to an 8-slot byte array with C#:
Byte[] Data = BitConverter.GetBytes(data.LongLength);
For example, if data.LongLength is 172085, I get the following array: { 53, 160, 2, 0, 0, 0, 0, 0 }.
But then, after I send this to my C++ server, I would like to turn it back into a long.
I tried this, but without success:
long fileLength = 0;
for (int i = 0; i < 8; ++i)
    fileLength = (fileLength << 8) + Data[i];
Whenever you send data across a network, you have to mind endianness.
In your case, it looks like the proper way to recreate the long from the byte array would be to reconstruct it from right to left:
long fileLength = 0;
for (int i = 7; i >= 0; i--)
    fileLength = (fileLength << 8) + Data[i];
But this will not always be the case. Depending on the hardware and operating system at the end points, and the network transfer protocols you use you may have data coming in big-endian or little-endian format, and the receiving end may be little-endian or big-endian.
From the documentation:
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
It looks like on your hardware the array is sent with its least significant bytes first. Therefore, you should start your loop from the end of the array:
int64_t fileLength = 0;
for (int i = 7; i >= 0; --i)
    fileLength = (fileLength << 8) + Data[i];
Demo. (prints 172085)
In order to achieve better compatibility with C# you should use a system-independent 64-bit integral type instead of long, i.e. int64_t.
If both ends have the same byte order (C# on common platforms is little-endian, and your C++ target probably is too):
long long fileLength;
memcpy(&fileLength, Data, 8);
The optimizing compiler will almost certainly turn that into a single 64-bit move, so don't worry that it looks like an expensive function call.
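If you control the C# side as well, another option is to agree on a fixed byte order before the data hits the wire. A sketch of the sending side using IPAddress.HostToNetworkOrder, which converts to big-endian on little-endian machines:

// C# sender: put the length into network (big-endian) byte order,
// so the receiver can decode it the same way regardless of platform.
long length = data.LongLength;
byte[] wire = BitConverter.GetBytes(System.Net.IPAddress.HostToNetworkOrder(length));
// The C++ side then rebuilds the value most-significant byte first,
// i.e. with the original forward loop instead of the reversed one.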

How do I properly loop through and print bits of an Int, Long, Float, or BigInteger?

I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits of an int) I can figure out what the backfill is, and perhaps answer some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;
    while (bits != 0)
    {
        bits >>= 1;
        isBitSet = // somehow do an | operation on the first bit.
                   // I'm unsure if it's possible to handle different data types here
                   // or if unsafe code and a PTR is needed
        if (isBitSet)
            sb.Append("1");
        else
            sb.Append("0");
    }
}
Convert.ToString(56, 2).PadLeft(8, '0') returns "00111000".
This is for a byte; it works for an int as well, just increase the pad width (e.g. PadLeft(32, '0')).
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after), otherwise you'd miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
But a better option would be to use the static methods of the BitConverter class to get the actual bytes used to represent the number in memory into a byte array. The advantage (or disadvantage depending on your needs) of this method is that this reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while (bitPos < 8 * bytes.Length)
{
    int byteIndex = bitPos / 8;
    int offset = bitPos % 8;
    bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
    // isSet == true if the bit at bitPos is set, false otherwise
    bitPos++;
}
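Putting the pieces together, a GetBits built on that loop might look like the sketch below. It prints the most significant bit first and assumes little-endian byte order from BitConverter, which is the common case:

static string GetBits(int num)
{
    byte[] bytes = BitConverter.GetBytes(num);
    StringBuilder sb = new StringBuilder();
    // On little-endian hardware bytes[0] is the least significant byte,
    // so walk the array backwards to print the most significant bit first.
    for (int byteIndex = bytes.Length - 1; byteIndex >= 0; byteIndex--)
        for (int offset = 7; offset >= 0; offset--)
            sb.Append((bytes[byteIndex] & (1 << offset)) != 0 ? '1' : '0');
    return sb.ToString();
}

// GetBits(56) ends in "00111000", matching Convert.ToString(56, 2) padded to 32 bits.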

How to set each bit in a byte array

How do I set each bit in the following byte array which has 21 bytes or 168 bits to either zero or one?
byte[] logonHours
Thank you very much
Well, to clear every bit to zero you can just use Array.Clear:
Array.Clear(logonHours, 0, logonHours.Length);
Setting each bit is slightly harder:
for (int i = 0; i < logonHours.Length; i++)
{
    logonHours[i] = 0xff;
}
If you find yourself filling an array often, you could write an extension method:
public static void FillArray<T>(this T[] array, T value)
{
    // TODO: Validation
    for (int i = 0; i < array.Length; i++)
    {
        array[i] = value;
    }
}
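With that in place, filling the whole array is a one-liner (the byte cast just keeps type inference happy):

logonHours.FillArray((byte)0xff); // every bit set
logonHours.FillArray((byte)0x00); // every bit cleared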
BitArray.SetAll:
System.Collections.BitArray a = new System.Collections.BitArray(logonHours);
a.SetAll(true);
Note that this copies the data from the byte array. It's not just a wrapper around it.
This may be more than you need, but ...
Usually when dealing with individual bits in any data type, I define a const for each bit position, then use the binary operators |, &, and ^.
For example:
const byte bit1 = 1;
const byte bit2 = 2;
const byte bit3 = 4;
const byte bit4 = 8;
.
.
const byte bit8 = 128;
Then you can turn whatever bits you want on and off using the bit operations.
byte byTest = 0;
byTest = byTest | bit4;
would turn bit 4 on but leave the rest untouched.
You would use the & and ^ to turn them off or do more complex exercises.
Obviously, since you only want to turn all the bits on or off, you can just set each byte to 255 or 0. That would turn them all on or off.
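For completeness, a short sketch of the other operations mentioned, using the same constants:

byte byTest = 0;
byTest |= bit4;                  // turn bit 4 on
byTest = (byte)(byTest & ~bit4); // turn bit 4 off, leaving the others untouched
byTest ^= bit8;                  // toggle bit 8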
