Node.js to C# conversion - c#

I have this function in Node.js:
// @param {BigInteger} checksum
// @returns {Uint8Array}
function checksumToUintArray(checksum) {
var result = new Uint8Array(8);
for (var i = 0; i < 8; ++i) {
result[7 - i] = checksum.and(31).toJSNumber();
checksum = checksum.shiftRight(5);
}
return result;
}
What would be the equivalent in c#?
I'm thinking:
public static uint[] ChecksumToUintArray(long checksum)
{
var result = new uint[8];
for (var i = 0; i < 8; ++i)
{
result[7 - i] = (uint)(checksum & 31);
checksum = checksum >> 5;
}
return result;
}
But I'm not sure.
My main dilemma is the "BigInteger" type (but not only that).
Any help would be appreciated.

Uint8Array holds unsigned 8-bit integers. In C# that's byte (uint is an unsigned 32-bit integer), so Uint8Array corresponds to byte[].
JavaScript BigInteger corresponds to C# BigInteger (from the System.Numerics assembly or NuGet package), not to long. In some cases long might be enough: for example, if the JavaScript algorithm uses BigInteger only because JavaScript has no native 64-bit integer, then replacing it with long in C# is fine. But in general, without additional information about the expected ranges, a JavaScript BigInteger can hold far larger values than a C# long.
Knowing that, your method becomes:
public static byte[] ChecksumToUintArray(BigInteger checksum) {
var result = new byte[8];
for (var i = 0; i < 8; ++i) {
result[7 - i] = (byte) (checksum & 31);
checksum = checksum >> 5;
}
return result;
}
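For example, a quick check (this usage sketch is mine, not from the question, and it assumes a reference to System.Numerics): 2^40 - 1 has all forty low bits set, so every 5-bit group comes out as 31.
using System.Numerics;

BigInteger checksum = BigInteger.Pow(2, 40) - 1; // hypothetical sample value
byte[] groups = ChecksumToUintArray(checksum);
// groups is { 31, 31, 31, 31, 31, 31, 31, 31 } - eight 5-bit chunks, most significant first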

Related

convert from BitArray to 16-bit unsigned integer in c#

BitArray bits = new BitArray(16); // 16 bits
I have a BitArray and I want to convert its 16 bits to an unsigned integer in C#.
I cannot use CopyTo for the conversion; is there another method to get a UInt16 from the 16 bits?
You can do it like this:
UInt16 res = 0;
for (int i = 0 ; i < 16 ; i++) {
if (bits[i]) {
res |= (UInt16)(1 << i);
}
}
This algorithm checks the 16 least significant bits one by one, and uses the bitwise OR operation to set the corresponding bit of the result.
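For example, a quick sanity check (my own usage sketch; bit index 0 is treated as the least significant bit):
var bits = new BitArray(16);
bits[0] = true; // contributes 1
bits[2] = true; // contributes 4
// running the loop above over these bits gives res == 5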
You can loop through it and compose the value itself.
var bits = new BitArray(16);
bits[1] = true;
var value = 0;
for (int i = 0; i < bits.Length; i++)
{
if (bits[i])
{
value |= (1 << i);
}
}
This should do the job:
private uint BitArrayToUnSignedInt(BitArray bitArray)
{
ushort res = 0;
for (int i = bitArray.Length - 1; i >= 0; i--)
{
if (bitArray[i])
{
res = (ushort)(res + (ushort) Math.Pow(2, bitArray.Length- i -1));
}
}
return res;
}
You can also check this other answer, already on Stack Overflow, to a similar question:
Convert bit array to uint or similar packed value

How can I calculate Longitudinal Redundancy Check (LRC)?

I've tried the example from wikipedia: http://en.wikipedia.org/wiki/Longitudinal_redundancy_check
This is the code for lrc (C#):
/// <summary>
/// Longitudinal Redundancy Check (LRC) calculator for a byte array.
/// ex) DATA (hex 6 bytes): 02 30 30 31 23 03
/// LRC (hex 1 byte ): EC
/// </summary>
public static byte calculateLRC(byte[] bytes)
{
byte LRC = 0x00;
for (int i = 0; i < bytes.Length; i++)
{
LRC = (LRC + bytes[i]) & 0xFF;
}
return ((LRC ^ 0xFF) + 1) & 0xFF;
}
It said the result is "EC" but I get "71"; what am I doing wrong?
Thanks.
Here's a cleaned-up version that doesn't do all those useless operations (instead of discarding the high bits every time, they're discarded all at once at the end), and it gives the result you observed. This is the version that uses addition, but it ends with a negation - so we might as well subtract and skip the negation. That's a valid transformation even in the case of overflow.
public static byte calculateLRC(byte[] bytes)
{
int LRC = 0;
for (int i = 0; i < bytes.Length; i++)
{
LRC -= bytes[i];
}
return (byte)LRC;
}
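As a quick check against the sample data from the question (hex 02 30 30 31 23 03), this version prints 0x47, i.e. the decimal 71 the question reports:
byte[] data = { 0x02, 0x30, 0x30, 0x31, 0x23, 0x03 };
Console.WriteLine(calculateLRC(data).ToString("X2")); // prints "47" (decimal 71)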
Here's the alternative LRC (a simple xor of bytes)
public static byte calculateLRC(byte[] bytes)
{
byte LRC = 0;
for (int i = 0; i < bytes.Length; i++)
{
LRC ^= bytes[i];
}
return LRC;
}
And Wikipedia is simply wrong in this case, both in the code (doesn't compile) and in the expected result.
Guess this one looks cooler ;)
public static byte calculateLRC(byte[] bytes)
{
return bytes.Aggregate<byte, byte>(0, (x, y) => (byte)(x ^ y));
}
If someone wants to get the LRC char from a string:
public static char CalculateLRC(string toEncode)
{
byte[] bytes = Encoding.ASCII.GetBytes(toEncode);
byte LRC = 0;
for (int i = 0; i < bytes.Length; i++)
{
LRC ^= bytes[i];
}
return Convert.ToChar(LRC);
}
The corrected Wikipedia version is as follows:
private byte calculateLRC(byte[] b)
{
byte lrc = 0x00;
for (int i = 0; i < b.Length; i++)
{
lrc = (byte)((lrc + b[i]) & 0xFF);
}
lrc = (byte)(((lrc ^ 0xff) + 2) & 0xFF);
return lrc;
}
I created this for Arduino to understand the algorithm (of course it's not written in the most efficient way)
String calculateModbusAsciiLRC(String input)
{
//Refer this document http://www.simplymodbus.ca/ASCII.htm
if((input.length()%2)!=0) { return "ERROR COMMAND SHOULD HAVE EVEN NUMBER OF CHARACTERS"; }
// Make sure to omit the leading colon (:) in the input string, and that the input String has an even number of characters
byte byteArray[input.length()+1];
input.getBytes(byteArray, sizeof(byteArray));
byte LRC = 0;
for (int i = 0; i <sizeof(byteArray)/2; i++)
{
// Getting the sum of all registers
uint x=0;
if(47<byteArray[i*2] && byteArray[i*2] <58) {x=byteArray[i*2] -48;}
else { x=byteArray[i*2] -55; }
uint y=0;
if(47<byteArray[i*2+1] && byteArray[i*2+1] <58) {y=byteArray[i*2+1] -48;}
else { y=byteArray[i*2+1] -55; }
LRC += x*16 + y;
}
LRC = ~LRC + 1; // Getting twos Complement
String checkSum = String(LRC, HEX);
checkSum.toUpperCase(); // Converting to upper case, e.g. bc to BC - optional, some devices are case insensitive
return checkSum;
}
I realize that this question is pretty old, but I had trouble figuring out how to do this. It's working now, so I figured I should paste the code (it's PHP). In my case, the checksum needs to be returned as an ASCII string.
public function getLrc($string)
{
$LRC = 0;
// Get hex checksum.
foreach (str_split($string, 1) as $char) {
$LRC ^= ord($char);
}
$hex = dechex($LRC);
// convert hex to string
$str = '';
for($i=0;$i<strlen($hex);$i+=2) $str .= chr(hexdec(substr($hex,$i,2)));
return $str;
}

Possible data conversion issue (char to unsigned char) - a software and firmware CRC32 interaction issue

My current issue is that I am computing a CRC32 hash in software and then checking it in the firmware; however, when I compute the hash in firmware it's double what it is supposed to be.
Software (written in C#):
public string SCRC(string input)
{
//Calculate CRC-32
Crc32 crc32 = new Crc32();
string hash = "";
byte[] convert = Encoding.ASCII.GetBytes(input);
MemoryStream ms = new MemoryStream(System.Text.Encoding.Default.GetBytes(input));
foreach (byte b in crc32.ComputeHash(ms))
hash += b.ToString("x2").ToLower();
return hash;
}
Firmware functions used (written in C):
unsigned long chksum_crc32 (unsigned char *block, unsigned int length)
{
register unsigned long crc;
unsigned long i;
crc = 0xFFFFFFFF;
for (i = 0; i < length; i++)
{
crc = ((crc >> 8) & 0x00FFFFFF) ^ crc_tab[(crc ^ *block++) & 0xFF];
}
return (crc ^ 0xFFFFFFFF);
}
/* chksum_crc32gentab() -- to a global crc_tab[256], this one will
* calculate the crcTable for crc32-checksums.
* it is generated to the polynom [..]
*/
void chksum_crc32gentab ()
{
unsigned long crc, poly;
int i, j;
poly = 0xEDB88320L;
for (i = 0; i < 256; i++)
{
crc = i;
for (j = 8; j > 0; j--)
{
if (crc & 1)
{
crc = (crc >> 1) ^ poly;
}
else
{
crc >>= 1;
}
}
crc_tab[i] = crc;
}
}
Firmware code where the functions above are called (written in C):
//CommandPtr should now be pointing to the rest of the command
chksum_crc32gentab();
HardCRC = chksum_crc32( (unsigned)CommandPtr, strlen(CommandPtr));
printf("Hardware CRC val is %lu\n", HardCRC);
Note: CommandPtr is a reference to the same data as the "string input" parameter in the software method.
Does anyone have any idea why I could be getting approximately double the value I compute in software? That is, HardCRC is double what it's supposed to be; I am guessing it has something to do with my unsigned char cast.

Fastest way to calculate sum of bits in byte array

I have two byte arrays of the same length. I need to perform an XOR operation between each pair of bytes and then calculate the sum of bits in the result.
For example:
11110000^01010101 = 10100101 -> so 1+1+1+1 = 4
I need do the same operation for each element in byte array.
Use a lookup table. There are only 256 possible values after XORing, so it's not exactly going to take a long time. Unlike izb's solution, though, I wouldn't suggest putting all the values in manually - compute the lookup table once at startup using one of the looping answers.
For example:
public static class ByteArrayHelpers
{
private static readonly int[] LookupTable =
Enumerable.Range(0, 256).Select(CountBits).ToArray();
private static int CountBits(int value)
{
int count = 0;
for (int i=0; i < 8; i++)
{
count += (value >> i) & 1;
}
return count;
}
public static int CountBitsAfterXor(byte[] array)
{
int xor = 0;
foreach (byte b in array)
{
xor ^= b;
}
return LookupTable[xor];
}
}
(You could make it an extension method if you really wanted...)
Note the use of byte[] in the CountBitsAfterXor method - you could make it an IEnumerable<byte> for more generality, but iterating over an array (which is known to be an array at compile-time) will be faster. Probably only microscopically faster, but hey, you asked for the fastest way :)
I would almost certainly actually express it as
public static int CountBitsAfterXor(IEnumerable<byte> data)
in real life, but see which works better for you.
Also note the type of the xor variable as an int. In fact, there's no XOR operator defined for byte values, and if you made xor a byte it would still compile due to the nature of compound assignment operators, but it would be performing a cast on each iteration - at least in the IL. It's quite possible that the JIT would take care of this, but there's no need to even ask it to :)
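For example, checking it against the sample from the question (my own usage sketch, not part of the original answer):
// 0xF0 ^ 0x55 == 0xA5 == 10100101, which has four bits set
int count = ByteArrayHelpers.CountBitsAfterXor(new byte[] { 0xF0, 0x55 });
Console.WriteLine(count); // 4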
Fastest way would probably be a 256-element lookup table...
int[] lut =
{
/*0x00*/ 0,
/*0x01*/ 1,
/*0x02*/ 1,
/*0x03*/ 2,
...
/*0xFE*/ 7,
/*0xFF*/ 8
};
e.g.
11110000^01010101 = 10100101 -> lut[165] == 4
This is more commonly referred to as bit counting. There are literally dozens of different algorithms for doing this. Here is one site which lists a few of the better-known methods. There are even CPU-specific instructions for doing this.
Theoretically, Microsoft could add a BitArray.CountSetBits function that gets JITed with the best algorithm for that CPU architecture. I, for one, would welcome such an addition.
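As an aside, newer runtimes already expose a hardware-backed population count: System.Numerics.BitOperations.PopCount, available since .NET Core 3.0. A sketch of the XOR-and-count task using it (the method name and shape are mine, and it assumes both arrays have the same length):
using System.Numerics;

static int CountBitsAfterXor(byte[] left, byte[] right)
{
    int total = 0;
    for (int i = 0; i < left.Length; i++)
    {
        // PopCount maps to a POPCNT instruction where the CPU supports it
        total += BitOperations.PopCount((uint)(left[i] ^ right[i]));
    }
    return total;
}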
As I understand it, you want to sum the bits of each XOR between the left and right bytes:
for (int b = 0; b < left.Length; b++) {
int num = left[b] ^ right[b];
int sum = 0;
for (int i = 0; i < 8; i++) {
sum += (num >> i) & 1;
}
// do something with sum maybe?
}
I'm not sure if you mean sum the bytes or the bits.
To sum the bits within a byte, this should work:
int nSum = 0;
for (int i=0; i<=7; i++)
{
nSum += (byte_val>>i) & 1;
}
You would then need the XORing and array looping around this, of course.
The following should do it:
int BitXorAndSum(byte[] left, byte[] right) {
int sum = 0;
for ( var i = 0; i < left.Length; i++) {
sum += SumBits((byte)(left[i] ^ right[i]));
}
return sum;
}
int SumBits(byte b) {
var sum = 0;
for (var i = 0; i < 8; i++) {
sum += (0x1) & (b >> i);
}
return sum;
}
It can be rewritten for ulong and use unsafe pointers, but byte is easier to understand:
static int BitCount(byte num)
{
// 0x5 = 0101 (bit) 0x55 = 01010101
// 0x3 = 0011 (bit) 0x33 = 00110011
// 0xF = 1111 (bit) 0x0F = 00001111
uint count = num;
count = ((count >> 1) & 0x55) + (count & 0x55);
count = ((count >> 2) & 0x33) + (count & 0x33);
count = ((count >> 4) & 0x0F) + (count & 0x0F);
return (int)count;
}
A general function to count bits could look like:
int Count1(byte[] a)
{
int count = 0;
for (int i = 0; i < a.Length; i++)
{
byte b = a[i];
while (b != 0)
{
count++;
b = (byte)((int)b & (int)(b - 1));
}
}
return count;
}
The fewer 1-bits, the faster this works. It simply loops over each byte and clears the lowest set bit of that byte until the byte becomes 0. The casts are necessary so that the compiler stops complaining about the type widening and narrowing.
Your problem could then be solved by using this:
int Count1Xor(byte[] a1, byte[] a2)
{
int count = 0;
for (int i = 0; i < Math.Min(a1.Length, a2.Length); i++)
{
byte b = (byte)((int)a1[i] ^ (int)a2[i]);
while (b != 0)
{
count++;
b = (byte)((int)b & (int)(b - 1));
}
}
return count;
}
A lookup table should be the fastest, but if you want to do it without a lookup table, this will work for bytes in just 10 operations.
public static int BitCount(byte value) {
int v = value - ((value >> 1) & 0x55);
v = (v & 0x33) + ((v >> 2) & 0x33);
return (v + (v >> 4)) & 0x0F;
}
This is a byte version of the general bit counting function described at Sean Eron Anderson's bit fiddling site.

C# - Converting a Sequence of Numbers into Bytes

I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
byte[] nByte = BitConverter.GetBytes((byte)i);
foreach (byte b in nByte)
{
byteList.Add(b);
}
}
for (int g = 256; g <= 1000; g++)
{
UInt16 st = Convert.ToUInt16(g);
byte[] xByte = BitConverter.GetBytes(st);
foreach (byte c in xByte)
{
byteList.Add(c);
}
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
uint num = (uint) value;
while (num >= 0x80)
{
this.Write((byte) (num | 0x80));
num = num >> 7;
}
this.Write((byte) num);
}
(taken from System.IO.BinaryWriter; it is what Write(String) uses to length-prefix strings).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
byte num3;
int num = 0;
int num2 = 0;
do
{
if (num2 == 0x23)
{
throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
}
num3 = this.ReadByte();
num |= (num3 & 0x7f) << num2;
num2 += 7;
}
while ((num3 & 0x80) != 0);
return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;
namespace EncodedNumbers
{
class Program
{
protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
{
uint num = (uint)value;
while (num >= 0x80)
{
bin.Write((byte)(num | 0x80));
num = num >> 7;
}
bin.Write((byte)num);
}
static void Main(string[] args)
{
MemoryStream ms = new MemoryStream();
BinaryWriter bin = new BinaryWriter(ms);
for(int i = 1; i < 1000; i++)
{
Write7BitEncodedInt(bin, i);
}
byte[] data = ms.ToArray();
int size = data.Length;
Console.WriteLine("Total # of Bytes = " + size);
Console.ReadLine();
}
}
}
The total size I get is 1871 bytes for numbers 1-1000.
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
byte[] bytes = BitConverter.GetBytes(value);
int skip = bytes.Length-1;
while (bytes[skip] == 0)
{
skip--;
}
for (int i = 0; i <= skip; i++)
{
bin.Write(bytes[i]);
}
}
This skips any leading zero bytes (working down from the MSB). So for 1-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1743 bytes (as opposed to 1871 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store the numbers above 255 in one byte. The easiest way would be to use short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values), as sketched below.
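Here is a sketch of that 10-bit packing idea (the helper below is mine, not a framework API, and it assumes every value fits in 10 bits). For 1,000 values that is 10,000 bits, i.e. 1,250 bytes, and unlike the variable-length schemes the receiver only needs to know the fixed 10-bit width to decode it.
using System.Collections.Generic;

static byte[] Pack10Bit(IEnumerable<int> values)
{
    var result = new List<byte>();
    int buffer = 0;   // bit accumulator
    int bitCount = 0; // bits currently held in the accumulator
    foreach (int v in values)
    {
        buffer |= (v & 0x3FF) << bitCount; // append the low 10 bits
        bitCount += 10;
        while (bitCount >= 8)
        {
            result.Add((byte)(buffer & 0xFF)); // emit a full byte
            buffer >>= 8;
            bitCount -= 8;
        }
    }
    if (bitCount > 0)
        result.Add((byte)(buffer & 0xFF)); // flush the leftover bits
    return result.ToArray();
}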
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
byte[] nByte = BitConverter.GetBytes(i);
foreach(byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
Will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class, or gin up a Huffman encoding implementation using an agreed-upon set of frequencies.
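For the "built-in class" route, something like DeflateStream from System.IO.Compression would do (a minimal sketch; how much it actually shrinks this particular stream depends on the data):
using System.IO;
using System.IO.Compression;

static byte[] Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var deflate = new DeflateStream(output, CompressionMode.Compress))
        {
            deflate.Write(data, 0, data.Length);
        } // disposing the DeflateStream flushes the compressed output
        return output.ToArray();
    }
}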
