How to convert given CRC16 C algorithm to C# algorithm - c#

I'm a junior developer.
I have a conversion problem with the CRC-16 check algorithm below.
I have to convert the following C/C++ CRC-16 algorithm to a C# algorithm.
Here is the CRC-16 algorithm:
unsigned short Crc16(unsigned char* rdata, unsigned int len){
    int i, n;
    unsigned short wCh, wCrc = 0XFFFF;
    for (i = 0; i < len; i++){
        wCh = (uword)*(rdata + i);
        for (n = 0; n < 8; n++){
            if ((wCh^wCrc) & 0x0001)
                wCrc = (wCrc >> 1) ^ 0xA001;
            else
                wCrc >>= 1;
            wCh >>= 1;
        }
    }
    return wCrc;
}
I'm stuck on this problem.
I tried to convert the algorithm directly in my C# (WinForms) project, but I can't solve the type-matching problem.
(e.g. unsigned => ushort, unsigned char* => ???? 'I have no idea')
I also tried to build the code above as a DLL and then import the DLL into my C# project, but I still can't solve the type-matching problem.
(e.g. [DllImport("Crc_dll.dll")] public static extern ushort Crc16(unsigned char* rdata, unsigned int len); => how do I convert unsigned char* and unsigned int?)
If anybody knows, could you help me please?
Since the algorithm above was given by the client, I can't use other CRC-16 implementations.

unsigned char* rdata should just be a byte[] rdata in C#. Also, an integer is not implicitly converted to bool in C# (#1), and a wider result is not implicitly converted to a narrower destination type (#2).
ushort Crc16(byte[] rdata, int len){
    int i, n;
    ushort wCh, wCrc = 0xFFFF;
    for (i = 0; i < len; i++){
        wCh = rdata[i];
        for (n = 0; n < 8; n++){
            if (((wCh ^ wCrc) & 0x0001) != 0) // #1
                wCrc = (ushort)((wCrc >> 1) ^ 0xA001); // #2
            else
                wCrc >>= 1;
            wCh >>= 1;
        }
    }
    return wCrc;
}
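For reference, a minimal call site might look like this (the sample payload is just an assumption for illustration; you could also drop the len parameter in C# and use rdata.Length inside the method):
byte[] data = { 0x01, 0x02, 0x03, 0x04 };   // sample payload (assumed)
ushort crc = Crc16(data, data.Length);      // same arguments as the C version
Console.WriteLine("CRC-16: 0x" + crc.ToString("X4"));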

Related

Node.js to c# converting

I have this function in Node.js:
// @param {BigInteger} checksum
// @returns {Uint8Array}
function checksumToUintArray(checksum) {
    var result = new Uint8Array(8);
    for (var i = 0; i < 8; ++i) {
        result[7 - i] = checksum.and(31).toJSNumber();
        checksum = checksum.shiftRight(5);
    }
    return result;
}
What would be the equivalent in C#?
I'm thinking:
public static uint[] ChecksumToUintArray(long checksum)
{
    var result = new uint[8];
    for (var i = 0; i < 8; ++i)
    {
        result[7 - i] = (uint)(checksum & 31);
        checksum = checksum >> 5;
    }
    return result;
}
But I'm not sure.
My main dilemma is the "BigInteger" type (but not only).
Any help would be appreciated.
UInt8 is "unsigned 8-bit integer". In C# that's byte, because uint is "unsigned 32-bit integer". So Uint8Array is byte[].
The JavaScript BigInteger corresponds to the C# BigInteger (from the System.Numerics assembly or NuGet package), not to long. In some cases long might be enough: for example, if BigInteger is used in the JavaScript algorithm only because there is no 64-bit integer type in JavaScript, then it's fine to replace it with long in C#. But in general, without any additional information about the expected ranges, the range of a JavaScript BigInteger is much bigger than the range of a C# long.
Knowing that, your method becomes:
public static byte[] ChecksumToUintArray(BigInteger checksum) {
    var result = new byte[8];
    for (var i = 0; i < 8; ++i) {
        result[7 - i] = (byte)(checksum & 31);
        checksum = checksum >> 5;
    }
    return result;
}
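A minimal usage sketch, assuming the method above is in scope, System.Numerics is referenced, and the checksum value is an arbitrary example:
using System;
using System.Numerics;

BigInteger checksum = BigInteger.Parse("1234567890123456789"); // arbitrary sample value
byte[] groups = ChecksumToUintArray(checksum);
// each element holds one 5-bit group (0..31), most significant group first
Console.WriteLine(string.Join(",", groups));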

Store multiple chars in a long and recover them

The following code is used to pack multiple values into a long. The long is used as a key in a C++ unordered_map. It lets me key the map with a single number instead of a complex structure and ifs on each property. I want the map lookups to be as efficient as possible.
DWORD tmpNo = object->room->details->No;
unsigned char compactNo = tmpNo;
unsigned __int16 smallX = object->x;
unsigned __int16 smallY = object->y;
unsigned __int64 longCode = 0;
longCode = (item->code[0] << 56) |
           (item->code[1] << 48) |
           (item->code[2] << 40) |
           (compactNo << 32) |
           (smallX << 24) |
           (smallY << 8);
Am I using the | operator correctly here?
To recover the values, I tried:
unsigned char c0 = key >> 56;
unsigned char c1 = key >> 48;
unsigned char c2 = key >> 40;
etc., but it didn't work.
Is it because the original item->code values are chars and not unsigned chars (the values are always positive though)?
Also, in an ideal world, the long's values would be recovered in a .NET DLL. Is it possible to do that in C#?
C# has a byte type for an 8-bit value, but otherwise the logic is similar.
Your | logic looks fine (except you should be shifting smallX by 16 and smallY by 0).
It would help if you gave a complete example.
But assuming that item->code[0] is a char or int (signed or unsigned), you need to convert it to a 64 bit type before shifting, otherwise you end up with undefined behaviour, and the wrong answer.
Something like
((unsigned __int64) item->code[0]) << 56
should work better.
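To illustrate that point for the C# side of the question, here is a minimal sketch of packing and unpacking the same fields into a ulong key, using the corrected shift amounts (16 for smallX, 0 for smallY); the field values are made up for the example:
// sample field values (assumed for illustration)
byte code0 = 0x41, code1 = 0x42, code2 = 0x43;
byte compactNo = 7;
ushort smallX = 1234, smallY = 5678;

// pack: cast each field to ulong before shifting, just like the 64-bit cast in the C++ fix above
ulong key = ((ulong)code0 << 56) | ((ulong)code1 << 48) | ((ulong)code2 << 40) |
            ((ulong)compactNo << 32) | ((ulong)smallX << 16) | smallY;

// unpack: shift each field back down; the narrowing cast keeps only that field's bits
byte c0 = (byte)(key >> 56);
byte c1 = (byte)(key >> 48);
byte c2 = (byte)(key >> 40);
byte no = (byte)(key >> 32);
ushort x = (ushort)(key >> 16);
ushort y = (ushort)key;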
I think that stdint.h is very useful for understanding this kind of implementation (sized integers make the intent clear). So here's the code:
#include <stdio.h>
#include <stdint.h>

uint8_t getValue8(int index, uint64_t container) {
    return (uint8_t)((container >> (index * 8)) & 0xFF);
}

void setValue8(int index, uint64_t* container, uint8_t value) {
    // left part of container, including the target byte (cleared by the ~0xFF mask) to be replaced by value
    int shift = index * 8;
    uint64_t mask = (uint64_t) ~0xFF;
    uint64_t left = (*container >> shift) & mask;
    left = (left | value) << shift;
    // right part of container (the bytes below the target byte)
    mask = ((uint64_t)1 << shift) - 1;
    uint64_t right = *container & mask;
    // update container
    *container = left | right;
}

int main() {
    uint64_t* container; // container: can hold 8 chars (64-bit sized container)
    uint64_t containerValue = 0;
    int n = 8; // n must be <= 8 for a 64-bit sized container
    uint8_t chars[n]; // eight char values to be stored
    // add/set values to container
    container = &containerValue;
    int i;
    for (i = 0; i < n; ++i) {
        chars[i] = (uint8_t)((i + 1) * 10);
        setValue8(i, container, chars[i]);
        printf("setValue8(%d, container, %d)\n", i, chars[i]);
    }
    // get values from container
    for (i = 0; i < n; ++i) {
        printf("getValue8(%d, container)=%d\n", i, getValue8(i, *container));
    }
    return 0;
}
The code uses only bit masks and some bitwise operations, so you can easily port it to C#. If you have any questions about it, just ask. I hope I have been helpful.
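As a rough sketch of that port (class and method names here are mine, not part of the original answer), the same helpers could look like this in C#; since safe C# has no raw pointers, SetValue8 returns the updated container instead of writing through a pointer:
using System;

static class ByteContainer
{
    // read the byte stored at position index (0 = least significant byte)
    public static byte GetValue8(int index, ulong container)
    {
        return (byte)((container >> (index * 8)) & 0xFF);
    }

    // return a copy of container with the byte at position index replaced by value
    public static ulong SetValue8(int index, ulong container, byte value)
    {
        int shift = index * 8;
        ulong cleared = container & ~(0xFFUL << shift); // clear the target byte
        return cleared | ((ulong)value << shift);       // write the new byte
    }

    static void Main()
    {
        ulong container = 0;
        for (int i = 0; i < 8; i++)
            container = SetValue8(i, container, (byte)((i + 1) * 10));
        for (int i = 0; i < 8; i++)
            Console.WriteLine("GetValue8({0}, container)={1}", i, GetValue8(i, container));
    }
}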

Converting Ada code to C#

So I need help converting this Ada code into C#; it's basically a checksum algorithm.
Ada:
CHECKSUM_VALUE := ((ROTATE_LEFT_1_BIT(CHECKSUM_VALUE)) xor (CURRENT_VALUE));
This is what I could come up with:
C#:
checksum = RotateLeft(checksum, rotateCount, sizeof(ushort) * 8) ^ word;
RotateLeft Function:
public static int RotateLeft(int value, ushort rotateCount, int dataSize)
{
    return (value << rotateCount) | (value >> (dataSize - rotateCount));
}
However, when I compare the checksum results from the Ada and C# algorithms, they do not match, so I think my conversion isn't correct. Input from anyone who has used Ada before would be really helpful.
Thanks
The issue seems to be with the C# and perhaps not with your interpretation of the Ada code. If you are truly rotating a 16-bit unsigned number as your post implies, then you will need to mask off the upper 2 bytes of the resulting integer value so that they do not contribute to the answer. Casting a uint x to ushort in C# does the equivalent of x & 0x0000FFFF.
public static ushort RotateLeft(ushort value, int count)
{
    int left = value << count;
    int right = value >> (16 - count);
    return (ushort)(left | right);
}
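A quick sanity check of that helper (the input value is just an example): rotating 0x8001 left by one bit should carry the high bit around to the low end.
ushort rotated = RotateLeft(0x8001, 1);
Console.WriteLine(rotated.ToString("X4")); // prints 0003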
This answer is in C, since I don’t have a C# compiler.
You have value as an int, which is signed, so that a right shift extends the sign bit into the vacated space; so in (value << rotateCount) | (value >> (dataSize - rotateCount)), the right-hand half ((value >> (dataSize - rotateCount))) needs to have the top bits masked off. And I don’t know why you need dataSize, isn’t it sizeof(value)?
I think a better solution would be to use unsigned, so that a right shift introduces zeros into the vacated space.
#include <stdio.h>

unsigned rotateLeft(unsigned value, int by) {
    const unsigned bits = sizeof(value) * 8;
    return (value << by) | (value >> (bits - by));
}

int main() {
    unsigned input = 0x52525252;
    unsigned result = input;
    printf("input: %x\n", input);
    {
        int j;
        for (j = 0; j < 8; j++) {
            result = rotateLeft(result, 1);
            printf("result: %x\n", result);
        }
    }
    return 0;
}
The output is
input: 52525252
result: a4a4a4a4
result: 49494949
result: 92929292
result: 25252525
result: 4a4a4a4a
result: 94949494
result: 29292929
result: 52525252
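For completeness, here is what the same test could look like in C# using uint (a sketch only; on newer .NET versions System.Numerics.BitOperations.RotateLeft does the same rotation):
using System;

class Program
{
    // rotate a 32-bit unsigned value left by 'by' bits (assumes 0 < by < 32)
    static uint RotateLeft(uint value, int by)
    {
        return (value << by) | (value >> (32 - by));
    }

    static void Main()
    {
        uint result = 0x52525252;
        Console.WriteLine("input: {0:x}", result);
        for (int j = 0; j < 8; j++)
        {
            result = RotateLeft(result, 1);
            Console.WriteLine("result: {0:x}", result);
        }
    }
}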

convert from BitArray to 16-bit unsigned integer in c#

BitArray bits = new BitArray(16); // size: 16 bits
I have this BitArray and I want to convert its 16 bits to an unsigned integer in C#.
I can't use CopyTo for the conversion; is there another method to convert the 16 bits to a UInt16?
You can do it like this:
UInt16 res = 0;
for (int i = 0; i < 16; i++) {
    if (bits[i]) {
        res |= (UInt16)(1 << i);
    }
}
This algorithm checks the 16 least significant bits one by one, and uses the bitwise OR operation to set the corresponding bit of the result.
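For example, with a couple of bits set (the positions are chosen arbitrarily), the loop above produces the expected packed value:
using System;
using System.Collections;

var bits = new BitArray(16);
bits[0] = true;   // contributes 1
bits[10] = true;  // contributes 1 << 10 = 1024

ushort res = 0;
for (int i = 0; i < 16; i++)
    if (bits[i])
        res |= (ushort)(1 << i);

Console.WriteLine(res); // prints 1025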
You can loop through it and compose the value yourself.
var bits = new BitArray(16);
bits[1] = true;

var value = 0;
for (int i = 0; i < bits.Length; i++)
{
    if (bits[i])
    {
        value |= (1 << i);
    }
}
This should do the job:
private uint BitArrayToUnSignedInt(BitArray bitArray)
{
    ushort res = 0;
    // note: this treats bitArray[Length - 1] as the least significant bit
    for (int i = bitArray.Length - 1; i >= 0; i--)
    {
        if (bitArray[i])
        {
            res = (ushort)(res + (ushort)Math.Pow(2, bitArray.Length - i - 1));
        }
    }
    return res;
}
You can also check another answer already on Stack Overflow for this question:
Convert bit array to uint or similar packed value

Data conversion issue possibly, char to unsigned char. A software and firmware CRC32 interaction issue

My current issue is that I am computing a CRC32 hash in software and then checking it in the firmware; however, when I compute the hash in firmware it's double what it is supposed to be.
Software (written in C#):
public string SCRC(string input)
{
    // Calculate CRC-32
    Crc32 crc32 = new Crc32();
    string hash = "";
    byte[] convert = Encoding.ASCII.GetBytes(input);
    MemoryStream ms = new MemoryStream(System.Text.Encoding.Default.GetBytes(input));
    foreach (byte b in crc32.ComputeHash(ms))
        hash += b.ToString("x2").ToLower();
    return hash;
}
Firmware functions used (written in C):
unsigned long chksum_crc32 (unsigned char *block, unsigned int length)
{
    register unsigned long crc;
    unsigned long i;

    crc = 0xFFFFFFFF;
    for (i = 0; i < length; i++)
    {
        crc = ((crc >> 8) & 0x00FFFFFF) ^ crc_tab[(crc ^ *block++) & 0xFF];
    }
    return (crc ^ 0xFFFFFFFF);
}

/* chksum_crc32gentab() -- to a global crc_tab[256], this one will
 * calculate the crcTable for crc32-checksums.
 * it is generated to the polynom [..]
 */
void chksum_crc32gentab ()
{
    unsigned long crc, poly;
    int i, j;

    poly = 0xEDB88320L;
    for (i = 0; i < 256; i++)
    {
        crc = i;
        for (j = 8; j > 0; j--)
        {
            if (crc & 1)
            {
                crc = (crc >> 1) ^ poly;
            }
            else
            {
                crc >>= 1;
            }
        }
        crc_tab[i] = crc;
    }
}
Firmware code where the functions above are called (written in C):
//CommandPtr should now be pointing to the rest of the command
chksum_crc32gentab();
HardCRC = chksum_crc32( (unsigned)CommandPtr, strlen(CommandPtr));
printf("Hardware CRC val is %lu\n", HardCRC);
Note: CommandPtr is a reference to the same data, named "string input", in the software method.
Does anyone have any idea why I could be getting approximately double the value I am using in the software? That is, HardCRC is double what it's supposed to be; I am guessing it has something to do with my unsigned char cast.
