Converting small C# checksum program into Java

I'm trying to build a simple ground control station for an RC airplane. I've almost finished it, but I'm having a LOT of trouble with the checksum calculation. I understand that the data types of Java and C# are different. I've attempted to account for that but I'm not sure I've succeeded. The program utilizes the CRC-16-CCITT method.
Here is my port:
public int crc_accumulate(int b, int crc) {
    int ch = (b ^ (crc & 0x00ff));
    ch = (ch ^ (ch << 4));
    return ((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
}

public byte[] crc_calculate() {
    int[] pBuffer = new int[]{255,9,19,1,1,0,0,0,0,0,2,3,81,4,3};
    int crcEx = 0;
    int clength = pBuffer.length;
    int[] X25_INIT_CRC = new int[]{255,255};
    byte[] crcTmp = new byte[]{(byte)255,(byte)255};
    int crcTmp2 = ((crcTmp[0] & 0xff) << 8) | (crcTmp[1] & 0xff);
    crcTmp[0] = (byte)crcTmp2;
    crcTmp[1] = (byte)(crcTmp2 >> 8);
    System.out.println("pre-calculation: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2);
    if (clength < 1) {
        System.out.println("clength < 1");
        return crcTmp;
    }
    for (int i=1; i<clength; i++) {
        crcTmp2 = crc_accumulate(pBuffer[i], crcTmp2);
    }
    crcTmp[0] = (byte)crcTmp2;
    crcTmp[1] = (byte)(crcTmp2 >> 8);
    System.out.print("crc calculation: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2);
    if (crcEx != -1) {
        System.out.println(" extraCRC["+crcEx+"]="+extraCRC[crcEx]);
        crcTmp2 = crc_accumulate(extraCRC[crcEx], crcTmp2);
        crcTmp[0] = (byte)crcTmp2;
        crcTmp[1] = (byte)(crcTmp2 >> 8);
        System.out.println("with extra CRC: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2+"\n\n");
    }
    return crcTmp;
}
This is the original C# file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ArdupilotMega
{
    class MavlinkCRC
    {
        const int X25_INIT_CRC = 0xffff;
        const int X25_VALIDATE_CRC = 0xf0b8;

        public static ushort crc_accumulate(byte b, ushort crc)
        {
            unchecked
            {
                byte ch = (byte)(b ^ (byte)(crc & 0x00ff));
                ch = (byte)(ch ^ (ch << 4));
                return (ushort)((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
            }
        }

        public static ushort crc_calculate(byte[] pBuffer, int length)
        {
            if (length < 1)
            {
                return 0xffff;
            }
            // For a "message" of length bytes contained in the unsigned char array
            // pointed to by pBuffer, calculate the CRC
            // crcCalculate(unsigned char* pBuffer, int length, unsigned short* checkConst) < not needed
            ushort crcTmp;
            int i;
            crcTmp = X25_INIT_CRC;
            for (i = 1; i < length; i++) // skips header U
            {
                crcTmp = crc_accumulate(pBuffer[i], crcTmp);
                //Console.WriteLine(crcTmp + " " + pBuffer[i] + " " + length);
            }
            return (crcTmp);
        }
    }
}
I'm quite sure that the problem in my port lies between lines 1 and 5. I expect to get an output of 0x94 0x88, but instead the program outputs 0x2D 0xF4.
I would greatly appreciate it if someone could show me where I've gone wrong.
Thanks for any help,
Cameron

Alright, for starters let's clean up the C# code a little:
const int X25_INIT_CRC = 0xffff;

public static ushort crc_accumulate(byte b, ushort crc)
{
    unchecked
    {
        byte ch = (byte)(b ^ (byte)(crc & 0x00ff));
        ch = (byte)(ch ^ (ch << 4));
        return (ushort)((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
    }
}

public static ushort crc_calculate(byte[] pBuffer)
{
    ushort crcTmp = X25_INIT_CRC;
    for (int i = 1; i < pBuffer.Length; i++) // skips header U
        crcTmp = crc_accumulate(pBuffer[i], crcTmp);
    return crcTmp;
}
Now the biggest problem is that Java has no unsigned numeric types, so you have to work around that by using the next larger numeric type in place of ushort and byte and masking off the high bits as needed. You can also simply drop unchecked, because Java has no overflow checking anyway. The end result is something like this:
public static final int X25_INIT_CRC = 0xffff;

public static int crc_accumulate(short b, int crc) {
    short ch = (short)((b ^ crc) & 0xff);
    ch = (short)((ch ^ (ch << 4)) & 0xff);
    return ((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4)) & 0xffff;
}

public static int crc_calculate(short[] pBuffer) {
    int crcTmp = X25_INIT_CRC;
    for (int i = 1; i < pBuffer.length; i++) // skips header U
        crcTmp = crc_accumulate(pBuffer[i], crcTmp);
    return crcTmp;
}
For the input in your question ({ 255, 9, 19, 1, 1, 0, 0, 0, 0, 0, 2, 3, 81, 4, 3 }) the original C#, cleaned up C# and Java all produce 0xfc7e.
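To see the corrected logic end to end, here is a self-contained runnable sketch of the answer's Java version (the class name MavlinkCrc is mine, and I use int[] instead of short[] purely for convenience; the arithmetic is identical because everything is masked to 8 and 16 bits):

```java
public class MavlinkCrc {
    public static final int X25_INIT_CRC = 0xffff;

    public static int crc_accumulate(int b, int crc) {
        int ch = (b ^ crc) & 0xff;    // low byte of crc XOR data byte
        ch = (ch ^ (ch << 4)) & 0xff; // the mask plays the role of C#'s byte cast
        return ((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4)) & 0xffff;
    }

    public static int crc_calculate(int[] pBuffer) {
        int crcTmp = X25_INIT_CRC;
        for (int i = 1; i < pBuffer.length; i++) // skips header byte
            crcTmp = crc_accumulate(pBuffer[i], crcTmp);
        return crcTmp;
    }

    public static void main(String[] args) {
        int[] buf = {255, 9, 19, 1, 1, 0, 0, 0, 0, 0, 2, 3, 81, 4, 3};
        System.out.printf("0x%04x%n", crc_calculate(buf)); // prints 0xfc7e
    }
}
```

Running this on the question's buffer reproduces the 0xfc7e mentioned above; the low byte 0x7e would go out first if you split it the way the question's crcTmp array does.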


Fast byte copy C++11

I need to convert a C# app that makes extensive use of byte manipulation.
An example:
public abstract class BinRecord
{
    public static int version => 1;
    public virtual int LENGTH => 1 + 7 + 8 + 2 + 1; // 19

    public char type;
    public ulong timestamp; // 7 byte
    public double p;
    public ushort size;
    public char callbackType;

    public virtual void FillBytes(byte[] bytes)
    {
        bytes[0] = (byte)type;
        var t = BitConverter.GetBytes(timestamp);
        Buffer.BlockCopy(t, 0, bytes, 1, 7);
        Buffer.BlockCopy(BitConverter.GetBytes(p), 0, bytes, 8, 8);
        Buffer.BlockCopy(BitConverter.GetBytes(size), 0, bytes, 16, 2);
        bytes[18] = (byte)callbackType;
    }
}
Basically BitConverter and Buffer.BlockCopy called 100s times per sec.
There are several classes that inherit from the base class above doing more specific tasks. For example:
public class SpecRecord : BinRecord
{
    public override int LENGTH => base.LENGTH + 2;

    public ushort num;

    public SpecRecord() { }
    public SpecRecord(ushort num)
    {
        this.num = num;
    }

    public override void FillBytes(byte[] bytes)
    {
        var idx = base.LENGTH;
        base.FillBytes(bytes);
        Buffer.BlockCopy(BitConverter.GetBytes(num), 0, bytes, idx + 0, 2);
    }
}
What approach in C++ should I look into?
Best option, in my opinion, is to actually go to C - use memcpy to copy over the bytes of any object.
Your above code would then be re-written as follows:
void FillBytes(uint8_t* bytes)
{
    bytes[0] = (uint8_t)type;
    memcpy(bytes + 1, &timestamp, sizeof(uint64_t) - 1); // low 7 bytes on little-endian
    memcpy(bytes + 8, &p, sizeof(double));
    memcpy(bytes + 16, &size, sizeof(uint16_t));
    bytes[18] = (uint8_t)callbackType;
}
Here, I use uint8_t, uint16_t, and uint64_t as replacements for the byte, ushort, and ulong types.
Keep in mind, your timestamp copy is not portable to a big-endian CPU - it will cut off the lowest byte rather than the highest. Solving that would require copying in each byte manually, like so:
// Copy a 7-byte little-endian timestamp into the buffer.
bytes[1] = (timestamp >> 0) & 0xFF;
bytes[2] = (timestamp >> 8) & 0xFF;
bytes[3] = (timestamp >> 16) & 0xFF;
bytes[4] = (timestamp >> 24) & 0xFF;
bytes[5] = (timestamp >> 32) & 0xFF;
bytes[6] = (timestamp >> 40) & 0xFF;
bytes[7] = (timestamp >> 48) & 0xFF;
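The same shift-and-mask idea works in any language without memcpy. As an illustrative sketch (the class and method names here are mine, not from the original app), this Java version writes a 7-byte little-endian timestamp portably:

```java
public class RecordPacker {
    // Write the low 7 bytes of timestamp into buf at offset, least significant
    // byte first, mirroring the little-endian layout BitConverter produces in C#.
    public static void putTimestamp7(byte[] buf, int offset, long timestamp) {
        for (int i = 0; i < 7; i++) {
            buf[offset + i] = (byte) ((timestamp >> (8 * i)) & 0xFF);
        }
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        putTimestamp7(buf, 1, 0x01020304050607L);
        // buf[1..7] is now 07 06 05 04 03 02 01
        for (int i = 1; i <= 7; i++) System.out.printf("%02x ", buf[i]);
        System.out.println();
    }
}
```

Because the bytes are extracted arithmetically rather than by aliasing memory, the result is identical on big-endian and little-endian hosts.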

Porting Bitwise Operations from C# To C

I need some help porting this C# code over to C. It works in C# just fine, but I'm getting the wrong return value in C. Should I be breaking the bit shifting down into separate lines? I thought I had an issue with the data types, but I think I have the right ones. Here is the working C# code, which returns 0x03046ABE:
UInt32 goHigh(UInt32 x) { return (UInt32)(x & 0xFFFF0000); }
UInt32 goLow(UInt32 x) { return (UInt32)(x & 0xFFFF); }
UInt32 magic(UInt32 pass)
{
    UInt32 key = pass;
    UInt16 num = 0x0563;
    key = (goLow(key) << 16) | (UInt16)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key; //returns 0x03046ABE
}
magic(0x01020304);
This was the incorrect C code that I'm trying to get working
unsigned long goHigh(unsigned long x) {
    return (unsigned long)(x & 0xFFFF0000);
}
unsigned long goLow(unsigned long x) {
    return (unsigned long)(x & 0xFFFF);
}
unsigned long magic(unsigned long pass) {
    unsigned long key = pass;
    unsigned int num = 0x0563;
    key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key;
}
magic(0x01020304); //returns 0xb8c6a8e
Most likely problem is here:
key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
^^^^^^^^^^^^
which you expect is 16-bit. It may be larger on different machines. Same with unsigned long, which may be 64-bit instead of 32, as you expect.
To be sure, use uint32_t & uint16_t. You have to #include <stdint.h> to be able to use them.
long and int are not necessarily the sizes you expect on your platform (you assumed 32 and 16 bits respectively).
Replace the primitive types with the actual sizes and it will be the same output. I've also removed redundant casts.
These types can be found in stdint.h
#include <stdint.h>
uint32_t goHigh(uint32_t x) {
    return (x & 0xFFFF0000);
}
uint32_t goLow(uint32_t x) {
    return (x & 0xFFFF);
}
uint32_t magic(uint32_t pass) {
    uint32_t key = pass;
    uint32_t num = 0x0563;
    key = (goLow(key) << 16) | (uint16_t)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key;
}
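Java has the same pitfall in the other direction: there are no unsigned 16-bit operations, so a fixed-width rotate has to be masked explicitly, just as the uint16_t cast does above. A small illustrative sketch (the class and method names are mine):

```java
public class Rotate16 {
    // Rotate a 16-bit value left by n bits, keeping the result in 16 bits.
    // Without the & 0xFFFF masks, Java's int promotion would let high bits
    // leak in, exactly like the unsigned int / unsigned long mismatch in C.
    public static int rol16(int x, int n) {
        x &= 0xFFFF;
        return ((x << n) | (x >> (16 - n))) & 0xFFFF;
    }

    public static void main(String[] args) {
        // 0x0563 rotated left by 13 (equivalently, right by 3) is 0x60AC.
        System.out.printf("0x%04X%n", rol16(0x0563, 13));
    }
}
```

The `(num >> 3) | (num << 13)` in the question is exactly this rotate, so the fix in both C and Java is the same: force the intermediate back into 16 bits before using it.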

C# simpler way of converting an integer to a string in binary form without the Convert.ToString method?

I have a code that turns an integer to its binary representation, but I was wondering if there is a more simple or "easier" way of doing so. I know that there is a built-in method in C# that does this for you automatically, but that is not what I want to use.
This version loops over each of the 32 bit positions while writing ones and zeros, and uses TrimStart to remove leading zeroes.
For example, converting the integer 10 to its string representation in
binary as "1010".
static string IntToBinary(int n)
{
    char[] b = new char[32];
    int pos = 31;
    int i = 0;
    while (i < 32) // Loops over each of the 32 bit positions while writing ones and zeros.
    {
        if ((n & (1 << i)) != 0)
        {
            b[pos] = '1';
        }
        else
        {
            b[pos] = '0';
        }
        pos--;
        i++;
    }
    return new string(b).TrimStart('0'); // TrimStart removes leading zeroes.
}

static void Main()
{
    Console.WriteLine(IntToBinary(300));
}
I suppose you could use a nibble lookup table:
static string[] nibbles = {
    "0000", "0001", "0010", "0011",
    "0100", "0101", "0110", "0111",
    "1000", "1001", "1010", "1011",
    "1100", "1101", "1110", "1111"
};

public static string IntToBinary(int n)
{
    // Note the parentheses: without them, TrimStart would apply only to
    // the last nibble rather than to the whole concatenated string.
    return (
        nibbles[(n >> 28) & 0xF] +
        nibbles[(n >> 24) & 0xF] +
        nibbles[(n >> 20) & 0xF] +
        nibbles[(n >> 16) & 0xF] +
        nibbles[(n >> 12) & 0xF] +
        nibbles[(n >> 8) & 0xF] +
        nibbles[(n >> 4) & 0xF] +
        nibbles[(n >> 0) & 0xF]
    ).TrimStart('0');
}
Here is a simple LINQ implementation:
static string IntToBinary(int n)
{
    return string.Concat(Enumerable.Range(0, 32)
        .Select(i => (n & (1 << (31 - i))) != 0 ? '1' : '0')
        .SkipWhile(ch => ch == '0'));
}
Another one using a for loop:
static string IntToBinary(int n)
{
    var chars = new char[32];
    int start = chars.Length;
    for (uint bits = (uint)n; bits != 0; bits >>= 1)
        chars[--start] = (char)('0' + (bits & 1));
    return new string(chars, start, chars.Length - start);
}
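For comparison, the same loop idea translates directly to Java; this is a sketch (class and method names are mine), and note that in Java the built-in equivalent would simply be Integer.toBinaryString:

```java
public class IntToBin {
    // Build the binary string from the highest set bit down, no library call.
    public static String intToBinary(int n) {
        if (n == 0) return "0"; // the C# variants return "" here; pick your convention
        char[] chars = new char[32];
        int start = chars.length;
        // >>> is Java's unsigned right shift, standing in for the C# (uint) cast.
        for (int bits = n; bits != 0; bits >>>= 1)
            chars[--start] = (char) ('0' + (bits & 1));
        return new String(chars, start, chars.length - start);
    }

    public static void main(String[] args) {
        System.out.println(intToBinary(10));  // 1010
        System.out.println(intToBinary(300)); // 100101100
    }
}
```

The unsigned shift matters for negative inputs: with plain >> the sign bit would be replicated and the loop would never terminate.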

Modbus communication "CRC function" C#

I'm trying to make a Modbus app, but I need a little help with the automatic CRC function.
The problem is: when I try to read CRC[1] and CRC[0] to put the contained value at the end of my hex string, I get a compilation error.
I use this function for the crc:
#region CRC Computation
// Function expects a Modbus message of any length, as well as a 2-byte CRC
// array in which to return the CRC values.
void GetCRC(byte[] comBuffer, ref byte[] CRC)
{
    ushort CRCFull = 0xFFFF;
    byte CRCHigh = 0xFF, CRCLow = 0xFF;
    char CRCLSB;

    for (int i = 0; i < (comBuffer.Length) - 2; i++)
    {
        CRCFull = (ushort)(CRCFull ^ comBuffer[i]);
        for (int j = 0; j < 8; j++)
        {
            CRCLSB = (char)(CRCFull & 0x0001);
            CRCFull = (ushort)((CRCFull >> 1) & 0x7FFF);
            if (CRCLSB == 1)
                CRCFull = (ushort)(CRCFull ^ 0xA001);
        }
    }
    CRC[1] = CRCHigh = (byte)((CRCFull >> 8) & 0xFF);
    CRC[0] = CRCLow = (byte)(CRCFull & 0xFF);
}
#endregion
And I want to append CRC[1] and CRC[0] to the end of my message. How can I use them in the following code?
comPort.Write(newMsg, 0, newMsg.Length, CRC[1], CRC[0]);
case TransmissionType.Hex:
    try
    {
        //convert the message to byte array
        byte[] newMsg = HexToByte(msg);
        //send the message to the port
        comPort.Write(newMsg, 0, newMsg.Length, CRC[1], CRC[0]);
        //convert back to hex and display
        DisplayData(MessageType.Outgoing, ByteToHex(newMsg) + "\n");
    }
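The CRC math itself ports to any language. Here is a self-contained Java sketch of the same Modbus CRC-16 (initial value 0xFFFF, reflected polynomial 0xA001), computed over a whole buffer rather than Length - 2 as the snippet above does; the class name is mine:

```java
public class ModbusCrc {
    // Bitwise CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001.
    public static int crc16(byte[] data) {
        int crc = 0xFFFF;
        for (byte b : data) {
            crc ^= (b & 0xFF);           // XOR byte into the low half
            for (int j = 0; j < 8; j++) {
                int lsb = crc & 1;       // remember the bit being shifted out
                crc >>= 1;
                if (lsb == 1) crc ^= 0xA001;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        // Standard check value: CRC-16/MODBUS of the ASCII bytes "123456789" is 0x4B37.
        int crc = crc16("123456789".getBytes());
        System.out.printf("crc=0x%04X lo=0x%02X hi=0x%02X%n",
                crc, crc & 0xFF, (crc >> 8) & 0xFF);
    }
}
```

On the wire, Modbus RTU transmits the low CRC byte first, then the high byte, which is why the code above fills CRC[0] with the low byte and CRC[1] with the high byte.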

C# CRC implementation

I am trying to integrate a serial-port device into my application, which needs CRC-CCITT validation for the bytes that I send to it.
I'm kinda new into managing byte packets, and need help for this.
It uses this formula for making the CRC calculus:
[CRC-CCITT P(X) = X^16 + X^12 + X^8 + 1]
So for example for the packet: 0xFC 0x05 0x11, the CRC is 0x5627.
Then I send this packet to the device: 0xFC 0x05 0x11 0x27 0x56
Also, packet lengths will vary from 5 to 255 (including the CRC check bytes).
I don't know how to implement this, so any idea/suggestions will be welcome.
Hope I made myself clear,
Thanks in Advance.
EDIT:
here is the specification of what I need to do:
Standard CRC-CCITT is x^16 + x^12 + x^5 + 1. I wrote the one at http://www.sanity-free.com/133/crc_16_ccitt_in_csharp.html. If I have time I'll see if I can't modify it to run with the x^16 + x^12 + x^8 + 1 poly.
EDIT:
here you go:
public class Crc16CcittKermit {
private static ushort[] table = {
0x0000, 0x1189, 0x2312, 0x329B, 0x4624, 0x57AD, 0x6536, 0x74BF,
0x8C48, 0x9DC1, 0xAF5A, 0xBED3, 0xCA6C, 0xDBE5, 0xE97E, 0xF8F7,
0x1081, 0x0108, 0x3393, 0x221A, 0x56A5, 0x472C, 0x75B7, 0x643E,
0x9CC9, 0x8D40, 0xBFDB, 0xAE52, 0xDAED, 0xCB64, 0xF9FF, 0xE876,
0x2102, 0x308B, 0x0210, 0x1399, 0x6726, 0x76AF, 0x4434, 0x55BD,
0xAD4A, 0xBCC3, 0x8E58, 0x9FD1, 0xEB6E, 0xFAE7, 0xC87C, 0xD9F5,
0x3183, 0x200A, 0x1291, 0x0318, 0x77A7, 0x662E, 0x54B5, 0x453C,
0xBDCB, 0xAC42, 0x9ED9, 0x8F50, 0xFBEF, 0xEA66, 0xD8FD, 0xC974,
0x4204, 0x538D, 0x6116, 0x709F, 0x0420, 0x15A9, 0x2732, 0x36BB,
0xCE4C, 0xDFC5, 0xED5E, 0xFCD7, 0x8868, 0x99E1, 0xAB7A, 0xBAF3,
0x5285, 0x430C, 0x7197, 0x601E, 0x14A1, 0x0528, 0x37B3, 0x263A,
0xDECD, 0xCF44, 0xFDDF, 0xEC56, 0x98E9, 0x8960, 0xBBFB, 0xAA72,
0x6306, 0x728F, 0x4014, 0x519D, 0x2522, 0x34AB, 0x0630, 0x17B9,
0xEF4E, 0xFEC7, 0xCC5C, 0xDDD5, 0xA96A, 0xB8E3, 0x8A78, 0x9BF1,
0x7387, 0x620E, 0x5095, 0x411C, 0x35A3, 0x242A, 0x16B1, 0x0738,
0xFFCF, 0xEE46, 0xDCDD, 0xCD54, 0xB9EB, 0xA862, 0x9AF9, 0x8B70,
0x8408, 0x9581, 0xA71A, 0xB693, 0xC22C, 0xD3A5, 0xE13E, 0xF0B7,
0x0840, 0x19C9, 0x2B52, 0x3ADB, 0x4E64, 0x5FED, 0x6D76, 0x7CFF,
0x9489, 0x8500, 0xB79B, 0xA612, 0xD2AD, 0xC324, 0xF1BF, 0xE036,
0x18C1, 0x0948, 0x3BD3, 0x2A5A, 0x5EE5, 0x4F6C, 0x7DF7, 0x6C7E,
0xA50A, 0xB483, 0x8618, 0x9791, 0xE32E, 0xF2A7, 0xC03C, 0xD1B5,
0x2942, 0x38CB, 0x0A50, 0x1BD9, 0x6F66, 0x7EEF, 0x4C74, 0x5DFD,
0xB58B, 0xA402, 0x9699, 0x8710, 0xF3AF, 0xE226, 0xD0BD, 0xC134,
0x39C3, 0x284A, 0x1AD1, 0x0B58, 0x7FE7, 0x6E6E, 0x5CF5, 0x4D7C,
0xC60C, 0xD785, 0xE51E, 0xF497, 0x8028, 0x91A1, 0xA33A, 0xB2B3,
0x4A44, 0x5BCD, 0x6956, 0x78DF, 0x0C60, 0x1DE9, 0x2F72, 0x3EFB,
0xD68D, 0xC704, 0xF59F, 0xE416, 0x90A9, 0x8120, 0xB3BB, 0xA232,
0x5AC5, 0x4B4C, 0x79D7, 0x685E, 0x1CE1, 0x0D68, 0x3FF3, 0x2E7A,
0xE70E, 0xF687, 0xC41C, 0xD595, 0xA12A, 0xB0A3, 0x8238, 0x93B1,
0x6B46, 0x7ACF, 0x4854, 0x59DD, 0x2D62, 0x3CEB, 0x0E70, 0x1FF9,
0xF78F, 0xE606, 0xD49D, 0xC514, 0xB1AB, 0xA022, 0x92B9, 0x8330,
0x7BC7, 0x6A4E, 0x58D5, 0x495C, 0x3DE3, 0x2C6A, 0x1EF1, 0x0F78
};
public static ushort ComputeChecksum( params byte[] buffer ) {
if ( buffer == null ) throw new ArgumentNullException( );
ushort crc = 0;
for ( int i = 0; i < buffer.Length; ++i ) {
crc = (ushort)( ( crc >> 8 ) ^ table[( crc ^ buffer[i] ) & 0xff] );
}
return crc;
}
public static byte[] ComputeChecksumBytes( params byte[] buffer ) {
return BitConverter.GetBytes( ComputeChecksum( buffer ) );
}
}
sample:
ushort crc = Crc16CcittKermit.ComputeChecksum( 0xFC, 0x05, 0x11 );
byte[] crcBuffer = Crc16CcittKermit.ComputeChecksumBytes( 0xFC, 0x05, 0x11 );
// crc = 0x5627
// crcBuffer = { 0x27, 0x56 }
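The table above is the standard CRC-16/KERMIT table, and the same checksum can be computed bit by bit without a table, which makes for an easy cross-check. A Java sketch (the class name is mine):

```java
public class Crc16Kermit {
    // Bitwise CRC-16/KERMIT: init 0x0000, reflected polynomial 0x8408.
    public static int compute(int... bytes) {
        int crc = 0;
        for (int b : bytes) {
            crc ^= (b & 0xFF);
            for (int j = 0; j < 8; j++) {
                int lsb = crc & 1;
                crc >>= 1;
                if (lsb == 1) crc ^= 0x8408;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        // Matches the question's example packet: 0xFC 0x05 0x11 -> 0x5627.
        System.out.printf("0x%04X%n", compute(0xFC, 0x05, 0x11));
    }
}
```

Since the result 0x5627 is sent low byte first (0x27 0x56), this agrees with the crcBuffer shown in the sample above.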
Have you tried Googling for an example? There are many of them.
Example 1: http://tomkaminski.com/crc32-hashalgorithm-c-net
Example 2: http://www.sanity-free.com/12/crc32_implementation_in_csharp.html
You also have native MD5 support in .Net through System.Security.Cryptography.MD5CryptoServiceProvider.
EDIT:
If you are looking for an 8-bit algorithm: http://www.codeproject.com/KB/cs/csRedundancyChckAlgorithm.aspx
And 16-bit: http://www.sanity-free.com/133/crc_16_ccitt_in_csharp.html
LOL, I've encountered exactly the same STATUS REQUEST sequence; I'm currently developing software for use with the CashCode Bill Validator :). This is CRC16-CCITT with the reversed polynomial 0x8408 (BDPConstants.Polynominal in the code). Here's the code that worked for me:
// TableCRC16Size is 256 of course, don't forget to set it somewhere
protected ushort[] TableCRC16 = new ushort[BDPConstants.TableCRC16Size];

protected void InitCRC16Table()
{
    for (ushort i = 0; i < BDPConstants.TableCRC16Size; ++i)
    {
        ushort CRC = 0;
        ushort c = i;
        for (int j = 0; j < 8; ++j)
        {
            if (((CRC ^ c) & 0x0001) > 0)
                CRC = (ushort)((CRC >> 1) ^ BDPConstants.Polynominal);
            else
                CRC = (ushort)(CRC >> 1);
            c = (ushort)(c >> 1);
        }
        TableCRC16[i] = CRC;
    }
}

protected ushort CalcCRC16(byte[] aData)
{
    ushort CRC = 0;
    for (int i = 0; i < aData.Length; ++i)
        CRC = (ushort)(TableCRC16[(CRC ^ aData[i]) & 0xFF] ^ (CRC >> 8));
    return CRC;
}
Initialize the table somewhere (e.g. in the form constructor):
InitCRC16Table();
then use it in your code like this (you can use a List of bytes instead of an array; it's more convenient for packing byte data into the 'packet' for sending):
ushort CRC = CalcCRC16(aData);
// You need to split your CRC into two bytes of course:
byte CRCHW = (byte)(CRC / 256); // that's your 0x56
byte CRCLW = (byte)CRC;         // that's your 0x27
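For reference, the table-generation approach above ports directly to Java. This sketch (names are mine) builds the 256-entry table with the same 0x8408 loop and should agree with a bitwise computation:

```java
public class Crc16Table {
    private static final int[] TABLE = new int[256];
    static {
        // Same generation loop as InitCRC16Table, reversed polynomial 0x8408.
        for (int i = 0; i < 256; i++) {
            int crc = 0, c = i;
            for (int j = 0; j < 8; j++) {
                if (((crc ^ c) & 1) != 0) crc = (crc >> 1) ^ 0x8408;
                else crc >>= 1;
                c >>= 1;
            }
            TABLE[i] = crc;
        }
    }

    // Table-driven CRC, one lookup per byte instead of eight shifts.
    public static int compute(byte[] data) {
        int crc = 0;
        for (byte b : data)
            crc = TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8);
        return crc & 0xFFFF;
    }

    public static void main(String[] args) {
        byte[] msg = {(byte) 0xFC, 0x05, 0x11};
        int crc = compute(msg);
        // Split like the answer above: high and low bytes.
        System.out.printf("crc=0x%04X hi=0x%02X lo=0x%02X%n", crc, crc >> 8, crc & 0xFF);
    }
}
```

For the 0xFC 0x05 0x11 packet from the question this yields 0x5627, i.e. the same 0x56/0x27 split shown above.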
It works and does not need a table:
/// <summary>
/// Generates the CRC16.
/// CRC-CCITT, polynomial 0x1021 = x^16 + x^12 + x^5 + 1
/// </summary>
/// <param name="c">The input bytes.</param>
/// <param name="nByte">The number of bytes to process.</param>
/// <returns>The 16-bit CRC.</returns>
public ushort GenCrc16(byte[] c, int nByte)
{
    ushort Polynominal = 0x1021;
    ushort InitValue = 0x0;

    ushort i, j, index = 0;
    ushort CRC = InitValue;
    ushort Remainder, tmp, short_c;

    for (i = 0; i < nByte; i++)
    {
        short_c = (ushort)(0x00ff & (ushort)c[index]);
        tmp = (ushort)((CRC >> 8) ^ short_c);
        Remainder = (ushort)(tmp << 8);
        for (j = 0; j < 8; j++)
        {
            if ((Remainder & 0x8000) != 0)
            {
                Remainder = (ushort)((Remainder << 1) ^ Polynominal);
            }
            else
            {
                Remainder = (ushort)(Remainder << 1);
            }
        }
        CRC = (ushort)((CRC << 8) ^ Remainder);
        index++;
    }
    return CRC;
}
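With polynomial 0x1021 and initial value 0, the routine above is the MSB-first form usually catalogued as CRC-16/XMODEM. The more compact textbook formulation, as a Java sketch (class name mine), gives the same results:

```java
public class Crc16Xmodem {
    // MSB-first CRC-16: polynomial 0x1021, initial value 0x0000.
    public static int compute(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;  // XOR byte into the high half
            for (int j = 0; j < 8; j++) {
                if ((crc & 0x8000) != 0) crc = ((crc << 1) ^ 0x1021) & 0xFFFF;
                else crc = (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        // Standard check value: CRC-16/XMODEM of the ASCII bytes "123456789" is 0x31C3.
        System.out.printf("0x%04X%n", compute("123456789".getBytes()));
    }
}
```

The per-byte "remainder" dance in GenCrc16 is just this same computation factored the way a table-driven version would do it, so both produce identical CRCs.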
You are actually using CRC-XMODEM LSB-reversed (with the 0x8408 coefficient). The C# code for this calculation is:
// int_array, int_crc_byte_a and int_crc_byte_b are class fields.
public void crc_bytes(int[] int_input)
{
    int_array = int_input;
    int int_crc = 0x0; // or 0xFFFF;
    int int_lsb;
    for (int int_i = 0; int_i < int_array.Length; int_i++)
    {
        int_crc = int_crc ^ int_array[int_i];
        for (int int_j = 0; int_j < 8; int_j++)
        {
            int_lsb = int_crc & 0x0001; // mask off the LSB
            int_crc = int_crc >> 1;
            int_crc = int_crc & 0x7FFF;
            if (int_lsb == 1)
                int_crc = int_crc ^ 0x8408;
        }
    }
    int_crc_byte_a = int_crc & 0x00FF;
    int_crc_byte_b = (int_crc >> 8) & 0x00FF;
}
Read more (or download project):
http://www.cirvirlab.com/index.php/c-sharp-code-examples/141-c-sharp-crc-computation.html
