System.OverflowException computing CRC-16 in C#

I'm trying to use the code provided in this answer but I get a System.OverflowException on the line byte index = (byte)(crc ^ bytes[i]); This happens on the next iteration after the first non-zero byte. I'm not sure what to check.
Thanks in advance.
SharpDevelop Version : 5.1.0.5134-RC-d5052dc5
.NET Version : 4.6.00079
OS Version : Microsoft Windows NT 6.3.9600.0

It may be that you are building with arithmetic overflow checking enabled, while the answer assumes that it is not. Overflow checking is disabled by default, so it's not uncommon to see that assumption made.
In the code in question:
public static ushort ComputeChecksum(byte[] bytes)
{
    ushort crc = 0;
    for (int i = 0; i < bytes.Length; ++i)
    {
        byte index = (byte)(crc ^ bytes[i]);
        crc = (ushort)((crc >> 8) ^ table[index]);
    }
    return crc;
}
crc is an unsigned short while index is a byte, so (crc ^ bytes[i]) can clearly be larger than 255, which makes the narrowing conversion to byte throw in a checked context.
If I change the line to be explicitly unchecked:
byte index = unchecked((byte)(crc ^ bytes[i]));
Then the overflow no longer occurs.
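For reference, here is the whole method with the fix applied (a sketch that assumes the same table lookup array as the linked answer):
public static ushort ComputeChecksum(byte[] bytes)
{
    ushort crc = 0;
    for (int i = 0; i < bytes.Length; ++i)
    {
        // crc ^ bytes[i] is evaluated as an int and can exceed 255;
        // unchecked makes the intentional truncation to a byte explicit.
        byte index = unchecked((byte)(crc ^ bytes[i]));
        crc = (ushort)((crc >> 8) ^ table[index]);
    }
    return crc;
}
This works regardless of whether the project is compiled with overflow checking enabled.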

Related

Pascal to C# conversion

I am trying to convert this Pascal code into C# in order to communicate with a peripheral device attached to a COM port. This piece of code should calculate the control byte; however, I'm not getting the right hex value, so I'm wondering whether I'm converting the code the right way.
Pascal:
begin
  check := 255;
  for i := 3 to length(sequence)-4 do
    check := check xor byte(sequence[i]);
end;
C#:
int check = 255;
for (int x = 3; x < (sequence.Length - 4); x++)
{
    check = check ^ (byte)(sequence[x]);
}
Pascal function:
{ *** conversion of number into string 'hex' *** }
function word_to_hex (w: word) : string;
var
  i : integer;
  s : string;
  b : byte;
  c : char;
begin
  s := '';
  for i := 0 to 3 do
  begin
    b := (hi(w) shr 4) and 15;
    case b of
      0..9 : c := char(b+$30);
      10..15 : c := char(b+$41-10);
    end;
    s := s + c;
    w := w shl 4;
  end;
  word_to_hex := s;
end;
C# Equivalent:
public string ControlByte(string check)
{
    string s = "";
    byte b;
    char c = '\0';
    //shift = check >> 4 & 15;
    for (int x = 0; x <= 3; x++)
    {
        b = (byte)((Convert.ToInt32(check) >> 4) & 15);
        if (b >= 0 && b <= 9)
        {
            c = (char)(b + 0x30);
        }
        else if (b >= 10 && b <= 15)
        {
            c = (char)(b + 0x41 - 10);
        }
        s = s + c;
        check = (Convert.ToInt32(check) << 4).ToString();
    }
    return s;
}
And last pascal:
function byte_to_hex (b:byte) : string;
begin
byte_to_hex := copy(word_to_hex(word(b)),3,2);
end;
which I am not sure about: how does it substring the result from the function? So please let me know if there is something wrong with the code conversion and whether I need to convert the function result into bytes. I appreciate your help, UF.
Further info EDIT: Initially I send a string sequence containing the command and the information that the printer is supposed to print. Since every sequence has a unique control byte (in hex), I have to calculate it from the sequence (sequence = "P1;1$l201PrinterPrinterPrinter1B/100.00/100.00/0/\"), which is what the code above does. According to POSNET: "cc – control byte, encoded as 2 HEX digits (EXOR of all characters after ESC P to this byte with #255 initial quantity), according to the following algorithm in PASCAL language" (see the first code block). The check number calculated in that loop, which constitutes the control byte, should then be recoded into two HEX characters (ASCII characters from the scope '0'..'9', 'A'..'F', 'a'..'f') using the byte_to_hex function (see the third code block), the {* conversion of byte into 2 characters *} routine (see the fifth code block).
The most obvious problem that I can see is that the Pascal code operates on 1-based, 8-bit encoded strings, while the C# code operates on 0-based, 16-bit encoded strings. To convert the Pascal/Delphi code that you use to C# you need to address that mismatch. Perhaps like this:
byte[] bytes = Encoding.Default.GetBytes(sequence);
int check = 255;
for (int i = 2; i < bytes.Length - 4; i++)
{
    check ^= bytes[i];
}
Now, in order to write this I've had to make quite a few assumptions, because you did not include anywhere near enough code in the question. Here's what I assumed:
The Pascal sequence variable is a 1-based 8 bit ANSI encoded Delphi AnsiString.
The Pascal check variable is a Delphi 32 bit signed Integer.
The C# sequence variable is a C# string.
If any of those assumptions prove to be false, then the code above will be no good. For instance, perhaps the Pascal check is really Byte. In which case I guess the C# code should be:
byte[] bytes = Encoding.Default.GetBytes(sequence);
byte check = 255;
for (int i = 2; i < bytes.Length - 4; i++)
{
    check ^= bytes[i];
}
I hope that this persuades you of the importance of supplying complete information.
That's really all the meat of this question. The rest of the code concerns converting values to hex strings in C# code. That has been covered again and again here on Stack Overflow. For instance:
C# convert integer to hex and back again
How do you convert Byte Array to Hexadecimal String, and vice versa?
There are many many more such questions.
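For completeness, a minimal sketch of the byte_to_hex step in C# (the value 0x4A is just a hypothetical result of the loop above). The "X2" format specifier yields exactly the two uppercase hex digits that the Pascal copy(word_to_hex(word(b)), 3, 2) extracts, namely characters 3 and 4 of the 4-digit word string, which encode the low byte:
byte check = 0x4A;                 // hypothetical control byte from the loop
string cc = check.ToString("X2");  // "4A"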

8 byte array back to long (C# to c++)

I'm converting a long to an 8-slot byte array with C#
Byte[] Data = BitConverter.GetBytes(data.LongLength);
For example, if data.LongLength is 172085, I get the following array: { 53,160,2,0,0,0,0,0 }
But then after I send this to my C++ server I would like to convert it back to a long.
I tried this but without success...
long fileLenght = 0;
for (int i = 0; i < 8; ++i)
    fileLenght = (fileLenght << 8) + Data[i];
Whenever you send data across a network, you have to mind endianness.
In your case, it looks like the proper way to recreate the long from the byte array would be to reconstruct it from right to left:
long fileLength = 0;
for (int i = 7; i >= 0; i--)
    fileLength = (fileLength << 8) + Data[i];
But this will not always be the case. Depending on the hardware and operating system at the end points, and the network transfer protocols you use you may have data coming in big-endian or little-endian format, and the receiving end may be little-endian or big-endian.
From the documentation:
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
It looks like on your hardware the array is sent with its least significant bytes first. Therefore, you should start your loop from the end of the array:
int64_t fileLength = 0;
for (int i = 7; i >= 0; --i)
    fileLength = (fileLength << 8) + Data[i];
Demo. (prints 172085)
In order to achieve better compatibility with C# you should use a system-independent 64-bit integral type instead of long, i.e. int64_t.
If both ends have the same byte order (C# on virtually all platforms is little-endian, and your C++ side probably is too):
long long fileLength;
memcpy(&fileLength, Data, 8);
The optimizing compiler will almost certainly turn that into a single 64-bit move, so don't worry that it looks like an expensive function call.
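If you would rather not depend on the endianness of either machine, one common approach (a sketch, not part of the original question's code) is to fix the wire format to big-endian on the C# side with IPAddress.HostToNetworkOrder before sending, so the C++ side always decodes network byte order:
using System;
using System.Net;

long length = 172085;
// Swap to big-endian (network order) on little-endian hosts; no-op otherwise.
long networkOrder = IPAddress.HostToNetworkOrder(length);
byte[] data = BitConverter.GetBytes(networkOrder);  // big-endian on the wire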

How do I properly loop through and print bits of an Int, Long, Float, or BigInteger?

I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits in a int) I can figure out what the backfill is, and perhaps some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;
    while (bits != 0)
    {
        bits >>= 1;
        isBitSet = // somehow do an | operation on the first bit.
                   // I'm unsure if it's possible to handle different data types here
                   // or if unsafe code and a PTR is needed
        if (isBitSet)
            sb.Append("1");
        else
            sb.Append("0");
    }
}
Convert.ToString(56, 2).PadLeft(8, '0') returns "00111000"
This is for a byte; it works for an int as well, just increase the numbers (e.g., PadLeft(32, '0')).
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after), otherwise you'd miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
But a better option would be to use the static methods of the BitConverter class to get the actual bytes used to represent the number in memory into a byte array. The advantage (or disadvantage depending on your needs) of this method is that this reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while (bitPos < 8 * bytes.Length)
{
    int byteIndex = bitPos / 8;
    int offset = bitPos % 8;
    bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
    // isSet == true if the bit at bitPos is set, false otherwise
    bitPos++;
}
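Putting the pieces together, here is a minimal complete sketch of GetBits (my own assembly of the fragments above, using a fixed 32 iterations so the zero "backfill" from the unsigned right shift is visible rather than the loop stopping once the value reaches zero):
using System.Text;

static string GetBits(int num)
{
    StringBuilder sb = new StringBuilder();
    uint bits = (uint)num;  // reinterpret as unsigned so >> shifts in zeros
    for (int i = 0; i < 32; i++)
    {
        sb.Insert(0, (bits & 1) == 1 ? '1' : '0');  // test the bit before shifting
        bits >>= 1;
    }
    return sb.ToString();
}
GetBits(56) returns "00000000000000000000000000111000".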

are bytes handled in the same way in c# and java?

I'm moving a Java Android app to Windows Metro. This app makes heavy use of blobs and decoding (the blobs are encoded to take less space in the DB).
After copying the entire decoding code, the result is slightly different.
There are some parts where the code asks if the byte value is lower than 0. As I understand it, bytes in C# are always unsigned, so I don't understand why the result is not the same as in the Android app.
Here is a snippet.
for (int i = 0; i < length; i++) {
    s[six] = (byte) (blob[i] ^ pronpassword[ix]); // pronpass is a string password
    if (s[six] == 0) {
        s[six + 1] = (byte) '-';
        s[six] ^= 128;
        s[six] = (byte) PRON_MAP[(byte) s[six]];
        six++;
    } else {
        s[six] = (byte) PRON_MAP[(byte) s[six]];
    }
    six++;
    ix++;
    if (ix == plen)
        ix = 0;
}
Thanks!
In Java, byte is signed. There is actually no such thing as an unsigned byte in Java. It's equivalent to C#'s sbyte, so that's the type you should port it to.
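A minimal sketch of the difference (values chosen arbitrarily):
byte u = 200;                      // C# byte: unsigned, range 0..255
sbyte s = unchecked((sbyte)200);   // C# sbyte: signed, like Java's byte; -56
Console.WriteLine(u < 0);          // False: a byte can never be negative
Console.WriteLine(s < 0);          // True
So checks like if (b < 0) in the Java code only behave the same if b is declared as sbyte in the C# port.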

How does BitConverter.ToInt32 work?

Here is a method:
using System;

class Program
{
    static void Main(string[] args)
    {
        //
        // Create an array of four bytes.
        // ... Then convert it into an integer and unsigned integer.
        //
        byte[] array = new byte[4];
        array[0] = 1; // Lowest
        array[1] = 64;
        array[2] = 0;
        array[3] = 0; // Sign bit
        //
        // Use BitConverter to convert the bytes to an int and a uint.
        // ... The int and uint can have different values if the sign bit differs.
        //
        int result1 = BitConverter.ToInt32(array, 0); // Start at first index
        uint result2 = BitConverter.ToUInt32(array, 0); // First index
        Console.WriteLine(result1);
        Console.WriteLine(result2);
        Console.ReadLine();
    }
}
Output
16385
16385
I just want to know how this is happening.
The docs for BitConverter.ToInt32 actually have some pretty good examples. Assuming BitConverter.IsLittleEndian returns true, array[0] is the least significant byte, as you've shown... although array[3] isn't just the sign bit, it's the most significant byte which includes the sign bit (as bit 7) but the rest of the bits are for magnitude.
So in your case, the least significant byte is 1, and the next byte is 64 - so the result is:
( 1 * (1 << 0) ) + // Bottom 8 bits
(64 * (1 << 8) ) + // Next 8 bits, i.e. multiply by 256
( 0 * (1 << 16)) + // Next 8 bits, i.e. multiply by 65,536
( 0 * (1 << 24)) // Top 7 bits and sign bit, multiply by 16,777,216
which is 16385. If the sign bit were set, you'd need to consider the two cases differently, but in this case it's simple.
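To see the two cases diverge, flip the sign bit in the top byte (a hypothetical variation of your array):
byte[] array = { 1, 64, 0, 128 };                    // top byte 0x80 sets the sign bit
Console.WriteLine(BitConverter.ToInt32(array, 0));   // -2147467263
Console.WriteLine(BitConverter.ToUInt32(array, 0));  // 2147500033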
It converts as if it were a number in base 256. So in your case: 1 + 64*256 = 16385.
Looking at the .Net 4.0 Framework reference source, BitConverter does work how Jon's answer said, though it uses pointers (unsafe code) to work with the array.
However, if the second argument (i.e., startIndex) is divisible by 4 (as is the case in your example), the framework takes a shortcut. It takes a byte pointer to value[startIndex], casts it to an int pointer, then dereferences it. This trick works regardless of whether IsLittleEndian is true.
From a high level, this basically just means the code is pointing at 4 bytes in the byte array and categorically declaring, "the chunk of memory over there is an int!" (and then returning a copy of it). This makes perfect sense when you take into account that under the hood, an int is just a chunk of memory.
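A minimal sketch of that fast path (illustrative only, not the framework's exact source; requires compiling with /unsafe):
static unsafe int ReadInt32(byte[] value, int startIndex)
{
    fixed (byte* pbyte = &value[startIndex])
    {
        return *((int*)pbyte);  // reinterpret 4 bytes of memory as an int
    }
}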
Below is the source code of the framework's ToUInt32 method:
return (uint)ToInt32(value, startIndex);
array[0] = 1;  // Lowest, 0x01
array[1] = 64; // 0x40
array[2] = 0;  // 0x00
array[3] = 0;  // Sign bit, 0x00
If you combine the hex values, you get 0x00004001.
The MSDN documentation explains everything.
You can look for yourself - https://referencesource.microsoft.com/#mscorlib/system/bitconverter.cs,e8230d40857425ba
If the data is word-aligned, it will simply cast the memory pointer to an int32.
return *((int *) pbyte);
Otherwise, it uses bitwise logic from the byte memory pointer values.
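On a little-endian machine, that bitwise path is equivalent in spirit to assembling the value by hand (a sketch, not the framework's exact code):
int result = array[0]
           | (array[1] << 8)
           | (array[2] << 16)
           | (array[3] << 24);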
For those of you who are having trouble with little-endian and big-endian, I use the following wrapper functions to take care of it.
// Requires using System.Linq for Skip/Take/Reverse.
// These helpers interpret the input as big-endian on any machine.
public static Int16 ToInt16(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt16(data.Skip(offset).Take(2).Reverse().ToArray(), 0);
    return BitConverter.ToInt16(data, offset);
}

public static Int32 ToInt32(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt32(data.Skip(offset).Take(4).Reverse().ToArray(), 0);
    return BitConverter.ToInt32(data, offset);
}

public static Int64 ToInt64(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt64(data.Skip(offset).Take(8).Reverse().ToArray(), 0);
    return BitConverter.ToInt64(data, offset);
}
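A quick usage check, assuming the input bytes are big-endian (which is what these wrappers expect):
byte[] big = { 0x00, 0x00, 0x40, 0x01 };  // 16385 in big-endian order
int v = ToInt32(big, 0);                  // 16385 on any machine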
