Porting Bitwise Operations from C# To C

I need some help porting this C# code over to C. It works fine in C#, but I'm getting the wrong return value in C. Should I be breaking the bit shifting down into separate lines? I thought I had an issue with the data types, but I think I have the right ones. Here is the working C# code, which returns 0x03046ABE:
UInt32 goHigh(UInt32 x) { return (UInt32)(x & 0xFFFF0000); }
UInt32 goLow(UInt32 x) { return (UInt32)(x & 0xFFFF); }
UInt32 magic(UInt32 pass) {
    UInt32 key = pass;
    UInt16 num = 0x0563;
    key = (goLow(key) << 16) | (UInt16)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key; // returns 0x03046ABE
}
magic(0x01020304);
This is the incorrect C code that I'm trying to get working:
unsigned long goHigh(unsigned long x) {
    return (unsigned long)(x & 0xFFFF0000);
}
unsigned long goLow(unsigned long x) {
    return (unsigned long)(x & 0xFFFF);
}
unsigned long magic(unsigned long pass) {
    unsigned long key = pass;
    unsigned int num = 0x0563;
    key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key;
}
magic(0x01020304); //returns 0xb8c6a8e

The most likely problem is here:
key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
                           ^^^^^^^^^^^^^^
You expect unsigned int to be 16-bit, but it may be larger on other machines. The same goes for unsigned long, which may be 64-bit instead of the 32 you expect.
To be sure, use uint32_t & uint16_t. You have to #include <stdint.h> to be able to use them.

long and int are not the sizes you expect on your platform (you are assuming 32 and 16 bits respectively).
Replace the primitive types with fixed-width types and you will get the same output. I've also removed the redundant casts.
These types can be found in stdint.h:
#include <stdint.h>
uint32_t goHigh(uint32_t x) {
    return (x & 0xFFFF0000);
}
uint32_t goLow(uint32_t x) {
    return (x & 0xFFFF);
}
uint32_t magic(uint32_t pass) {
    uint32_t key = pass;
    uint32_t num = 0x0563;
    key = (goLow(key) << 16) | (uint16_t)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key;
}

Related

Store multiple chars in a long and recover them

The following code is used to pack multiple values into a long. The long is used as a key in a C++ unordered_map. It lets me use the map with a single number instead of a complex structure with ifs on each property, so that map lookups are as efficient as possible.
DWORD tmpNo = object->room->details->No;
unsigned char compactNo = tmpNo;
unsigned __int16 smallX = object->x;
unsigned __int16 smallY = object->y;
unsigned __int64 longCode = 0;
longCode = (item->code[0] << 56) |
           (item->code[1] << 48) |
           (item->code[2] << 40) |
           (compactNo << 32) |
           (smallX << 24) |
           (smallY << 8);
Am I using the | operator correctly here?
To recover the values, I tried:
unsigned char c0 = key >> 56;
unsigned char c1 = key >> 48;
unsigned char c2 = key >> 40;
etc., but it didn't work.
Is it because the original item->code chars are chars and not unsigned chars (the values are always positive though)?
Also, in an ideal world, the long's values would be recovered in a .NET DLL. Is it possible to do so in C#?
C# has a byte type for an 8-bit value, but otherwise the logic is similar.
Your | logic looks fine (except you should be shifting smallX by 16 and smallY by 0).
It would help if you gave a complete example.
But assuming that item->code[0] is a char or int (signed or unsigned), you need to convert it to a 64-bit type before shifting; otherwise you end up with undefined behaviour, and the wrong answer.
Something like
((unsigned __int64) item->code[0]) << 56
should work better.
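On the C# side, the same packing and unpacking can be done with ulong and explicit casts. A minimal sketch, assuming the corrected layout (compactNo at bit 32, smallX at bit 16, smallY at bit 0); Pack and Unpack are hypothetical helper names:
static ulong Pack(byte code0, byte code1, byte code2,
                  byte compactNo, ushort smallX, ushort smallY)
{
    // Casting to ulong before shifting avoids overflowing a 32-bit int.
    return ((ulong)code0 << 56) | ((ulong)code1 << 48) | ((ulong)code2 << 40) |
           ((ulong)compactNo << 32) | ((ulong)smallX << 16) | smallY;
}
static void Unpack(ulong key, out byte code0, out byte code1, out byte code2,
                   out byte compactNo, out ushort smallX, out ushort smallY)
{
    // Shifting right and narrowing keeps only each field's bits.
    code0 = (byte)(key >> 56);
    code1 = (byte)(key >> 48);
    code2 = (byte)(key >> 40);
    compactNo = (byte)(key >> 32);
    smallX = (ushort)(key >> 16);
    smallY = (ushort)key;
}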
I think stdint.h is very useful for this kind of implementation (fixed-size integers make the intent explicit). So here's the code:
#include <stdio.h>
#include <stdint.h>
uint8_t getValue8(int index, uint64_t container) {
    return (uint8_t)((container >> (index * 8)) & 0xFF);
}
void setValue8(int index, uint64_t* container, uint8_t value) {
    int shift = index * 8;
    // left part: everything at or above the target byte, with the target
    // byte cleared by the ~0xFF mask and replaced by value
    uint64_t mask = (uint64_t) ~0xFF;
    uint64_t left = (((*container >> shift) & mask) | value) << shift;
    // right part: everything below the target byte (complement)
    mask = ((uint64_t)1 << shift) - 1;
    uint64_t right = *container & mask;
    // update container
    *container = left | right;
}
int main() {
    uint64_t* container; // container: can hold 8 bytes (64-bit sized container)
    uint64_t containerValue = 0;
    int n = 8; // n must be <= 8 for a 64-bit container
    uint8_t chars[n]; // eight byte values to be stored
    // add/set values in the container
    container = &containerValue;
    int i;
    for (i = 0; i < n; ++i) {
        chars[i] = (uint8_t)((i + 1) * 10);
        setValue8(i, container, chars[i]);
        printf("setValue8(%d, container, %d)\n", i, chars[i]);
    }
    // get values from the container
    for (i = 0; i < n; ++i) {
        printf("getValue8(%d, container)=%d\n", i, getValue8(i, *container));
    }
    return 0;
}
The code uses only bit masks and some bitwise operations, so you can easily port it to C#. If you have any questions about it, just ask. I hope I have been helpful.
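For reference, a direct C# port might look like this (a sketch; since safe C# has no pointers, SetValue8 returns the updated container instead of modifying it in place):
static byte GetValue8(int index, ulong container)
{
    return (byte)((container >> (index * 8)) & 0xFF);
}
static ulong SetValue8(int index, ulong container, byte value)
{
    int shift = index * 8;
    // clear the target byte, drop in the new value, keep the lower bytes
    ulong left = (((container >> shift) & ~0xFFUL) | value) << shift;
    ulong right = container & ((1UL << shift) - 1);
    return left | right;
}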

Converting Ada code to C#

I need help converting this Ada code into C#; it's basically a checksum algorithm.
Ada:
CHECKSUM_VALUE := ((ROTATE_LEFT_1_BIT(CHECKSUM_VALUE)) xor (CURRENT_VALUE));
This is what I could come up with:
C#:
checksum = RotateLeft(checksum, rotateCount, sizeof(ushort) * 8) ^ word;
RotateLeft Function:
public static int RotateLeft(int value, ushort rotateCount, int dataSize)
{
    return (value << rotateCount) | (value >> (dataSize - rotateCount));
}
However, when comparing the checksum results from the Ada and C# algorithms, they do not match, so I think my conversion isn't correct. Input from anyone who has used Ada before would be really helpful.
Thanks
The issue seems to be with the C#, and perhaps not with your interpretation of the Ada code. If you are truly rotating a 16-bit unsigned number, as your post implies, then you need to mask off the upper 2 bytes of the resulting integer value so that they do not contribute to the answer. Casting a uint x to ushort in C# does the equivalent of x & 0x0000FFFF.
public static ushort RotateLeft(ushort value, int count)
{
    int left = value << count;
    int right = value >> (16 - count);
    return (ushort)(left | right);
}
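With that helper, the Ada line maps to something like the following (a sketch; checksum and word stand in for CHECKSUM_VALUE and CURRENT_VALUE and are assumed to be ushort):
ushort checksum = 0xFFFF; // hypothetical running CHECKSUM_VALUE
ushort word = 0x1234;     // hypothetical CURRENT_VALUE
checksum = (ushort)(RotateLeft(checksum, 1) ^ word);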
This answer is in C, since I don’t have a C# compiler.
You have value as an int, which is signed, so a right shift extends the sign bit into the vacated space. That means in (value << rotateCount) | (value >> (dataSize - rotateCount)), the right-hand half (value >> (dataSize - rotateCount)) needs its top bits masked off. And I don't know why you need dataSize; isn't it just the bit width of value?
I think a better solution would be to use unsigned, so that a right shift introduces zeros into the vacated space.
#include <stdio.h>
unsigned rotateLeft(unsigned value, int by) {
    // assumes 0 < by < bits; by == 0 would shift right by the full width,
    // which is undefined behaviour in C
    const unsigned bits = sizeof(value) * 8;
    return (value << by) | (value >> (bits - by));
}
int main() {
    unsigned input = 0x52525252;
    unsigned result = input;
    printf("input: %x\n", input);
    {
        int j;
        for (j = 0; j < 8; j++) {
            result = rotateLeft(result, 1);
            printf("result: %x\n", result);
        }
    }
    return 0;
}
The output is
input: 52525252
result: a4a4a4a4
result: 49494949
result: 92929292
result: 25252525
result: 4a4a4a4a
result: 94949494
result: 29292929
result: 52525252

Converting small C# checksum program into Java

I'm trying to build a simple ground control station for an RC airplane. I've almost finished it, but I'm having a LOT of trouble with the checksum calculation. I understand that the data types of Java and C# are different. I've attempted to account for that but I'm not sure I've succeeded. The program utilizes the CRC-16-CCITT method.
Here is my port:
public int crc_accumulate(int b, int crc) {
    int ch = (b ^ (crc & 0x00ff));
    ch = (ch ^ (ch << 4));
    return ((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
}
public byte[] crc_calculate() {
    int[] pBuffer = new int[]{255, 9, 19, 1, 1, 0, 0, 0, 0, 0, 2, 3, 81, 4, 3};
    int crcEx = 0;
    int clength = pBuffer.length;
    int[] X25_INIT_CRC = new int[]{255, 255};
    byte[] crcTmp = new byte[]{(byte)255, (byte)255};
    int crcTmp2 = ((crcTmp[0] & 0xff) << 8) | (crcTmp[1] & 0xff);
    crcTmp[0] = (byte)crcTmp2;
    crcTmp[1] = (byte)(crcTmp2 >> 8);
    System.out.println("pre-calculation: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2);
    if (clength < 1) {
        System.out.println("clength < 1");
        return crcTmp;
    }
    for (int i = 1; i < clength; i++) {
        crcTmp2 = crc_accumulate(pBuffer[i], crcTmp2);
    }
    crcTmp[0] = (byte)crcTmp2;
    crcTmp[1] = (byte)(crcTmp2 >> 8);
    System.out.print("crc calculation: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2);
    if (crcEx != -1) {
        System.out.println(" extraCRC["+crcEx+"]="+extraCRC[crcEx]);
        crcTmp2 = crc_accumulate(extraCRC[crcEx], crcTmp2);
        crcTmp[0] = (byte)crcTmp2;
        crcTmp[1] = (byte)(crcTmp2 >> 8);
        System.out.println("with extra CRC: 0x"+Integer.toHexString((crcTmp[0]&0xff))+" 0x"+Integer.toHexString((crcTmp[1]&0xff))+"; ushort: "+crcTmp2+"\n\n");
    }
    return crcTmp;
}
This is the original C# file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ArdupilotMega
{
    class MavlinkCRC
    {
        const int X25_INIT_CRC = 0xffff;
        const int X25_VALIDATE_CRC = 0xf0b8;

        public static ushort crc_accumulate(byte b, ushort crc)
        {
            unchecked
            {
                byte ch = (byte)(b ^ (byte)(crc & 0x00ff));
                ch = (byte)(ch ^ (ch << 4));
                return (ushort)((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
            }
        }

        public static ushort crc_calculate(byte[] pBuffer, int length)
        {
            if (length < 1)
            {
                return 0xffff;
            }
            // For a "message" of length bytes contained in the unsigned char array
            // pointed to by pBuffer, calculate the CRC
            // crcCalculate(unsigned char* pBuffer, int length, unsigned short* checkConst) < not needed
            ushort crcTmp;
            int i;
            crcTmp = X25_INIT_CRC;
            for (i = 1; i < length; i++) // skips header U
            {
                crcTmp = crc_accumulate(pBuffer[i], crcTmp);
                //Console.WriteLine(crcTmp + " " + pBuffer[i] + " " + length);
            }
            return (crcTmp);
        }
    }
}
I'm quite sure that the problem in my port lies between lines 1 and 5. I expect to get an output of 0x94 0x88, but instead the program outputs 0x2D 0xF4.
I would greatly appreciate it if someone could show me where I've gone wrong.
Thanks for any help,
Cameron
Alright, for starters let's clean up the C# code a little:
const int X25_INIT_CRC = 0xffff;

public static ushort crc_accumulate(byte b, ushort crc)
{
    unchecked
    {
        byte ch = (byte)(b ^ (byte)(crc & 0x00ff));
        ch = (byte)(ch ^ (ch << 4));
        return (ushort)((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4));
    }
}

public static ushort crc_calculate(byte[] pBuffer)
{
    ushort crcTmp = X25_INIT_CRC;
    for (int i = 1; i < pBuffer.Length; i++) // skips header U
        crcTmp = crc_accumulate(pBuffer[i], crcTmp);
    return crcTmp;
}
Now the biggest problem here is that there are no unsigned numeric types in Java, so you have to work around that by using the next bigger numeric type instead of ushort and byte and masking off the high bits as needed. You can also just drop the unchecked because Java has no overflow checking anyway. The end result is something like this:
public static final int X25_INIT_CRC = 0xffff;

public static int crc_accumulate(short b, int crc) {
    short ch = (short)((b ^ crc) & 0xff);
    ch = (short)((ch ^ (ch << 4)) & 0xff);
    return ((crc >> 8) ^ (ch << 8) ^ (ch << 3) ^ (ch >> 4)) & 0xffff;
}

public static int crc_calculate(short[] pBuffer) {
    int crcTmp = X25_INIT_CRC;
    for (int i = 1; i < pBuffer.length; i++) // skips header U
        crcTmp = crc_accumulate(pBuffer[i], crcTmp);
    return crcTmp;
}
For the input in your question ({ 255, 9, 19, 1, 1, 0, 0, 0, 0, 0, 2, 3, 81, 4, 3 }) the original C#, cleaned up C# and Java all produce 0xfc7e.
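For example, a quick check with the cleaned-up C# version (the buffer is the one from the question):
byte[] pBuffer = { 255, 9, 19, 1, 1, 0, 0, 0, 0, 0, 2, 3, 81, 4, 3 };
ushort crc = crc_calculate(pBuffer);
Console.WriteLine("0x" + crc.ToString("x4")); // prints 0xfc7e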

using bitarray to grab bits and build new value

If I take a uint value = 2921803 (0x2C954B), which is really a 4-byte package (4B 95 2C 00),
and I want to get the 16 least significant bits of the byte version of it using BitArray, how would I go about it?
This is how I am trying to do it:
byte[] bytes = BitConverter.GetBytes(value); //4B 95 2C 00 - bytes are moved around
BitArray bitArray = new BitArray(bytes); //entry [0] shows value for 1101 0010 (bits are reversed)
At this point, I am all turned around. I did try this:
byte[] bytes = BitConverter.GetBytes(value);
Array.Reverse(bytes);
BitArray bitArray = new BitArray(bytes);
Which gave me all the bits, but completely reversed, reading from [31] to [0].
Ultimately, I'm expecting/hoping to get 19349 (4B 95) as my answer.
This is how I was hoping to implement the function:
private uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    byte[] bytes = BitConverter.GetBytes(value);
    BitArray bitArray = new BitArray(bytes);
    uint outputMask = (uint)(1 << (bitsToGrab - 1));
    //now that i have all the bits, i can offset, and grab the ones i want
    for (int i = bitsToMoveOver; i < bitsToGrab; i++)
    {
        if ((Convert.ToByte(bitArray[i]) & 1) > 0)
        {
            outputVal |= outputMask;
        }
        outputMask >>= 1;
    }
}
The 16 least significant bits of 0x2C954B are 0x954B. You can get that as follows:
int value = 0x2C954B;
int result = value & 0xFFFF;
// result == 0x954B
If you want 0x4B95 then you can get that as follows:
int result = ((value & 0xFF) << 8) | ((value >> 8) & 0xFF);
// result == 0x4B95
Try this:
uint value = 0x002C954Bu;
int reversed = Reverse((int)value);
// reversed == 0x4B952C00
int result = Extract(reversed, 16, 16);
// result == 0x4B95
with
int Extract(int value, int offset, int length)
{
    return (value >> offset) & ((1 << length) - 1);
}

int Reverse(int value)
{
    return ((value >> 24) & 0xFF) | ((value >> 8) & 0xFF00) |
           ((value & 0xFF00) << 8) | ((value & 0xFF) << 24);
}
uint - 32 bits
Basically, you should set the 16 most significant bits to zero, so use the bitwise AND operator:
uint newValue = 0x0000FFFF & uintValue;
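If you still want the GetValue-style helper from your question, plain shifts and masks do the job without BitArray. A minimal sketch, assuming bitsToMoveOver counts from the least significant bit:
private static uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    // build a mask of bitsToGrab ones, then shift the wanted field down onto it
    uint mask = bitsToGrab >= 32 ? uint.MaxValue : (1u << bitsToGrab) - 1;
    return (value >> bitsToMoveOver) & mask;
}
// GetValue(0x2C954B, 16, 0) == 0x954B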

Convert C Define Macro to C#

How can I convert this C define macro to C#?
#define CMYK(c,m,y,k) ((COLORREF)((((BYTE)(k)|((WORD)((BYTE)(y))<<8))|(((DWORD)(BYTE)(m))<<16))|(((DWORD)(BYTE)(c))<<24)))
I have been searching for a couple of days and have not been able to figure this out. Any help would be appreciated.
C# doesn't support #define macros. Your choices are a conversion function or a COLORREF class with a converting constructor.
public class CMYKConverter
{
    public static int ToCMYK(byte c, byte m, byte y, byte k)
    {
        return k | (y << 8) | (m << 16) | (c << 24);
    }
}

public class COLORREF
{
    int value;

    public COLORREF(byte c, byte m, byte y, byte k)
    {
        this.value = k | (y << 8) | (m << 16) | (c << 24);
    }
}
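For example, with hypothetical byte values (c lands in the high byte, then m, y, k, matching the macro's layout):
int color = CMYKConverter.ToCMYK(0x11, 0x22, 0x33, 0x44);
// color == 0x11223344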
C# does not support C/C++-style macros. There is no #define equivalent for function-like expressions. You'll need to write this as an actual method of an object.
