How can I convert this C define macro to C#?
#define CMYK(c,m,y,k) ((COLORREF)((((BYTE)(k)|((WORD)((BYTE)(y))<<8))|(((DWORD)(BYTE)(m))<<16))|(((DWORD)(BYTE)(c))<<24)))
I have been searching for a couple of days and have not been able to figure this out. Any help would be appreciated.
C# doesn't support #define macros. Your choices are a conversion function or a COLORREF class with a converting constructor.
public class CMYKConverter
{
    public static int ToCMYK(byte c, byte m, byte y, byte k)
    {
        return k | (y << 8) | (m << 16) | (c << 24);
    }
}

public class COLORREF
{
    int value;

    public COLORREF(byte c, byte m, byte y, byte k)
    {
        this.value = k | (y << 8) | (m << 16) | (c << 24);
    }
}
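Usage of the conversion function would look something like this (the byte values here are arbitrary, just to show the packing order):

int packed = CMYKConverter.ToCMYK(0x10, 0x20, 0x30, 0x40);
Console.WriteLine(packed.ToString("X8")); // 10203040: c in the top byte, k in the bottom byte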
C# does not support C/C++-style macros. There is no #define equivalent for function-like expressions, so you'll need to write this as an actual method.
I need some help porting this C# code over to C. I have it working in C# just fine, but I'm getting the wrong return value in C. Should I be breaking the bit shifting down into separate lines? I thought I had an issue with the data types, but I think I have the right ones. Here is the working C# code, which returns 0x03046ABE:
UInt32 goHigh(UInt32 x) { return (UInt32)(x & 0xFFFF0000); }
UInt32 goLow(UInt32 x) { return (UInt32)(x & 0xFFFF); }
UInt32 magic(UInt32 pass){
UInt32 key = pass;
UInt16 num = 0x0563;
key = (goLow(key) << 16) | (UInt16)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
return key; //returns 0x03046ABE
}
magic(0x01020304);
This is the incorrect C code that I'm trying to get working:
unsigned long goHigh(unsigned long x) {
return (unsigned long )(x & 0xFFFF0000); }
unsigned long goLow(unsigned long x) {
return (unsigned long )(x & 0xFFFF); }
unsigned long magic(unsigned long pass){
unsigned long key = pass;
unsigned int num = 0x0563;
key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
return key;
}
magic(0x01020304); //returns 0xb8c6a8e
Most likely the problem is here:
key = (goLow(key) << 16) | (unsigned int)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
                           ^^^^^^^^^^^^^^
The (unsigned int) cast, which you expect to be 16 bits wide, may be larger on other machines. The same goes for unsigned long, which may be 64 bits instead of the 32 you expect.
To be sure, use uint32_t and uint16_t. You have to #include <stdint.h> to be able to use them.
On your platform, unsigned long and unsigned int are not the sizes you expect (32 and 16 bits respectively). Replace the primitive types with fixed-size types and it will produce the same output; I've also removed redundant casts.
These types can be found in stdint.h:
#include <stdint.h>

uint32_t goHigh(uint32_t x) {
    return (x & 0xFFFF0000);
}

uint32_t goLow(uint32_t x) {
    return (x & 0xFFFF);
}

uint32_t magic(uint32_t pass) {
    uint32_t key = pass;
    uint32_t num = 0x0563;
    key = (goLow(key) << 16) | (uint16_t)(((num >> 3) | (num << 13)) ^ (goHigh(key) >> 16));
    return key;
}
The following code is used to pack multiple values into a long. The long is used as a key in a C++ unordered_map. It lets me key the map on a single number instead of a complex structure with comparisons on each property, and I want the map lookups to be as efficient as possible.
DWORD tmpNo = object->room->details->No;
unsigned char compactNo = tmpNo ;
unsigned __int16 smallX = object->x;
unsigned __int16 smallY = object->y;
unsigned __int64 longCode = 0;
longCode = (item->code[0] << 56) |
(item->code[1] << 48) |
(item->code[2] << 40) |
(compactNo << 32) |
(smallX << 24) |
(smallY << 8);
Am I using the | operator correctly here?
To recover the values, I tried:
unsigned char c0 = key >> 56;
unsigned char c1 = key >> 48;
unsigned char c2 = key >> 40;
etc., but it didn't work.
Is it because the original item->code chars are chars and not unsigned chars (the values are always positive though) ?
Also, in an ideal world, the long's values would be recovered in a .NET DLL. Is it possible to do so in C# ?
C# has a byte type for an 8-bit value, but otherwise the logic is similar.
Your | logic looks fine (except you should be shifting smallX by 16 and smallY by 0).
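On the C# side the shifts and masks are the same; here is a rough sketch of packing and unpacking with those corrected shift amounts (the method and parameter names are just illustrative):

static ulong Pack(byte code0, byte code1, byte code2, byte no, ushort x, ushort y)
{
    // Cast each piece to ulong before shifting so the high bits are not lost.
    return ((ulong)code0 << 56) | ((ulong)code1 << 48) | ((ulong)code2 << 40)
         | ((ulong)no << 32) | ((ulong)x << 16) | y;
}

static void Unpack(ulong key, out byte code0, out byte code1, out byte code2,
                   out byte no, out ushort x, out ushort y)
{
    code0 = (byte)(key >> 56);
    code1 = (byte)(key >> 48);
    code2 = (byte)(key >> 40);
    no    = (byte)(key >> 32);
    x     = (ushort)(key >> 16);
    y     = (ushort)key;
}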
It would help if you gave a complete example.
But assuming that item->code[0] is a char or int (signed or unsigned), you need to convert it to a 64-bit type before shifting, otherwise you end up with undefined behaviour and the wrong answer.
Something like
((unsigned __int64) item->code[0]) << 56
should work better.
I think stdint.h is very useful for this kind of implementation (fixed-size integers make the layout explicit). So here's the code:
#include <stdio.h>
#include <stdint.h>
uint8_t getValue8(int index, uint64_t container) {
    return (uint8_t)((container >> (index * 8)) & 0xFF);
}

void setValue8(int index, uint64_t* container, uint8_t value) {
    int shift = index * 8;
    // left part: everything from the target byte upward, with that byte
    // cleared by the ~0xFF mask and replaced by the new value
    uint64_t mask = (uint64_t) ~0xFF;
    uint64_t left = (((*container >> shift) & mask) | value) << shift;
    // right part: the bits below the target byte, kept unchanged
    mask = ((uint64_t)1 << shift) - 1;
    uint64_t right = *container & mask;
    // update container
    *container = left | right;
}
int main() {
    uint64_t* container; // container: can contain 8 chars (64-bit sized container)
    uint64_t containerValue = 0;
    int n = 8; // n value must be <= 8 considering a 64-bit sized container
    uint8_t chars[n]; // eight char values to be stored
    // add/set values to container
    container = &containerValue;
    int i;
    for (i = 0; i < n; ++i) {
        chars[i] = (uint8_t)((i + 1) * 10);
        setValue8(i, container, chars[i]);
        printf("setValue8(%d, container, %d)\n", i, chars[i]);
    }
    // get values from container
    for (i = 0; i < n; ++i) {
        printf("getValue8(%d, container)=%d\n", i, getValue8(i, *container));
    }
    return 0;
}
The code uses only bit masks and a few bitwise operations, so you can easily port it to C#. If you have any questions about it, just ask. I hope I have been helpful.
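For example, a rough C# port of the two helpers above might look like this (the names simply mirror the C version and are not from any library):

static byte GetValue8(int index, ulong container)
{
    // Extract the byte stored at position 'index' (0 = least significant byte).
    return (byte)((container >> (index * 8)) & 0xFF);
}

static ulong SetValue8(int index, ulong container, byte value)
{
    int shift = index * 8;
    ulong cleared = container & ~(0xFFUL << shift); // zero out the target byte
    return cleared | ((ulong)value << shift);       // drop in the new byte
}

Since C# passes ulong by value, SetValue8 returns the updated container instead of taking a pointer.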
In my code I generate a grid of objects with Instantiate, but I don't know how to keep a reference to each of them by their coordinates (I rely on int coordinates to keep things simple). At first I looked at GameObject[,], but for that I need to know the maximum size my "map" will grow to, and I don't have that information since the map is effectively infinite: it is generated as the player moves. Another limitation of GameObject[,] is that it can't store negative indexes, so I would not be able to use it to store my x and y values.
What do you suggest me to use?
Thank you.
How sparse will you be?
If you have a lot of gaps, then something like the following will work well:
public struct GameObjectCoordinate : IEquatable<GameObjectCoordinate>
{
public int X { get; set; }
public int Y { get; set; }
public bool Equals(GameObjectCoordinate other)
{
return X == other.X && Y == other.Y;
}
public override bool Equals(object obj)
{
return obj is GameObjectCoordinate && Equals((GameObjectCoordinate)obj);
}
public override int GetHashCode()
{
/* There's a speed advantage in something simple like
unchecked
{
return (X << 16 | X >> 16) ^ Y;
}
and a distribution advantage in the code here. It can be worth
trying both, but be aware that the code commented out here is better
at small well-spread key values, and that below at large numbers of
values especially if many are similar, so do any testing with real
values your application will deal with.
*/
unchecked
{
ulong c = 0xDEADBEEFDEADBEEF + ((ulong)X << 32) + (ulong)Y;
ulong d = 0xE2ADBEEFDEADBEEF ^ c;
ulong a = d += c = c << 15 | c >> -15;
ulong b = a += d = d << 52 | d >> -52;
c ^= b += a = a << 26 | a >> -26;
d ^= c += b = b << 51 | b >> -51;
a ^= d += c = c << 28 | c >> -28;
b ^= a += d = d << 9 | d >> -9;
c ^= b += a = a << 47 | a >> -47;
d ^= c += b << 54 | b >> -54;
a ^= d += c << 32 | c >> 32;
a += d << 25 | d >> -25;
return (int)(a >> 1);
}
}
}
Dictionary<GameObjectCoordinate, GameObject> gameObjects = new Dictionary<GameObjectCoordinate, GameObject>();
If you have an object at almost every position, then storing chunks of arrays, and using a dictionary like the one above to hold the chunks, would likely be better.
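A minimal sketch of that chunked layout, reusing the GameObjectCoordinate struct above as the chunk key (ChunkSize, chunks, and GetAt are illustrative names, and the fields are assumed to live in whatever class manages the map):

private const int ChunkSize = 16;

private Dictionary<GameObjectCoordinate, GameObject[,]> chunks =
    new Dictionary<GameObjectCoordinate, GameObject[,]>();

public GameObject GetAt(int x, int y)
{
    // Floor-divide so negative world coordinates map to the correct chunk.
    var chunkKey = new GameObjectCoordinate
    {
        X = (int)Math.Floor(x / (double)ChunkSize),
        Y = (int)Math.Floor(y / (double)ChunkSize)
    };

    GameObject[,] chunk;
    if (!chunks.TryGetValue(chunkKey, out chunk))
        return null;

    // Local indices inside the chunk are always in [0, ChunkSize).
    int localX = x - chunkKey.X * ChunkSize;
    int localY = y - chunkKey.Y * ChunkSize;
    return chunk[localX, localY];
}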
If I understand correctly, you have a sparse map of unknown dimensions. In that case use a tree, such as a k-d tree. All operations are more complex than a direct lookup in a table, but they use far less space. It will also allow negative positions like (-1, -3). See Wikipedia for details.
I have a byte that represents two values.
The first bit represents the sequence number.
The rest of the bits represent the actual content.
In C, I could easily parse this out by the following:
typedef struct
{
byte seqNumber : 1;
byte content : 7;
}
MyPacket;
Then I can easily cast the input to MyPacket:
char* inputByte = "U"; // binary 01010101
MyPacket * myPacket = (MyPacket*)inputByte;
Then
myPacket->seqNumber = 1
myPacket->content = 42
How can I do the same thing in C#?
Thank you
kab
I would just use properties. Make getters and setters for the two parts that modify the appropriate bits in the true representation.
class MyPacket {
    public byte packed = 0;

    // Bit 0 is the sequence number and bits 1-7 are the content,
    // matching the bit-field layout in the question.
    public int seqNumber {
        get { return packed & 1; }
        set { packed = (byte)((packed & ~1) | (value & 1)); }
    }

    public int content {
        get { return packed >> 1; }
        set { packed = (byte)((packed & 1) | (value << 1)); }
    }
}
C# likes to keep its types simple, so I am betting this is the closest you are going to get. Obviously it does not net you the performance benefit of the C version, but it salvages the meaning.
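A quick usage sketch, using the 'U' byte from the question and the property layout above:

var packet = new MyPacket();
packet.packed = 0x55;                 // 'U', binary 01010101

Console.WriteLine(packet.seqNumber);  // 1
Console.WriteLine(packet.content);    // 42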
In order to utilize a byte to its fullest potential, I'm attempting to store two unique values into a byte: one in the first four bits and another in the second four bits. However, I've found that, while this practice allows for optimized memory allocation, it makes changing the individual values stored in the byte difficult.
In my code, I want to change the first set of four bits in a byte while preserving the value of the second four bits in the same byte. While bitwise operations let me easily retrieve and manipulate the first four bits, I'm finding it difficult to combine the new value with the second set of four bits. The question is: how can I erase the first four bits of a byte (or, more accurately, set them all to zero) and then write a new set of 4 bits in their place, preserving the last 4 bits of the byte while changing the first four?
Here's an example:
// Changes the first four bits in a byte to the parameter value
public void changeFirstFourBits(byte newFirstFour)
{
// If 'newFirstFour' is 0101 in binary, make 'value' 01011111 in binary, changing
// the first four bits but leaving the second four alone.
}
private byte value = 255; // binary: 11111111
Use bitwise AND (&) to clear out the old bits, shift the new bits to the correct position and bitwise OR (|) them together:
value = (byte)((value & 0xF) | (newFirstFour << 4));
Here's what happens:
value : abcdefgh
newFirstFour : 0000xyzw
0xF : 00001111
value & 0xF : 0000efgh
newFirstFour << 4 : xyzw0000
(value & 0xF) | (newFirstFour << 4) : xyzwefgh
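Plugged into the question's stub, with the cast back to byte that the assignment needs:

private byte value = 255; // binary: 11111111

// Replaces the high four bits of 'value' with newFirstFour,
// leaving the low four bits untouched.
public void changeFirstFourBits(byte newFirstFour)
{
    value = (byte)((value & 0xF) | (newFirstFour << 4));
}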
When I have to do bit-twiddling like this, I make a readonly struct to do it for me. A four-bit integer is called a nybble, of course:
struct TwoNybbles
{
    private readonly byte b;

    public byte High { get { return (byte)(b >> 4); } }
    public byte Low { get { return (byte)(b & 0x0F); } }

    public TwoNybbles(byte high, byte low)
    {
        this.b = (byte)((high << 4) | (low & 0x0F));
    }
}
And then add implicit conversions between TwoNybbles and byte. Now you can just treat any byte as having a High and Low nybble without putting all that ugly bit twiddling in your mainline code.
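The conversions might look something like this (a sketch; these operators go inside the struct):

public static implicit operator byte(TwoNybbles t)
{
    return (byte)((t.High << 4) | t.Low);
}

public static implicit operator TwoNybbles(byte b)
{
    return new TwoNybbles((byte)(b >> 4), (byte)(b & 0x0F));
}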
You first mask out the high four bits using value & 0x0F. Then you shift the new bits into the high four positions using newHighFour << 4, and finally you combine the two with a binary or:
public void changeHighFourBits(byte newHighFour)
{
    value = (byte)((value & 0x0F) | (newHighFour << 4));
}
public void changeLowFourBits(byte newLowFour)
{
    value = (byte)((value & 0xF0) | newLowFour);
}
I'm not really sure what your method there is supposed to do, but here are some methods for you:
void setHigh(ref byte b, byte val) {
    b = (byte)((b & 0xf) | (val << 4));
}
byte high(byte b) {
    return (byte)((b & 0xf0) >> 4);
}
void setLow(ref byte b, byte val) {
    b = (byte)((b & 0xf0) | val);
}
byte low(byte b) {
    return (byte)(b & 0xf);
}
Should be self-explanatory.
public int SplatBit(int Reg, int Val, int ValLen, int Pos)
{
    int mask = ((1 << ValLen) - 1) << Pos;
    int newv = Val << Pos;
    int res = (Reg & ~mask) | newv;
    return res;
}
Example:
Reg = 135
Val = 9 (ValLen = 4, because 9 = 1001)
Pos = 2
135 = 10000111
9 = 1001
9 << Pos = 100100
Result = 10100111
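A quick sanity check of that example, calling the SplatBit method above:

int result = SplatBit(135, 9, 4, 2);
Console.WriteLine(result);                      // 167
Console.WriteLine(Convert.ToString(result, 2)); // 10100111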
A quick look indicates that a bitwise AND can be done with the & operator. So to clear the first four bits you should be able to do:
byte value1 = 255; // 11111111
byte value2 = 15;  // 00001111
return (byte)(value1 & value2);
Assuming newVal contains the value you want to store in origVal.
Do this for the 4 least significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF0) + newVal);
and this for the 4 most significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF) + (newVal << 4));
I know you asked specifically about clearing out the first four bits, which has been answered several times, but I wanted to point out that if you have two values <= decimal 15, you can combine them into 8 bits simply with this:
public int setBits(int upperFour, int lowerFour)
{
return upperFour << 4 | lowerFour;
}
The result will be xxxxyyyy where
xxxx = upperFour
yyyy = lowerFour
And that is what you seem to be trying to do.
Here's some code, but I think the earlier answers will do it for you. This is just some test code to copy and paste into a simple console project (the WriteBits method may be of help):
static void Main(string[] args)
{
int b1 = 255;
WriteBits(b1);
int b2 = b1 >> 4;
WriteBits(b2);
int b3 = b1 & ~0xF ;
WriteBits(b3);
// Store 5 in first nibble
int b4 = 5 << 4;
WriteBits(b4);
// Store 8 in second nibble
int b5 = 8;
WriteBits(b5);
// Store 5 and 8 in first and second nibbles
int b6 = 0;
b6 |= (5 << 4) + 8;
WriteBits(b6);
// Store 2 and 4
int b7 = 0;
b7 = StoreFirstNibble(2, b7);
b7 = StoreSecondNibble(4, b7);
WriteBits(b7);
// Read First Nibble
int first = ReadFirstNibble(b7);
WriteBits(first);
// Read Second Nibble
int second = ReadSecondNibble(b7);
WriteBits(second);
}
static int ReadFirstNibble(int storage)
{
return storage >> 4;
}
static int ReadSecondNibble(int storage)
{
return storage &= 0xF;
}
static int StoreFirstNibble(int val, int storage)
{
return storage |= (val << 4);
}
static int StoreSecondNibble(int val, int storage)
{
return storage |= val;
}
static void WriteBits(int b)
{
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(b),0));
}