How would you go about converting the following C #define into C#?
#define get16bits(d) (*((const uint16_t *) (d)))
#if !defined (get16bits)
#define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8) \
                      + (uint32_t)(((const uint8_t *)(d))[0]))
#endif
I know you would probably replace uint32_t with UInt32 and change the other types to their C# equivalents, but how would I proceed with making the above a static method? Would that be the best way of going about it?
Bob.
I do not know why you are checking whether get16bits is defined immediately after you define it; the only way it would not be defined is a preprocessor error, which would stop your compile anyway.
Now, that said, here's how you translate that godawful macro to C#:
aNumber & 0xFFFF;
In fact, here's how you translate that macro to C:
a_number & 0xFFFF;
You don't need all this casting wizardry just to get the lower 16 bits of a number. Here are more C defines to show you what I'm talking about (C# equivalents follow below):
#define getbyte(d) (d & 0xFF)
#define getword(d) (d & 0xFFFF)
#define getdword(d) (d & 0xFFFFFFFF)
#define gethighword(d) ((d & 0xFFFF0000) >> 16)
#define gethighbyte(d) ((d & 0xFF00) >> 8)
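The same helpers translate to C# along these lines; this is a sketch of my own, with method names that simply mirror the macros above:

public static class Bits
{
    public static uint GetByte(uint d)     { return d & 0xFF; }
    public static uint GetWord(uint d)     { return d & 0xFFFF; }
    public static uint GetDword(uint d)    { return d & 0xFFFFFFFF; }
    public static uint GetHighWord(uint d) { return (d & 0xFFFF0000) >> 16; }
    public static uint GetHighByte(uint d) { return (d & 0xFF00) >> 8; }
}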
Have you really used that macro in production code?
Getting the lower 16 bits of an integer is quite easy in C#:
int x = 0x12345678;
short y = (short)x; // gets 0x5678
If you want a static method for doing it, it's just as simple:
public static short Get16Bits(int value) {
    return (short)value;
}
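If d in the original macro is actually a pointer into a byte buffer, which is what the pointer casts suggest, then the closest C# translation reads two bytes and combines them. This is my own sketch, not part of the original answer:

// Reads 16 bits starting at offset, low byte first,
// mirroring the fallback branch of the C macro.
public static ushort Get16Bits(byte[] data, int offset) {
    return (ushort)((data[offset + 1] << 8) | data[offset]);
}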
Is there a functional equivalent in C++ of the C# .ToString("X4") call?
I've been scratching my head for a few days wondering why my sensor reports a different serial number in my C++ code than in the manufacturer's software (written in C#). The serial number is also printed on the sensor, and it matches what the manufacturer's C# code reports. On inspecting their source code, I found they use .ToString("X4") to convert the number to something "human readable", which makes sense (from an "oh, that's why it's different" point of view, not a "why on earth would you do that" one).
For further info - https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-numeric-format-strings
A similar question but C# to Java - C# .ToString("X4") equivalent in Java
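For reference, here is a minimal C# snippet (the value is purely illustrative) showing what "X4" produces, and therefore what the C++ code needs to reproduce:

using System;

public class Program
{
    public static void Main()
    {
        int serial = 0xFEDC;
        Console.WriteLine(serial);                // prints 65244 (decimal)
        Console.WriteLine(serial.ToString("X4")); // prints FEDC: uppercase hex, zero-padded to at least 4 digits
    }
}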
There is no readily available equivalent function in C++, but you could create one that works in a similar way:
#include <iostream>
#include <sstream>
#include <string>

template<class T>
std::string tostringX(unsigned len, T v) {
    std::ostringstream os;
    os << std::hex << v;                 // format as (lowercase) hexadecimal
    auto rv = os.str();
    if (rv.size() < len)                 // left-pad with zeros up to len digits
        rv = std::string(len - rv.size(), '0') + rv;
    return rv;
}

int main() {
    std::cout << tostringX(4, 0xFEDC) << '\n'; // outputs "fedc"
}
Another option is std::setfill and std::setw from <iomanip>:

#include <iostream>
#include <sstream>
#include <iomanip>

int main() {
    int x = 12;
    std::stringstream stream;
    stream << std::setfill('0') << std::setw(4) << std::hex << x;
    std::cout << stream.str();
}

Output: 000c
I guess that's the answer you're looking for.
After searching, I heard that UInt32 was the C# equivalent of C++ DWORD.
I tested this by comparing the results of the following expressions:
*(DWORD*)(1 + 0x2C) //C++
(UInt32)(1 + 0x2C) //C#
They produce completely different results. Can someone please tell me the correct match for DWORD in C#?
Your example is using the DWORD as a pointer, which is most likely an invalid pointer. I'm assuming you meant DWORD by itself.
DWORD is defined as unsigned long, which ends up being a 32-bit unsigned integer.
uint (System.UInt32) should be a match.
#include <stdio.h>

// I'm on macOS right now, so I'm defining DWORD
// the way that Win32 defines it.
typedef unsigned long DWORD;

int main() {
    DWORD d = (DWORD)(1 + 0x2C);
    int i = (int)d;
    printf("value: %d\n", i);
    return 0;
}
Output: 45
public class Program
{
    public static void Main()
    {
        uint d = (uint)(1 + 0x2C);
        System.Console.WriteLine("Value: {0}", d);
    }
}
Output: 45
DWORD definition from Microsoft:
typedef unsigned long DWORD, *PDWORD, *LPDWORD;
https://msdn.microsoft.com/en-us/library/cc230318.aspx
UINT32 definition from Microsoft:
typedef unsigned int UINT32;
https://msdn.microsoft.com/en-us/library/cc230386.aspx
Now you can see the difference: one is unsigned long and the other is unsigned int.
Your two snippets do completely different things. In your C++ code you are, for some strange reason, converting the value (1 + 0x2C) (a strange way to write 45) to a DWORD*, and then dereferencing it as if that address were a valid memory location. In the C# code you are simply converting between integer types.
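If the C++ snippet's intent really is to read a 32-bit value from memory, the closest safe C# analogue reads it out of a byte buffer rather than a raw address. This is my own sketch with a made-up buffer, not code from the question:

using System;

public class Program
{
    public static void Main()
    {
        // Hypothetical buffer standing in for the memory the C++ code dereferences.
        byte[] buffer = { 0x2D, 0x00, 0x00, 0x00 };
        uint value = BitConverter.ToUInt32(buffer, 0); // reads 4 bytes in host byte order (little-endian on x86)
        Console.WriteLine(value);                      // prints 45 for this illustrative buffer
    }
}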
I am trying to invoke a simple C# class method from C, using embedded Mono (as described here). I can invoke the method, but the C# function receives 0 as the argument instead of the number I pass in. The C# function returns a result, and the C code sees that return value correctly; I just can't pass arguments in. What am I doing wrong?
The C# assembly (MonoSide.cs) is:
using System;

public class MainEntryPoint
{
    static public void Main(string[] args)
    {
    }
}

namespace Fibonacci
{
    public class Fibonacci
    {
        // Closed-form (Binet) approximation of the nth Fibonacci number.
        public long FibonacciNumber(long entryNumber)
        {
            Console.Write(string.Format("(inside C#) FibonacciNumber({0})", entryNumber));
            var sqrt5 = Math.Sqrt(5);
            var phi = (1 + sqrt5) / 2;
            var exp = Math.Pow(phi, entryNumber);
            var sign = ((entryNumber & 1) == 0) ? -1 : 1;
            var entry = (exp + sign / exp) / sqrt5;
            Console.WriteLine(string.Format(" = {0}.", entry));
            return (long) entry;
        }
    }
}
Here is the C code:
#include <stdio.h>
#include <stdlib.h>
#include <mono/jit/jit.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/debug-helpers.h>

int main(int argc, char **argv)
{
    long long entryNumber = (argc > 1) ? atoi(argv[1]) : 10;

    // For brevity, null checks after each mono call are omitted.
    MonoDomain *domain = mono_jit_init("MainEntryPoint");
    MonoAssembly *monoAssembly = mono_domain_assembly_open(domain, "MonoSide.exe");
    char *monoArgs[] = {"Mono"};
    mono_jit_exec(domain, monoAssembly, 1, monoArgs);

    MonoImage *monoImage = mono_assembly_get_image(monoAssembly);
    MonoClass *monoClass = mono_class_from_name(monoImage, "Fibonacci", "Fibonacci");
    MonoMethod *monoMethod = mono_class_get_method_from_name(monoClass, "FibonacciNumber", 1);

    // Invoking method via thunk.
    typedef long long (*FibonacciNumber) (long long *);
    FibonacciNumber fibonacciNumber = mono_method_get_unmanaged_thunk(monoMethod);

    printf("Calling C# thunk function FibonacciNumber(%I64u)...\n", entryNumber);
    long long number = fibonacciNumber(&entryNumber);
    printf("Fibonacci number %I64u = %I64u\n", entryNumber, number);

    mono_jit_cleanup(domain);
    return 0;
}
I am compiling it with Dev-Cpp using this makefile:
test.exe: CSide.c MonoSide.exe
	gcc CSide.c -o test.exe -m32 -mms-bitfields -IC:/Progra~2/Mono/Include/mono-2.0 -LC:/Progra~2/Mono/lib -L/Embedded -lmono-2.0 -lmono

MonoSide.exe: MonoSide.cs
	mcs MonoSide.cs
The output is:
Calling C# thunk function FibonacciNumber(10)...
(inside C#) FibonacciNumber(0) = 0.
Fibonacci number 10 = 0
(Why these functions? This is just a sample, can-I-get-this-to-work program and not my final goal.)
Edit:
It works if I pass the function argument as a pointer in the C code. The C# receives it correctly. The above code has been modified from:
typedef long long (*FibonacciNumber) (long long);
...
long long number = fibonacciNumber(entryNumber);
to:
typedef long long (*FibonacciNumber) (long long *);
...
long long number = fibonacciNumber(&entryNumber);
To me, this means that the safest way to pass anything more complicated between C and C# is via buffers, with matching serializers and deserializers in C and C#.
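As a rough illustration of that buffer idea (my own sketch, not code from the question): the C side would serialize its arguments into a byte buffer and the C# side would deserialize them, for example with BitConverter:

using System;

public static class ArgReader
{
    // Hypothetical helper: reads a 64-bit argument that the C side has
    // serialized into a byte buffer (host byte order; little-endian on the x86 setup above).
    public static long ReadInt64(byte[] buffer, int offset)
    {
        return BitConverter.ToInt64(buffer, offset);
    }
}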
I have the following C++ code which needs to be converted to C#.
char tempMask;
tempMask = 0xff;
I am not too sure how to convert this. Could someone please let me know what the following line does in C++?
tempMask = 0xff;
Thanks.
It initializes tempMask with a byte containing the value 0xFF.
In C#, you can write
tempMask = (char)0xff;
or
tempMask = Convert.ToChar(0xff);
It's just a simple initialisation of the variable tempMask using hexadecimal notation: http://en.wikipedia.org/wiki/Hexadecimal
0xFF = 15*16^1 + 15*16^0 = 240 + 15 = 255.
In C# you cannot initialise a char from an int without an explicit conversion, but you can write char tempMask = (char)0xFF;, or, if you really want an 8-bit integer, try byte tempMask = 0xFF;
A signed char in C/C++ is not a 'char' in C#, it's an 'sbyte'. Note that the constant 0xff (255) is out of range for sbyte, so it needs an unchecked cast (or use byte if the value is really an unsigned mask):
sbyte tempMask;
tempMask = unchecked((sbyte)0xff); // -1, same bit pattern as 0xff
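Since tempMask looks like it is used as a bitmask, here is a minimal sketch of the byte-based translation (my own example, not from the answers above):

using System;

public class Program
{
    public static void Main()
    {
        byte tempMask = 0xFF;               // 8-bit unsigned, the usual choice for a mask
        Console.WriteLine(tempMask);        // prints 255
        Console.WriteLine(tempMask & 0x0F); // masking works as in C/C++, prints 15
    }
}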
I have a short variable in C# and want to change a specific bit. What is the easiest way to do it?
Do you mean something like this?
public static short SetBit(short input, int bit)
{
    return (short)(input | (1 << bit));
}

public static short ClearBit(short input, int bit)
{
    return (short)(input & ~(1 << bit));
}
You could even make them extension methods if you want to.
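For example, an extension-method version might look like this (a variation on the helpers above, not part of the original answer):

public static class ShortBitExtensions
{
    public static short SetBit(this short input, int bit)
    {
        return (short)(input | (1 << bit));
    }

    public static short ClearBit(this short input, int bit)
    {
        return (short)(input & ~(1 << bit));
    }
}

Usage then reads flags = flags.SetBit(3); instead of the static call.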
Take a look at bitwise operators:
short i = 4;
short k = 1;
Console.WriteLine(i | k); //should write 5
You can see a list of the operators under the Logical (boolean and bitwise) section here.
Also, I did some poking around and found this bitwise helper class; it might be worth checking out depending on your needs.