Direct cast from int to char in C#

I have the following C++ code which needs to be converted to C#:
char tempMask;
tempMask = 0xff;
I am not too sure how to convert this. Could someone please let me know what the following line does in C++?
tempMask = 0xff;
Thanks.

It initializes tempMask with a byte containing the value 0xFF.
In C#, you can write
tempMask = (char)0xff;
or
tempMask = Convert.ToChar(0xff);

It's just a simple initialisation of the variable tempMask with a value written in hexadecimal notation. http://en.wikipedia.org/wiki/Hexadecimal
0xFF = 15*16^1 + 15*16^0 = 255.
In C# you cannot initialize a char with an int value without an explicit conversion, but you can write char tempMask = (char)0xFF;, or, if you really want an 8-bit integer, try byte tempMask = 0xFF;.

A signed char in C/C++ is not a char in C#, it's an sbyte. Note that 0xFF is outside the range of sbyte (-128 to 127), so an unchecked cast is required:
sbyte tempMask;
tempMask = unchecked((sbyte)0xff); // -1
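Putting the answers together, a short sketch of the options (the values in the comments are what each variable ends up holding):
char asChar = (char)0xff;               // the character U+00FF
char viaConvert = Convert.ToChar(0xff); // same character, via Convert
byte asByte = 0xff;                     // 255; closest to the C++ intent
sbyte asSbyte = unchecked((sbyte)0xff); // -1, if the original char was signed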

Related

In C# convert numbers to bytes without allocation

How can I convert a numerical value (e.g. float, short, int, ...) to several byte values without having to allocate memory on the heap for an array, like System.BitConverter.GetBytes does?
Something like this:
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    // Demo code, just to show which results I need.
    var bytes = System.BitConverter.GetBytes(input);
    byte0 = bytes[0];
    byte1 = bytes[1];
}
Note: I'm restricted to .NET Framework 4.8 and therefore (I think) C# 7.3.
Just cast and shift?
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    byte0 = (byte)input;
    byte1 = (byte)(input >> 8);
}
Note that you can simply reverse the order for different endianness.
Note that if you are using a "checked" context, you would need to mask too:
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    byte0 = (byte)(input & 0xFF);
    byte1 = (byte)((input >> 8) & 0xFF);
}
(in an unchecked context, the cast to byte is sufficient by itself - additional bits are discarded)
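A quick sanity check of the shift version, assuming little-endian hardware (where the results match BitConverter.GetBytes):
short value = 0x1234;
GetBytes(value, out byte b0, out byte b1);
Console.WriteLine($"{b0:X2} {b1:X2}"); // prints "34 12", same as BitConverter.GetBytes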
If you are allowed to use unsafe code, then for non-integral value types (such as float, double and decimal) the fastest way is to use a pointer.
For example, for a double:
double x = 123.456;
unsafe
{
    byte* p = (byte*)&x;
    // Doubles have 8 bytes.
    byte b0 = *p++;
    byte b1 = *p++;
    byte b2 = *p++;
    byte b3 = *p++;
    byte b4 = *p++;
    byte b5 = *p++;
    byte b6 = *p++;
    byte b7 = *p;
}
You should be able to see how to modify this for other types.
For integral types such as short, int and long you could use the approach given in the other answer from Mark Gravell - but you can also use the pointer approach above for integral types.
IMPORTANT: When using the pointer approach, endianness is significant! The order in which the bytes are assigned is the order in which they are stored in memory, which can differ by processor architecture.
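If you need a fixed byte order regardless of architecture, BitConverter.IsLittleEndian lets you normalize the pointer walk. A sketch for double (the method name and the caller-supplied buffer are illustrative):
public static unsafe void GetBytesLittleEndian(double input, byte[] dest)
{
    // dest must have room for sizeof(double) == 8 bytes.
    byte* p = (byte*)&input;
    for (int i = 0; i < sizeof(double); i++)
    {
        // On big-endian hardware, walk the bytes in reverse so the
        // output is always little-endian.
        int src = BitConverter.IsLittleEndian ? i : sizeof(double) - 1 - i;
        dest[i] = p[src];
    }
}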

Convert byte to sbyte without changing bits in C#

I want to convert a byte to an sbyte, without changing any bits.
byte b = 0x84;
sbyte sb = unchecked((sbyte)b);
Console.WriteLine("0x" + Convert.ToString(b, 16));
Console.WriteLine("0x" + Convert.ToString(sb, 16));
The result of this will be:
0x84
0xff84
I understand what the result means and why it happens. However, I cannot seem to find out what I can do to avoid this behaviour. How can I copy the actual binary value of a byte and get it inside an sbyte?
The bits are not changing between b and sb at all. This behavior comes from Convert.ToString(): there just isn't an overload of Convert.ToString() that takes an sbyte and a base. The closest match is Convert.ToString(Int16, Int32), so sb is being sign-extended to 16 bits.
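A short demonstration of that sign extension (the commented values follow from two's complement):
sbyte sb = unchecked((sbyte)0x84); // bit pattern 1000_0100, value -124
short widened = sb;                // implicitly sign-extended to 0xFF84
Console.WriteLine(Convert.ToString(widened, 16)); // prints "ff84"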
Using ToString generally produces less surprising results and doesn't involve unexpected conversions (like the sbyte -> short described in shf301's answer).
byte b = 0x84;
sbyte sb = unchecked((sbyte)b);
Console.WriteLine("0x" + b.ToString("x"));
Console.WriteLine("0x" + sb.ToString("x"));
Or just use format directly:
String.Format("0x{0:x}", unchecked((sbyte)0x84))

C# byte array conversion to VB.NET

As per my last question I'm borrowing some code from the Opus project to integrate into VB.NET software.
Consider
byte[] buff = _encoder.Encode(segment, segment.Length, out len);
which I've translated to:
Dim buff(wavEnc.Encode(segment, segment.Length, len)) As Byte
It is throwing a:
Value of type '1-dimensional array of Byte' cannot be converted to 'Integer' error...
How can I fix this problem?
Try this:
Dim buff = wavEnc.Encode(segment, segment.Length, len)
Of course you can do a direct translation of the C#:
Dim buff As Byte() = wavEnc.Encode(segment, segment.Length, len)
No need for a type at all - let the compiler figure it out.
_encoder.Encode() is the right-hand side of an assignment. The left-hand side is a byte array.
The way you are using it in your VB sample is as an array dimensioner: an Integer.

How to convert C/C++ struct to C#

I want to write a plugin for a program. The program can only use C/C++ *.dll libraries. I want to write my plugin in C# though, so I thought I could just call my C# functions from the C++ dll through COM. This works fine but now I need to access a struct provided by the original program. In C++ this struct looks like this:
struct asdf {
    char mc[64];
    double md[10];
    unsigned char muc[5];
    unsigned char muc0 : 1;
    unsigned char muc1 : 1;
    unsigned char muc2 : 6;
    unsigned char muc3;
    another_struct st;
};
To be able to pass that struct to C# as a parameter I tried to built exactly the same struct in C#. I tried the following, but it gives me an access violation:
struct asdf {
    char[] mc;
    double[] md;
    byte[] muc;
    byte muc0;
    byte muc1;
    byte muc2;
    byte muc3;
    another_struct st;
};
What do I have to change?
If you want the arrays inline, you need to use fixed-size buffers, which require an unsafe struct. I'm assuming that the C char is a byte. Handling muc0, muc1, etc. requires some custom properties: the three bit-fields are packed into a single byte, and you extract them with shifts and masks.
unsafe struct asdf
{
    public fixed byte mc[64];
    public fixed double md[10];
    public fixed byte muc[5];
    private byte mucbyte;
    // These properties extract muc0, muc1, and muc2.
    public byte muc0 { get { return (byte)(mucbyte & 0x01); } }
    public byte muc1 { get { return (byte)((mucbyte >> 1) & 0x01); } }
    public byte muc2 { get { return (byte)((mucbyte >> 2) & 0x3f); } }
    public byte muc3;
    public another_struct st;
}
I would change it slightly: use a string, and make sure you initialize your arrays to the same sizes used in the C++ code when you use the struct in your program.
struct asdf {
    string mc;
    double[] md;
    byte[] muc;
    byte muc0;
    byte muc1;
    byte muc2;
    byte muc3;
};
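If the struct crosses the native boundary through P/Invoke marshaling (e.g. Marshal.PtrToStructure or a DllImport signature) rather than through raw pointers, attribute-based marshaling is another option. A sketch, assuming another_struct is mirrored the same way on both sides and treating the three bit-fields as one packed byte (mucBits is an illustrative name):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct asdf
{
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 64)]
    public byte[] mc;    // char mc[64]
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 10)]
    public double[] md;  // double md[10]
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)]
    public byte[] muc;   // unsigned char muc[5]
    public byte mucBits; // muc0, muc1, muc2 packed into one byte
    public byte muc3;
    public another_struct st;
}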

Setting ints to negative values using hexadecimal literals in C#

Is there any way to set an int to a negative value using a hexadecimal literal in C#? I checked the specification on Integer literals but it didn't mention anything.
For example:
int a = -1; // Allowed
int b = 0xFFFFFFFF; // Not allowed?
Hexadecimal notation is clearer for my application, and I'd prefer not to use uints because I would need to do some extra casting.
Use the unchecked keyword.
unchecked
{
    int b = (int)0xFFFFFFFF;
}
or even shorter
int b = unchecked((int)0xFFFFFFFF);
You can also write a negative hexadecimal literal directly: -0x1 is -1 in C#.
You have to cast it and put it in an unchecked context.
unchecked {
    int i = (int)0xFFFFFFFF;
}
This will get the job done without unchecked operations or casting.
int i = 0xFF << 24 | 0xFF << 16 | 0xFF << 8 | 0xFF;
You can use the "Convert" class:
string hex = "FF7F";
int myInt = Convert.ToInt32(hex, 16);
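Note that when all 32 bits are supplied, Convert.ToInt32 interprets the hexadecimal string as two's complement, so a negative value comes back without any cast:
int minusOne = Convert.ToInt32("FFFFFFFF", 16); // -1
int positive = Convert.ToInt32("FF7F", 16);     // 65407; only 16 bits given, so positive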
