How can I declare a 2-byte hex constant in C# and compare it in an "if", like this Java code:
public static final int MSG_GENERAL_RESPONSE = 0x8001;
int type = buf.readUnsignedShort();
if (type == MSG_TERMINAL_REGISTER) {
}
Is a 2-byte type not possible in C#? I tried and haven't found a way. How can I translate this code to C#?
It would be OK to use int in your case (a signed 32-bit integer type), but ushort (an unsigned 16-bit integer) looks like the more precise match here:
public const ushort MSG_GENERAL_RESPONSE = 0x8001;
// ...
ushort type = buf.readUnsignedShort();
if (type == MSG_TERMINAL_REGISTER) {
}
Note that if you want to write a negative value as a hexadecimal literal (with the convention that a leading digit of 8 through F means two's complement), you need the following clumsy notation:
// negative:
public const short MSG_GENERAL_RESPONSE = unchecked((short)0x8001);
You cannot use final in C#. For a class member, you can use static readonly or const (the latter is implicitly static). For a local variable you can use const.
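For illustration, here are the two class-member forms side by side (MSG_HEARTBEAT is a made-up name for the sake of the example, not part of the original protocol):
public const ushort MSG_GENERAL_RESPONSE = 0x8001;    // compile-time constant, implicitly static
public static readonly ushort MSG_HEARTBEAT = 0x0002; // hypothetical example; initialized at runtime, may use non-constant expressions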
Related
I need to convert some chars to int values bigger than 256. This is my function to convert an int to a char; I need to reverse it:
public static string chr(int number)
{
return ((char)number).ToString();
}
The function below doesn't work - it returns only 0-255, but I need ord(chr(i)) == i:
public static int ord(string str)
{
return Encoding.Unicode.GetBytes(str)[0];
}
The problem is that your ord function truncates the character to the first byte of its UTF-16 (Encoding.Unicode) representation. This expression
Encoding.Unicode.GetBytes(str)[0]
// ^^^
returns only the first element of the byte array, so the result is bound to stay within the 0..255 range.
You can fix your ord method as follows:
public static int Ord(string str) {
    // use both bytes of the UTF-16 code unit, not just the first
    var bytes = Encoding.Unicode.GetBytes(str);
    return BitConverter.ToChar(bytes, 0);
}
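A quick round-trip check of the fixed pair, using the chr function from the question (0x1033 is an arbitrary code point above 255; this assumes a little-endian platform, where BitConverter matches Encoding.Unicode's byte order):
Console.WriteLine(Ord(chr(0x1033)) == 0x1033); // True - both UTF-16 bytes survive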
Since you don't care much about encodings and you directly cast an int to a char in your chr() function, why don't you simply try the other way around?
Console.WriteLine((int)'\x1033');
Console.WriteLine((char)(int)("\x1033"[0]) == '\x1033');
Console.WriteLine(((char)0x1033) == '\x1033');
char is 2 bytes long (UTF-16 encoding) in C#
char c1; // TODO initialize me
int i = System.Convert.ToInt32(c1); // could be greater than 255
char c2 = System.Convert.ToChar(i); // c2 == c1
System.Convert on MSDN : https://msdn.microsoft.com/en-us/library/system.convert(v=vs.110).aspx
Characters and bytes are not the same thing in C#. The conversion between char and int is a simple one: (char)intValue or (int)myString[x].
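A minimal sketch of both directions (0x1033 is an arbitrary value):
char c = (char)0x1033; // int -> char
int i = "\x1033"[0];   // char (via the string indexer) -> int; yields 0x1033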
I have an integer value that I want to convert to base 64. I tried the following code:
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It gives the output "ewAAAA==", which is 8 characters.
I converted the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of this code is 7B.
If we do this in math, the outcomes should correspond. How do they differ, and how can I get the matching value in base 64 as well? (I found the base-64 conversion above on the internet.)
The question is rather unclear... "How does it differ?" - well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is doing an arithmetic representation; one is a byte serialization format - very different
one is using little-endian arithmetic (assuming a standard CPU), the other is using big-endian arithmetic
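To see the serialization view concretely, here is what the code in the question actually feeds to the base-64 encoder (byte values assume a little-endian CPU):
byte[] raw = BitConverter.GetBytes(123);        // { 0x7B, 0x00, 0x00, 0x00 }
Console.WriteLine(Convert.ToBase64String(raw)); // "ewAAAA==" - all four bytes are encoded, hence 8 characters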
To get a comparable base-64 result, you probably need to code it manually (Convert only supports base-2, base-8, base-10, and base-16 for arithmetic conversions). Perhaps (note: not optimized):
static void Main()
{
string b64 = ConvertToBase64Arithmetic(123);
}
// uint because I don't care to worry about sign
static string ConvertToBase64Arithmetic(uint i)
{
const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
StringBuilder sb = new StringBuilder();
do
{
sb.Insert(0, alphabet[(int)(i % 64)]);
i = i / 64;
} while (i != 0);
return sb.ToString();
}
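Tracing it with the value from the question: 123 = 1 * 64 + 59, so the loop first inserts alphabet[59] = '7' and then prepends alphabet[1] = 'B':
Console.WriteLine(ConvertToBase64Arithmetic(123)); // prints "B7"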
I need to get the data bit width of a type. How can I do that?
For example, how can I write functions like the following?
int x = 30;
Type t = x.GetType();
bool sign = IsSignedType(t); // int is signed type, so it's true
int width = GetWidth(t); // 32
For the size, you can use Marshal.SizeOf and multiply by the number of bits in a byte (hint: 8), though for the built-in value types it is probably easy enough and certainly faster to just use a case statement.
For the sign, I'd have thought something like bool sign = x == Math.Abs(x); would do, but that only tests a particular value x, not the type itself.
EDIT:
To determine whether a type is signed, there is no built-in method, but there are only 5 unsigned types to rule out:
using System;
using System.Runtime.InteropServices;
public static class Application
{
public enum SignedEnum : int
{
Foo,
Boo,
Zoo
}
public enum UnSignedEnum : uint
{
Foo,
Boo,
Zoo
}
public static void Main()
{
Console.WriteLine(Marshal.SizeOf(typeof(Int32)) * 8); // 32
Console.WriteLine(5.IsSigned()); // True
Console.WriteLine(((UInt32)5).IsSigned()); // False
Console.WriteLine((SignedEnum.Zoo).IsSigned()); // True
Console.WriteLine((UnSignedEnum.Zoo).IsSigned()); // False
Console.ReadLine();
}
}
public static class NumberHelper
{
public static Boolean IsSigned<T>(this T value) where T : struct
{
return value.GetType().IsSigned();
}
public static Boolean IsSigned(this Type t)
{
return !(
t.Equals(typeof(Byte)) ||
t.Equals(typeof(UIntPtr)) ||
t.Equals(typeof(UInt16)) ||
t.Equals(typeof(UInt32)) ||
t.Equals(typeof(UInt64)) ||
(t.IsEnum && !Enum.GetUnderlyingType(t).IsSigned())
);
}
}
@ChrisShain answers the first part correctly. Assuming you can guarantee that t is a numeric type, you should be able to tell whether it is signed by using expression trees to dynamically read the MaxValue constant field on t, convert it to a bit array, and check whether it uses the sign bit (or just use bit-shift magic to test it without the conversion). I haven't done it this way, but it should be doable. If you want an example, I can work through it.
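Instead of expression trees, a simpler sketch of the same idea is to reflect on the type's MinValue field and check whether it is negative (this assumes t is one of the numeric primitives; char, enums, and user types would need extra handling):
static bool IsSignedNumeric(Type t)
{
    // the numeric primitives all expose a public static MinValue field
    var min = t.GetField("MinValue",
        System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static);
    if (min == null)
        return false; // not a numeric primitive; out of scope for this sketch
    // signed types have a negative MinValue, unsigned ones have 0
    return Convert.ToDouble(min.GetValue(null)) < 0;
}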
Or do it the easy way with a switch statement (or series of ifs) like everyone else does.
In a callback function from a native library, I need to access an array of espeak_EVENT.
The problem is the union in the original C code:
typedef struct {
espeak_EVENT_TYPE type;
unsigned int unique_identifier; // message identifier (or 0 for key or character)
int text_position; // the number of characters from the start of the text
int length; // word length, in characters (for espeakEVENT_WORD)
int audio_position; // the time in mS within the generated speech output data
int sample; // sample id (internal use)
void* user_data; // pointer supplied by the calling program
union {
int number; // used for WORD and SENTENCE events. For PHONEME events this is the phoneme mnemonic.
const char *name; // used for MARK and PLAY events. UTF8 string
} id;
} espeak_EVENT;
I have
[StructLayout(LayoutKind.Explicit)]
public struct espeak_EVENT
{
[System.Runtime.InteropServices.FieldOffset(0)]
public espeak_EVENT_TYPE type;
[System.Runtime.InteropServices.FieldOffset(4)]
public uint unique_identifier; // message identifier (or 0 for key or character)
[System.Runtime.InteropServices.FieldOffset(8)]
public int text_position; // the number of characters from the start of the text
[System.Runtime.InteropServices.FieldOffset(12)]
public int length; // word length, in characters (for espeakEVENT_WORD)
[System.Runtime.InteropServices.FieldOffset(16)]
public int audio_position; // the time in mS within the generated speech output data
[System.Runtime.InteropServices.FieldOffset(20)]
public int sample; // sample id (internal use)
[System.Runtime.InteropServices.FieldOffset(24)]
public IntPtr user_data; // pointer supplied by the calling program
[System.Runtime.InteropServices.FieldOffset(32)]
public int number;
[System.Runtime.InteropServices.FieldOffset(32)]
[System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPStr)]
public string name;
}
And then
public static Int32 SynthCallback(IntPtr wav, Int32 numsamples, IntPtr eventsParameter)
{
if (wav == IntPtr.Zero)
return 0;
int j=0;
while(true)
{
System.IntPtr ptr = new IntPtr(eventsParameter.ToInt64()
    + j * System.Runtime.InteropServices.Marshal.SizeOf(typeof(cEspeak.espeak_EVENT)));
if(ptr == IntPtr.Zero)
Console.WriteLine("NULL");
cEspeak.espeak_EVENT events = (cEspeak.espeak_EVENT) System.Runtime.InteropServices.Marshal.PtrToStructure(ptr, typeof(cEspeak.espeak_EVENT));
if(events.type == cEspeak.espeak_EVENT_TYPE.espeakEVENT_SAMPLERATE)
{
Console.WriteLine("Heureka");
}
break; // note: this break exits the loop after the first event, so the lines below never run
//Console.WriteLine("\t\t header {0}: address={1}: offset={2}\n", j, info.dlpi_phdr, hdr.p_offset);
++j;
}
if(numsamples > 0)
{
byte[] wavbytes = new Byte[numsamples * 2];
System.Runtime.InteropServices.Marshal.Copy(wav, wavbytes, 0, numsamples*2);
bw.Write(wavbytes, 0, numsamples*2);
}
return 0;
}
But it always fails on
cEspeak.espeak_EVENT events = (cEspeak.espeak_EVENT) System.Runtime.InteropServices.Marshal.PtrToStructure(ptr, typeof(cEspeak.espeak_EVENT));
However, when I remove

[System.Runtime.InteropServices.FieldOffset(32)]
[System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPStr)]
public string name;

from espeak_EVENT, it works.
How can I make this work without removing the string in the union ?
I need to access it in the callback function.
Edit:
And by the way, what happens to the field offsets if I run this on x64, where the size of public IntPtr user_data; changes from 32 to 64 bits?
Hm, thinking about it, is FieldOffset(32) even correct?
It seems I mixed up the pointer size when thinking about x64.
That might very well be another bug.
Hm, a union of int and char*: my guess is they never compiled it for x64,
because sizeof(int) is still 32 bits on an x64 Linux system.
Declare name as IntPtr rather than string and then use Marshal.PtrToStringAnsi to get it into a string variable.
I'm ignoring the fact that the string contents are UTF-8. If your text is pure ASCII, that's fine; if not, you need to copy to a byte[] array and then translate from UTF-8 with Encoding.UTF8.GetString.
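A sketch of the struct with that change applied. The union offset here (28) assumes a 32-bit process, where IntPtr is 4 bytes; on x64 the pointer field grows and the union would land at offset 32, as the edit in the question suspects:
[StructLayout(LayoutKind.Explicit)]
public struct espeak_EVENT
{
    [FieldOffset(0)]  public espeak_EVENT_TYPE type;
    [FieldOffset(4)]  public uint unique_identifier;
    [FieldOffset(8)]  public int text_position;
    [FieldOffset(12)] public int length;
    [FieldOffset(16)] public int audio_position;
    [FieldOffset(20)] public int sample;
    [FieldOffset(24)] public IntPtr user_data;
    [FieldOffset(28)] public int number;   // union: WORD and SENTENCE events
    [FieldOffset(28)] public IntPtr name;  // union: MARK and PLAY events; no MarshalAs needed
}
// in the callback, dereference the pointer only for MARK/PLAY events:
// string name = Marshal.PtrToStringAnsi(events.name);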
Now this might sound overly work-intensive, but whenever I'm in a situation where I must access, through a binding, some C structure that uses C-specific features (like unions or void* pointers), I normally write a set of get/set functions in C that convert from/to the target language's native (marshalling) types, expose those through the binding, and keep the structure itself opaque.
Can you first marshal a struct with only espeak_EVENT_TYPE type? Then you could choose one of two structs depending on what you expect to find in the contents of the union: one having only int number and one having only const char *name.
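A sketch of that two-step idea (espeak_EVENT_HEAD is a hypothetical name; the two full variants would mirror the layout from the question, one ending in int number, the other in IntPtr name):
[StructLayout(LayoutKind.Sequential)]
public struct espeak_EVENT_HEAD
{
    public espeak_EVENT_TYPE type; // read this first to pick the right variant
}
// var head = (espeak_EVENT_HEAD)Marshal.PtrToStructure(ptr, typeof(espeak_EVENT_HEAD));
// then re-marshal ptr as the "number" variant or the "name" variant based on head.type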
I have a block of code that I'm trying to convert from an old Qt file into C#, but I'm a little unclear on what is going on in the struct within the union below. I'm not sure what the ':' does... I'm guessing it sets the size, but I could not find any documentation on this. Also, since C# does not have unions, what is the best practice for converting something like this? Thank you.
union uAWord
{
uAWord()
: m_AWord(0) {}
struct sBcdAWord
{
quint32 m_O :8;
quint32 m_S :2;
quint32 m_D :18;
quint32 m_SS :3;
quint32 m_P :1;
};
sBcdAWord m_F;
quint32 m_AWord;
};
These are what are called bit fields. The struct sBcdAWord is a 32-bit word, and each field is a portion of that word, taking 8, 2, 18, 3, and 1 bit respectively.
So the word layout is as below:
Bit0-Bit7   m_O
Bit8-Bit9   m_S
Bit10-Bit27 m_D
Bit28-Bit30 m_SS
Bit31       m_P
How to port this to C# depends on whether you are conceptually porting the code or you need to P/Invoke. In the case of P/Invoke, the best solution is probably to map sBcdAWord as a UInt32 and create some accessor strategy to mask it on reading and writing. If it is a code port, separate properties would be good, unless memory usage is a special concern.
That syntax is used to declare bitfields. The number is the number of bits for that value. See for example http://publib.boulder.ibm.com/infocenter/macxhelp/v6v81/index.jsp?topic=%2Fcom.ibm.vacpp6m.doc%2Flanguage%2Fref%2Fclrc03defbitf.htm
A good conversion to C# depends on the case I guess. As long as you are not too space-conscious, I'd just keep all needed values in parallel in a class.
That initializes m_AWord to 0.
To answer your other question, in C# you'd likely want a struct, and you'd need to use attributes to get union-like behavior out of it.
This particular example could be somewhat like:
[StructLayout(LayoutKind.Explicit)]
struct uAWord {
[FieldOffset(0)]
private uint theWord;
[FieldOffset(0)]
public int m_P;
[FieldOffset(1)]
public int m_S;
[FieldOffset(3)]
public int m_SS;
[FieldOffset(7)]
public int m_O;
[FieldOffset(18)]
public int m_D;
public uAWord(uint theWord){
this.theWord = theWord;
}
}
The LayoutKind.Explicit indicates you will tell it where in memory to map each field, and FieldOffset(int) gives the offset at which each field starts - but note that the offset is measured in bytes, not bits. See this for more details. You'd assign this struct by setting the uint theWord in the constructor; each of the other fields would then read a chunk starting at a different byte address.
Unfortunately, that approach actually isn't correct for bit fields. You'll need to use properties and do some bit masking/shifting to get it right, like this:
struct uAWord {
    private readonly uint theWord;
    public uAWord(uint theWord) { this.theWord = theWord; }
    // each property shifts its bit range down to bit 0, then masks it off
    public uint m_O  { get { return theWord & 0xFF; } }            // bits 0-7
    public uint m_S  { get { return (theWord >> 8) & 0x3; } }      // bits 8-9
    public uint m_D  { get { return (theWord >> 10) & 0x3FFFF; } } // bits 10-27
    public uint m_SS { get { return (theWord >> 28) & 0x7; } }     // bits 28-30
    public uint m_P  { get { return theWord >> 31; } }             // bit 31
}