I'm trying to convert this C printf to C#
printf("%c%c",(x>>8)&0xff,x&0xff);
I've tried something like this:
int x = 65535;
char[] chars = new char[2];
chars[0] = (char)(x >> 8 & 0xFF);
chars[1] = (char)(x & 0xFF);
But I'm getting different results.
I need to write the result to a file, so I'm doing this:
tWriter.Write(chars);
Maybe that is the problem.
Thanks.
In .NET, char variables are stored as unsigned 16-bit (2-byte) numbers ranging in value from 0 through 65535, so use bytes instead:
int x = (int)0xA0FF; // use differing high and low bytes for testing
byte[] bytes = new byte[2];
bytes[0] = (byte)(x >> 8); // high byte
bytes[1] = (byte)(x); // low byte
If you're going to use a BinaryWriter, then just do two writes:
bw.Write((byte)(x>>8));
bw.Write((byte)x);
Keep in mind that you just performed a big-endian write. If this is to be read as a 16-bit integer by something that expects it in little-endian form, swap the writes around.
OK, I got it working using Mitch Wheat's suggestion and changing the TextWriter to a BinaryWriter.
Here is the code:
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(System.IO.File.Open(@"C:\file.ext", System.IO.FileMode.Create));
int x = 65535;
byte[] bytes = new byte[2];
bytes[0] = (byte)(x >> 8);
bytes[1] = (byte)(x);
bw.Write(bytes);
Thanks to everyone.
Especially to Mitch Wheat.
I'm looking for C# method that's equivalent to PHP pack()
I've found a lot of articles about this on Google, but whenever I've tried some of the code, the result was always different from what my PHP code produces. I have no idea why.
Here's my PHP code, which I'd like to translate into C#:
$binaryMagic = pack("n", 0xbabe);
This code should give you the same result:
string hex = "babe";
byte[] bytes = new byte[hex.Length / 2];
for (int i = 0; i < hex.Length; i += 2) {
bytes[i/2] = Convert.ToByte(hex.Substring(i, 2), 16);
}
// Note: 0xBA and 0xBE are not valid UTF-8, so use a single-byte
// encoding such as Latin-1 to get the same binary string PHP produces.
string converted = System.Text.Encoding.GetEncoding("ISO-8859-1").GetString(bytes, 0, bytes.Length);
Console.WriteLine(converted);
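Since pack("n", ...) is simply an unsigned 16-bit value written high byte first (big-endian), a more direct sketch that skips the hex string entirely could look like this; `PackN` is a hypothetical helper name, not part of any library:

```csharp
// Hypothetical helper mimicking PHP's pack("n", ...): emit the value
// as two bytes, high byte first (big-endian unsigned short).
static byte[] PackN(ushort value)
{
    return new byte[] { (byte)(value >> 8), (byte)(value & 0xFF) };
}
```

`PackN(0xbabe)` yields the bytes `{ 0xBA, 0xBE }`, matching what PHP's pack("n", 0xbabe) produces.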
I have written the following C# code to receive 16-bit two's-complement values over UART from an 8-bit microcontroller. I receive the data as bytes, so I combine each pair of bytes into a 16-bit value. My problem is that all of my values come out negative, but in reality some are negative and some are not. Please tell me what is wrong with the following code:
int t = 0;
int bytes = serialPort1.BytesToRead;
byte[] buffer = new byte[bytes];
serialPort1.Read(buffer, 0, bytes);
float[] buffer2 = new float[bytes];
for (int i = 0; t < buffer.Length; i++)
{
buffer2[i] = ~(((buffer[t] << 8) | buffer[t + 1]) - 1);
t = t + 2;
}
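For what it's worth, the usual fix for this kind of problem is to let a cast to short perform the two's-complement interpretation instead of negating by hand, which flips every value. A sketch, assuming the high byte arrives first:

```csharp
// Sketch: combine byte pairs into signed 16-bit values, assuming the
// high byte arrives first in the buffer. The cast to short interprets
// the 16-bit pattern as two's complement, so positive values stay
// positive and negative values come out negative automatically.
static short[] CombinePairs(byte[] buffer)
{
    short[] values = new short[buffer.Length / 2];
    for (int t = 0; t + 1 < buffer.Length; t += 2)
    {
        values[t / 2] = (short)((buffer[t] << 8) | buffer[t + 1]);
    }
    return values;
}
```

For example, the bytes 0xFF 0xFF become -1, while 0x00 0x01 becomes 1.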
The main problem is that I receive a binary number with only 10 bits in use from a SerialPort, so I use this to receive the complete data:
byte[] buf = new byte[2];
serialPort.Read(buf, 0, buf.Length);
BitArray bits = new BitArray(buf);
The original idea for converting the binary data to an int was this:
foreach (bool b in bits)
{
if(b){
binary += "1";
}
else{
binary+= "0";
}
}
decimal = Convert.ToInt32(binary, 2);
decimal = decimal >> 6;
binary is obviously a string. That works, but I need to know if another solution exists. Instead of the previous code, I tried this:
decimal = BitConverter.ToInt16(buf, 0);
But this only reads the first 8 bits, and I need the other 2 missing bits! If I change ToInt16 to ToInt32
decimal = BitConverter.ToInt32(buf, 0);
The program stops with a System.ArgumentException: Destination array was not long enough...
What can I do?
You can just shift the values in the bytes so that they match, and put them together. If I got the use of bits right, that would be:
int value = (buf[0] << 2) | (buf[1] >> 6);
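An equivalent sketch, closer in spirit to the question's `>> 6`, treats the two bytes as one big-endian 16-bit word whose top 10 bits carry the payload (this assumes buf[0] is the high byte, as above):

```csharp
// Assumes buf[0] is the high byte of a 16-bit word whose top 10 bits
// are the payload; shifting right by 6 keeps exactly those 10 bits.
// This is arithmetically identical to (buf[0] << 2) | (buf[1] >> 6).
byte[] buf = { 0xAB, 0xC0 }; // example data in place of serialPort.Read
int value = ((buf[0] << 8) | buf[1]) >> 6;
```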
I'm trying to convert a bit of VC 6.0 C++ code to C#. Specifically, I'm parsing through a binary dat file and I've run into a problem converting this bit of code:
ar.GetFile()->Read(buf,sizeof(int));
memmove(&x,buf,4);
pEBMA->before_after = static_cast<enum EBMA_Reserve>(x);
pEBMA->method = static_cast<enum EBMA_Method>(x >> 4);
Here is some related code.
struct EBMA_Data *pEBMA = &EBMA_data;
typedef CArray<struct EBMA_Data,struct EBMA_Data&> EBMA_data;
enum EBMA_Reserve
{EBMA_DONT_RESERVE,
EBMA_BEFORE,
EBMA_AFTER
};
enum EBMA_Method
{EBMA_CENTER,
EBMA_ALL_MATERIAL,
EBMA_FRACTION,
EBMA_RESERVE
};
struct EBMA_Data
{double reserved;
double fraction;
enum EBMA_Method method : 4;
enum EBMA_Reserve before_after : 4;
};
I've read this thread here Cast int to Enum in C#, but my code isn't giving me the same results as the legacy program.
Here is some of my code in C#:
reserved = reader.ReadDouble();
fraction = reader.ReadDouble();
beforeAfter = (EBMAReserve)Enum.ToObject(typeof(EBMAReserve), x);
method = (EBMAMethod)Enum.ToObject(typeof(EBMAMethod), (x >> 4));
I do have an endianness problem so I am reversing the endianness like so.
public override double ReadDouble()
{
byte[] b = this.ConvertByteArrayToBigEndian(base.ReadBytes(8));
double d = BitConverter.ToDouble(b, 0);
return d;
}
private byte[] ConvertByteArrayToBigEndian(byte[] b)
{
if (BitConverter.IsLittleEndian)
{
Array.Reverse(b);
}
return b;
}
So then I thought that maybe the endianness issue was still throwing me off, so here is another attempt:
byte[] test = reader.ReadBytes(8);
Array.Reverse(test);
int test1 = BitConverter.ToInt32(test, 0);
int test2 = BitConverter.ToInt32(test, 4);
beforeAfter = (EBMAReserve)test1;
method = (EBMAMethod)test2;
I hope I've given enough details about what I'm trying to do.
EDIT:
This is how I solved my issue: apparently the values I needed were stored in the first byte of a 4-byte segment in the binary file. This is in a loop.
byte[] temp = reader.ReadBytes(4);
byte b = temp[0];
res = (EBMAReserve)(b & 0x0f);
meth = (EBMAMethod)(b >> 4);
EDIT: It actually looks like the structure size of EBMA_Data is 17 bytes.
struct EBMA_DATA
{
double reserved; //(8 bytes)
double fraction; //(8 bytes)
enum EBMA_Method method : 4; //(this is packed to 4 bits, not bytes)
enum EBMA_Reserve before_after : 4; //(this too, is packed to 4 bits)
}
so your read code should look something more like this:
EBMA_Data data = new EBMA_Data();
data.reserved = reader.ReadDouble();
data.fraction = reader.ReadDouble();
byte b = reader.ReadByte();
data.method = (EBMAMethod)(b >> 4);
data.before_after = (EBMAReserve)(b & 0x0f);
Not 100% sure, but it looks like the code that does the shift x >> 4 may be the underlying issue that's being overlooked. If EBMAReserve is the lower 4 bits of x and EBMAMethod is the top 4 bits, maybe this code would work?
EBMAReserve res = (EBMAReserve)(x & 0x0f);
EBMAMethod meth = (EBMAMethod)(x >> 4);
I think that's what the : 4 means after the enumerations in the struct: it packs the two enums into the structure as a single byte instead of 2 bytes.
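To make the nibble layout concrete, here is a tiny round-trip sketch (variable names are illustrative, not from the original code):

```csharp
// Pack: method goes in the high nibble, reserve in the low nibble,
// mirroring the C++ bitfield layout described above.
byte packed = (byte)(((int)meth << 4) | ((int)res & 0x0f));
// Unpack: the reverse of the above.
EBMAReserve res2 = (EBMAReserve)(packed & 0x0f);
EBMAMethod meth2 = (EBMAMethod)(packed >> 4);
```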
I have this function in C# to convert a little endian byte array to an integer number:
int LE2INT(byte[] data)
{
return (data[3] << 24) | (data[2] << 16) | (data[1] << 8) | data[0];
}
Now I want to convert it back to little endian. Something like:
byte[] INT2LE(int data)
{
// ...
}
Any idea?
Thanks.
The BitConverter class can be used for this, and it works on both little and big endian systems.
You'll have to keep track of the endianness of your data, though. For communications, for instance, this would be defined in your protocol.
You can then use the BitConverter class to convert a data type into a byte array and vice versa, and then use the IsLittleEndian flag to see if you need to convert it on your system or not.
The IsLittleEndian flag will tell you the endianness of the system, so you can use it as follows:
This is from the MSDN page on the BitConverter class.
int value = 12345678; //your value
//Your value in bytes... in your system's endianness (let's say: little endian)
byte[] bytes = BitConverter.GetBytes(value);
//Then, if we need big endian for our protocol for instance,
//Just check if you need to convert it or not:
if (BitConverter.IsLittleEndian)
Array.Reverse(bytes); //reverse it so we get big endian.
You can find the full article here.
Hope this helps anyone coming here :)
Just reverse it. (Note: an earlier edit claimed this code, like the others, works only on a little-endian machine; that was wrong, since this code returns little-endian by definition.)
byte[] INT2LE(int data)
{
byte[] b = new byte[4];
b[0] = (byte)data;
b[1] = (byte)(((uint)data >> 8) & 0xFF);
b[2] = (byte)(((uint)data >> 16) & 0xFF);
b[3] = (byte)(((uint)data >> 24) & 0xFF);
return b;
}
Just do it in reverse:
byte[] result = new byte[4];
result[3] = (byte)((data >> 24) & 0xff);
result[2] = (byte)((data >> 16) & 0xff);
result[1] = (byte)((data >> 8) & 0xff);
result[0] = (byte)(data & 0xff);
Could you use the BitConverter class? It will only work on little-endian hardware I believe, but it should handle most of the heavy lifting for you.
The following is a contrived example that illustrates the use of the class:
if (BitConverter.IsLittleEndian)
{
int someInteger = 100;
byte[] bytes = BitConverter.GetBytes(someInteger);
int convertedFromBytes = BitConverter.ToInt32(bytes, 0);
}
BitConverter.GetBytes(1000).Reverse<byte>().ToArray();
Depending on what you're actually doing, you could rely on letting the framework handle the details of endianness for you by using IPAddress.HostToNetworkOrder and the corresponding reverse function. Then just use the BitConverter class to go to and from byte arrays.
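A short sketch of that approach:

```csharp
using System.Net;

// HostToNetworkOrder converts to big-endian (network) byte order on
// little-endian hosts and is a no-op on big-endian ones, so the bytes
// below are big-endian regardless of the machine you run this on.
int netValue = IPAddress.HostToNetworkOrder(12345678);
byte[] bytes = BitConverter.GetBytes(netValue);
// Reverse the process when reading the bytes back:
int roundTrip = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(bytes, 0));
// roundTrip == 12345678
```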
Try using BinaryPrimitives in System.Buffers.Binary; it has helper methods for reading and writing all .NET primitives in both little- and big-endian form.
byte[] IntToLittleEndian(int data)
{
var output = new byte[sizeof(int)];
BinaryPrimitives.WriteInt32LittleEndian(output, data);
return output;
}
int LittleEndianToInt(byte[] data)
{
return BinaryPrimitives.ReadInt32LittleEndian(data);
}
public static string decimalToHexLittleEndian(int _iValue, int _iBytes)
{
string sBigEndian = String.Format("{0:x" + (2 * _iBytes).ToString() + "}", _iValue);
string sLittleEndian = "";
for (int i = _iBytes - 1; i >= 0; i--)
{
sLittleEndian += sBigEndian.Substring(i * 2, 2);
}
return sLittleEndian;
}
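A quick usage sketch of the function above, swapping the byte pairs of a 4-byte value:

```csharp
// "0a0b0c0d" with its byte pairs reversed becomes "0d0c0b0a".
string le = decimalToHexLittleEndian(0x0A0B0C0D, 4);
// le == "0d0c0b0a"
```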
You can use this if you don't want to use new heap allocations:
public static void Int32ToFourBytes(Int32 number, out byte b0, out byte b1, out byte b2, out byte b3)
{
b3 = (byte)number;
b2 = (byte)(((uint)number >> 8) & 0xFF);
b1 = (byte)(((uint)number >> 16) & 0xFF);
b0 = (byte)(((uint)number >> 24) & 0xFF);
}