Byte conversion from 32 bit to 8 bit - C#

Hi, I want to send this hex value, but I am getting an error. I am sending byte values, and the error is: **Constant value cannot be converted to a byte.**
class constant {
public const byte MESSAGE_START = 0x1020; // command hex
}
public override IEnumerable<byte> ToBytes()
{
yield return constant.MESSAGE_START ;
}
Hi, there is another twist I am facing. With your help I passed the hex value, and I expected the decimal equivalent (4128) when it goes through the method below, but the value I get is 16.
protected override void TransmitCommand(Device device, Command command)
{
int count = 0;
foreach (var b in command.ToBytes())
transmitBuffer[count++] = b; // value I get is 16, not the decimal value 4128
}

As I said in the comment, the value MESSAGE_START is a short, not a byte (0x1020 does not fit in a single byte). Try this:
class constant {
public const short MESSAGE_START = 0x1020; // command hex
}
public override IEnumerable<byte> ToBytes()
{
yield return (byte) (constant.MESSAGE_START >> 8);
yield return (byte) (constant.MESSAGE_START & 0xff);
}
The above code assumes that the byte representation of the value is in network byte order (most significant byte first). For least-significant-byte-first order, do:
public override IEnumerable<byte> ToBytes()
{
yield return (byte) (constant.MESSAGE_START & 0xFF);
yield return (byte) (constant.MESSAGE_START >> 8);
}
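For the follow-up about seeing 16 rather than 4128: each element of the transmit buffer holds a single byte, so index 0 ends up with 0x10 (decimal 16) and index 1 with 0x20 (decimal 32). A minimal sketch of recombining them on the receiving side, assuming the MSB-first order used above (the local transmitBuffer array here is only for illustration):
byte[] transmitBuffer = { 0x10, 0x20 };                        // what ToBytes() produced, MSB first
int messageStart = (transmitBuffer[0] << 8) | transmitBuffer[1];
Console.WriteLine(messageStart);                               // prints 4128 (0x1020)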

Related

C# Convert int to short and then to bytes and back to int

I'm trying to convert an int to a short and then to a byte[], but I'm getting wrong values: I pass in 1 and get 256 back. What am I doing wrong?
this is the code:
//passing 1
int i = 1;
byte[] shortBytes = ShortAsByte((short)i);
//ii is 256
short ii = Connection.BytesToShort(shortBytes[0], shortBytes[1]);
public static byte[] ShortAsByte(short shortValue){
byte[] intBytes = BitConverter.GetBytes(shortValue);
if (BitConverter.IsLittleEndian) Array.Reverse(intBytes);
return intBytes;
}
public static short BytesToShort(byte byte1, byte byte2)
{
return (short)((byte2 << 8) + byte1);
}
The method ShortAsByte puts the most significant byte at index 0 and the least significant byte at index 1, so the BytesToShort method is shifting the 1 instead of the 0. This means BytesToShort returns 256 ((1 << 8) + 0 = 256) instead of 1 ((0 << 8) + 1 = 1).
Swap the byte variables in the return statement to get the correct result.
public static short BytesToShort(byte byte1, byte byte2)
{
return (short)((byte1 << 8) + byte2);
}
Also, props to you for taking endian-ness into consideration!
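With that swap in place the round trip comes back out correctly (a minimal sketch using the ShortAsByte and corrected BytesToShort shown above as plain static methods):
int i = 1;
byte[] shortBytes = ShortAsByte((short)i);              // { 0x00, 0x01 } after the endian reverse
short ii = BytesToShort(shortBytes[0], shortBytes[1]);  // (0 << 8) + 1
Console.WriteLine(ii);                                  // prints 1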

Converting String from SerialPort, converting to byte array, then being able to identify each byte individually

I have a byte array of 8 bytes coming in through the SerialPort. Each byte within the array means something different, so I am looking for a way to label each byte so it can be interrogated later on in the program. I know the code below is not right, but I need to be able to interrogate each byte from byte0 up to byte7.
For example:
rxString = mySerialPort.ReadExisting();
byte[] bytes = Encoding.ASCII.GetBytes(rxString);
if (bytes.SequenceEqual(new byte[] { (byte0) = 0x95 }))
{
tb_Status.AppendText("Correct Sequence");
}
else
{
tb_Status.AppendText("Incorrect Sequence!!!");
}
Thanks
You should simply read the bytes into an array and access them by index (0 to 7, as you said). If specific bytes have a special meaning, encapsulate the whole thing in a class and provide named access to the array through properties like:
public short MyFancyData {
get {
return (short)(bytes[2] + (bytes[3] << 8));
}
}
public byte MyLessFancyData {
get {
return bytes[7];
}
}
public bool IsCorrect {
get {
return bytes[0] == 0x95;
}
}
// etc.
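A minimal sketch of what such a class could look like, assuming an 8-byte frame and reading the raw bytes with SerialPort.Read instead of going through a string (the DeviceFrame name and ReadFrom method are made up for illustration; requires System.IO.Ports):
public class DeviceFrame
{
    private readonly byte[] bytes = new byte[8];

    public void ReadFrom(SerialPort port)
    {
        int read = 0;
        while (read < bytes.Length)                           // Read may return fewer bytes than requested
            read += port.Read(bytes, read, bytes.Length - read);
    }

    public bool IsCorrect { get { return bytes[0] == 0x95; } }
    public byte MyLessFancyData { get { return bytes[7]; } }
}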
Is this getting close?
rxString = mySerialPort.ReadExisting();
byte[] bytes = Encoding.ASCII.GetBytes(rxString);
var a = bytes[0];
var b = bytes[1];
if (a == 0x74)
{
tb_Status.AppendText("This is Good");
}

How to represent bits in a structure in C#

I have a byte that represents two values.
The first bit represents the sequence number.
The rest of the bits represent the actual content.
In C, I could easily parse this out by the following:
typedef struct
{
byte seqNumber : 1;
byte content : 7;
}
MyPacket;
Then I can easily cast the input to MyPacket:
char* inputByte = "U"; // binary 01010101
MyPacket * myPacket = (MyPacket*)inputByte;
Then
myPacket->seqNumber = 1
myPacket->content = 42
How can I do the same thing in C#?
Thank you
kab
I would just use properties. Make getters and setters for the two parts that modify the appropriate bits in the true representation.
class myPacket {
public byte packed = 0;
public int seqNumber {
get { return packed >> 7; }
set { packed = (byte)(value << 7 | packed & ~(1 << 7)); }
}
public int content {
get { return packed & ~(1 << 7); }
set { packed = (byte)(packed & (1 << 7) | value & ~(1 << 7)); }
}
}
C# likes to keep its types simple, so I am betting this is the closest you will get. Obviously it does not give you the performance benefit of the C bit fields, but it preserves the meaning.
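Note that the C example in the question yields seqNumber = 1 and content = 42 for "U" (0x55), i.e. the sequence number sits in the least significant bit. If that is the mapping you need, a minimal read-only sketch (the type name here is made up):
class PacketLsbFirst
{
    public byte Packed;

    // Bit 0 holds the sequence number, bits 1-7 hold the content,
    // matching the layout the question expects from the C bit fields.
    public int SeqNumber { get { return Packed & 1; } }
    public int Content { get { return Packed >> 1; } }
}
// With Packed = 0x55 (binary 01010101): SeqNumber == 1, Content == 42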

Byte formatting

Hello, I'm new to using bytes in C#.
Say I want to compare bytes based on the forms 0xxxxxxx and 1xxxxxxx. How would I get that first bit for my comparison and at the same time remove it from the front?
Any help will be greatly appreciated.
Not sure I understand, but in C#, to write the binary number 1000 0000 you would typically use hex notation (0x80); C# 7.0 and later also allow binary literals like 0b1000_0000. So to check whether the left-most (most significant) bits of two bytes match, you can do e.g.
byte a = ...;
byte b = ...;
if ((a & 0x80) == (b & 0x80))
{
// match
}
else
{
// opposite
}
This uses bit-wise AND. To clear the most significant bit, you may use:
byte aModified = (byte)(a & 0x7f);
or if you want to assign back to a again:
a &= 0x7f;
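For example, with two arbitrary concrete values:
byte a = 0x9C;                                   // 1001 1100 - top bit set
byte b = 0x1C;                                   // 0001 1100 - top bit clear
bool topBitsMatch = (a & 0x80) == (b & 0x80);    // false
byte aModified = (byte)(a & 0x7f);               // 0x1C - top bit cleared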
You need to use bitwise operations, for example a mask such as a & 0x80 and a shift such as a << 1.
This will compare two bytes bit by bit. If a bit is the same in both, it clears that bit in both:
static void Main(string[] args)
{
byte byte1 = 255;
byte byte2 = 255;
for (var i = 0; i <= 7; i++)
{
if ((byte1 & (1 << i)) == (byte2 & (1 << i)))
{
// position i in byte1 is the same as position i in byte2
// clear bit that is the same in both numbers
ClearBit(ref byte1, i);
ClearBit(ref byte2, i);
}
else
{
// if not the same.. do something here
}
Console.WriteLine(Convert.ToString(byte1, 2).PadLeft(8, '0'));
}
Console.ReadKey();
}
private static void ClearBit(ref byte value, int position)
{
value = (byte)(value & ~(1 << position));
}

C# Language: Changing the First Four Bits in a Byte

In order to utilize a byte to its fullest potential, I'm attempting to store two unique values into a byte: one in the first four bits and another in the second four bits. However, I've found that, while this practice allows for optimized memory allocation, it makes changing the individual values stored in the byte difficult.
In my code, I want to change the first set of four bits in a byte while maintaining the value of the second four bits in the same byte. While bitwise operations allow me to easily retrieve and manipulate the first four bits, I'm finding it difficult to combine the new value with the second set of four bits. The question is: how can I erase the first four bits of a byte (or, more accurately, set them all to zero) and put the new set of four bits in their place, preserving the last four bits of the byte while changing the first four?
Here's an example:
// Changes the first four bits in a byte to the parameter value
public void changeFirstFourBits(byte newFirstFour)
{
// If 'newFirstFour' is 0101 in binary, make 'value' 01011111 in binary, changing
// the first four bits but leaving the second four alone.
}
private byte value = 255; // binary: 11111111
Use bitwise AND (&) to clear out the old bits, shift the new bits to the correct position and bitwise OR (|) them together:
value = (byte)((value & 0xF) | (newFirstFour << 4));
Here's what happens:
value : abcdefgh
newFirstFour : 0000xyzw
0xF : 00001111
value & 0xF : 0000efgh
newFirstFour << 4 : xyzw0000
(value & 0xF) | (newFirstFour << 4) : xyzwefgh
When I have to do bit-twiddling like this, I make a readonly struct to do it for me. A four-bit integer is called a nybble, of course:
struct TwoNybbles
{
private readonly byte b;
public byte High { get { return (byte)(b >> 4); } }
public byte Low { get { return (byte)(b & 0x0F); } }
public TwoNybbles(byte high, byte low)
{
this.b = (byte)((high << 4) | (low & 0x0F));
}
}
And then add implicit conversions between TwoNybbles and byte. Now you can just treat any byte as having a High and Low nybble without putting all that ugly bit twiddling in your mainline code.
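Those conversions could look something like this (a minimal sketch that repeats the struct so it stands alone; the operator bodies are an assumption, not part of the original answer):
struct TwoNybbles
{
    private readonly byte b;

    public byte High { get { return (byte)(b >> 4); } }
    public byte Low { get { return (byte)(b & 0x0F); } }

    public TwoNybbles(byte high, byte low)
    {
        this.b = (byte)((high << 4) | (low & 0x0F));
    }

    // Pack/unpack implicitly so a TwoNybbles can be used wherever a byte is expected.
    public static implicit operator byte(TwoNybbles t) { return t.b; }
    public static implicit operator TwoNybbles(byte value) { return new TwoNybbles((byte)(value >> 4), (byte)(value & 0x0F)); }
}
// Usage: TwoNybbles n = (byte)0xAF;          -> n.High == 0xA, n.Low == 0xF
//        byte packed = new TwoNybbles(5, 3); -> packed == 0x53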
You first mask out the high four bits using value & 0xF. Then you shift the new bits into the high four positions using newFirstFour << 4, and finally you combine them with a bitwise OR.
public void changeHighFourBits(byte newHighFour)
{
value = (byte)((value & 0x0F) | (newHighFour << 4));
}
public void changeLowFourBits(byte newLowFour)
{
value = (byte)((value & 0xF0) | newLowFour);
}
I'm not really sure what your method there is supposed to do, but here are some methods for you:
void setHigh(ref byte b, byte val) {
b = (byte)((b & 0xf) | (val << 4));
}
byte high(byte b) {
return (byte)((b & 0xf0) >> 4);
}
void setLow(ref byte b, byte val) {
b = (byte)((b & 0xf0) | val);
}
byte low(byte b) {
return (byte)(b & 0xf);
}
Should be self-explanatory.
public int SplatBit(int Reg, int Val, int ValLen, int Pos)
{
int mask = ((1 << ValLen) - 1) << Pos;
int newv = Val << Pos;
int res = (Reg & ~mask) | newv;
return res;
}
Example:
Reg = 135
Val = 9 (ValLen = 4, because 9 = 1001)
Pos = 2
135 = 10000111
9 = 1001
9 << Pos = 100100
Result = 10100111
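That worked example corresponds to a call like this (167 is 10100111 in binary):
int result = SplatBit(135, 9, 4, 2);   // result == 167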
A quick look would indicate that a bitwise AND can be achieved using the & operator. So to clear the first four bits you should be able to do:
byte value1 = 255; // 11111111
byte value2 = 15;  // 00001111
return (byte)(value1 & value2);
Assuming newVal contains the value you want to store in origVal.
Do this for the 4 least significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF0) + newVal);
and this for the 4 most significant bits:
byte origVal = ???;
byte newVal = ???;
origVal = (byte)((origVal & 0xF) + (newVal << 4));
I know you asked specifically about clearing out the first four bits, which has been answered several times, but I wanted to point out that if you have two values <= decimal 15, you can combine them into 8 bits simply with this:
public int setBits(int upperFour, int lowerFour)
{
return upperFour << 4 | lowerFour;
}
The result will be xxxxyyyy where
xxxx = upperFour
yyyy = lowerFour
And that is what you seem to be trying to do.
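For example, using the setBits method above (the cast is only needed because it returns an int):
byte combined = (byte)setBits(5, 8);   // 0101 1000 = 0x58
int upper = combined >> 4;             // 5
int lower = combined & 0xF;            // 8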
Here's some code, but I think the earlier answers will do it for you. This is just some test code to copy and paste into a simple console project (the WriteBits method may be of help):
static void Main(string[] args)
{
int b1 = 255;
WriteBits(b1);
int b2 = b1 >> 4;
WriteBits(b2);
int b3 = b1 & ~0xF ;
WriteBits(b3);
// Store 5 in first nibble
int b4 = 5 << 4;
WriteBits(b4);
// Store 8 in second nibble
int b5 = 8;
WriteBits(b5);
// Store 5 and 8 in first and second nibbles
int b6 = 0;
b6 |= (5 << 4) + 8;
WriteBits(b6);
// Store 2 and 4
int b7 = 0;
b7 = StoreFirstNibble(2, b7);
b7 = StoreSecondNibble(4, b7);
WriteBits(b7);
// Read First Nibble
int first = ReadFirstNibble(b7);
WriteBits(first);
// Read Second Nibble
int second = ReadSecondNibble(b7);
WriteBits(second);
}
static int ReadFirstNibble(int storage)
{
return storage >> 4;
}
static int ReadSecondNibble(int storage)
{
return storage &= 0xF;
}
static int StoreFirstNibble(int val, int storage)
{
return storage |= (val << 4);
}
static int StoreSecondNibble(int val, int storage)
{
return storage |= val;
}
static void WriteBits(int b)
{
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(b),0));
}
