Sending an array of sbytes through a socket in a client-server architecture in C#

I would like to send an array of sbytes. a[2] and a[3] are numbers in the range -100..100.
static void speed_control(Socket sock)
{
    sbyte[] a = new sbyte[5];
    a[0] = Convert.ToSByte('[');
    a[1] = Convert.ToSByte(14);
    a[2] = Convert.ToSByte(Convert.ToInt16(Console.ReadLine()));
    a[3] = Convert.ToSByte(Convert.ToInt16(Console.ReadLine()));
    a[4] = Convert.ToSByte(']');
    sock.Send(a);
}
sock.Send(a) gives me this error: cannot convert from sbyte[] to byte[].
Is there any other simple way to send this kind of data?

Sockets send and receive data in binary representation.
Your -100...100 numbers are not the binary representation, they are the data themselves. So typically, you need to convert your numbers to binary and then send them.
If you don't want to use the standard way and really insist on doing it your way, then you can do this:
Numbers between 0 and 100 can be sent as is. Numbers between -1 and -100 can be converted to numbers between 101 and 200 and then sent. The other side must reverse the calculation. So you'll be using byte, but not as binary data.
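For illustration only, a minimal sketch of that offset encoding (EncodeSpeed and DecodeSpeed are hypothetical helpers made up for this example, not part of any API):

// 0..100 pass through unchanged; -1..-100 are shifted into 101..200.
static byte EncodeSpeed(int value)   // value is expected to be in -100..100
{
    return value >= 0 ? (byte)value : (byte)(100 - value);  // -1 -> 101, -100 -> 200
}

static int DecodeSpeed(byte encoded) // the receiving side reverses the mapping
{
    return encoded <= 100 ? encoded : 100 - encoded;        // 101 -> -1, 200 -> -100
}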
However, in that case your example doesn't make any sense. You seem to be sending characters, so you will never get negative values, and you must use the standard way and just change:
sbyte[] a = new sbyte[5];
to:
byte[] a = new byte[5];
If that example doesn't really represent what you're actually doing, then please update your question and post a better example that clearly shows how you are getting numbers -100...100.

If Socket.Send wants byte[], you have to provide byte[]:
static void speed_control(Socket sock) {
    unchecked { // we don't want an OverflowException to be thrown on (byte)(-100) and the like
        sock.Send(new byte[] {
            (byte) '[',
            14,
            (byte) Convert.ToSByte(Console.ReadLine()),
            (byte) Convert.ToSByte(Console.ReadLine()),
            (byte) ']'
        });
    }
}
Even if the actual range is -100..100, you can use byte instead of sbyte if you just cast:
sbyte s = ...;                  // a value in the range -100..100
byte b = unchecked((byte)s);    // sender side
...
sbyte s = unchecked((sbyte)b);  // receiver side
and let the system use two's complement:
-100 (sbyte) ~ 156 (byte)
-99 ~ 157
...
-1 ~ 255
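A minimal round-trip sketch of that cast approach, using the first value from the table above (variable names are just for illustration):

sbyte original = -100;
byte wire = unchecked((byte)original);    // 156, what actually goes into the byte[] you send
sbyte restored = unchecked((sbyte)wire);  // -100 again on the receiving side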

Related

How do I add bits to a MemoryStream

So I've been trying to add bits of a value to a MemoryStream but the issue is I have no idea how. I've seen that it's used for performance when it comes to networking.
I know I want a function that takes the bit value and how many bits it takes to store that value. So, for instance, to store the value 3 I would need to allocate 2 bits (0000 0000 0000 0011). I would essentially pack the bits into a byte array and then add that byte array to the MemoryStream.
var ms = new MemoryStream();
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
WriteBits(2, 3);
WriteBits(1, 1);
void WriteBits(int numBits, int value)
{
    /* Convert the "value" to a byte or bytes and add it to the MemoryStream */
}
How do I properly implement this?
Java Example
public void writeBits(int numBits, int value) {
    int bytePos = bitPosition >> 3;
    int bitOffset = 8 - (bitPosition & 7);
    bitPosition += numBits;
    for (; numBits > bitOffset; bitOffset = 8) {
        buffer[bytePos] &= ~bitMaskOut[bitOffset]; // mask out the desired area
        buffer[bytePos++] |= (value >> (numBits - bitOffset)) & bitMaskOut[bitOffset];
        numBits -= bitOffset;
    }
    if (numBits == bitOffset) {
        buffer[bytePos] &= ~bitMaskOut[bitOffset];
        buffer[bytePos] |= value & bitMaskOut[bitOffset];
    } else {
        buffer[bytePos] &= ~(bitMaskOut[numBits] << (bitOffset - numBits));
        buffer[bytePos] |= (value & bitMaskOut[numBits]) << (bitOffset - numBits);
    }
}
So I've been trying to add bits of a value to a MemoryStream
You don't; MemoryStream only handles whole bytes.
So for instance, to store the value 3 I would need to allocate 2 bits
This would only be true if the range of values you want to store is [0, 3]. If you want the possibility of storing any larger value you need more bits.
How do I properly implement this?
You would need to implement your own bit stream. The Java example looks like it has a byte[] buffer and a bitPosition; you would need to implement these yourself. The bit-fiddling code should work just about the same in C#. Once you have a byte[], it is trivial to write it out to whatever stream you want, and usually possible to send it directly over the network.
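As an illustration, here is a rough C# sketch of such a bit writer, modeled on the Java code above (the BitWriter class name, the Mask helper and the fixed 4096-byte buffer are assumptions made for this example, not an existing API):

using System.IO;

// Sketch of a bit writer: packs values MSB-first into a byte buffer,
// tracking a running bit position; grow the buffer / add bounds checks for real use.
class BitWriter
{
    private readonly byte[] buffer = new byte[4096];
    private int bitPosition;

    // Mask with the lowest n bits set (n is always 0..8 here).
    private static int Mask(int n) => (1 << n) - 1;

    public void WriteBits(int numBits, int value)
    {
        int bytePos = bitPosition >> 3;          // current byte index
        int bitOffset = 8 - (bitPosition & 7);   // free bits left in that byte
        bitPosition += numBits;

        // The value needs more bits than are free in the current byte:
        // fill that byte, then continue on whole bytes.
        for (; numBits > bitOffset; bitOffset = 8)
        {
            buffer[bytePos] &= (byte)(~Mask(bitOffset) & 0xFF);
            buffer[bytePos++] |= (byte)((value >> (numBits - bitOffset)) & Mask(bitOffset));
            numBits -= bitOffset;
        }

        // Write the remaining bits into the current byte.
        if (numBits == bitOffset)
        {
            buffer[bytePos] &= (byte)(~Mask(bitOffset) & 0xFF);
            buffer[bytePos] |= (byte)(value & Mask(bitOffset));
        }
        else
        {
            buffer[bytePos] &= (byte)(~(Mask(numBits) << (bitOffset - numBits)) & 0xFF);
            buffer[bytePos] |= (byte)((value & Mask(numBits)) << (bitOffset - numBits));
        }
    }

    // Flush the whole bytes written so far into any Stream, e.g. a MemoryStream.
    public void CopyTo(Stream target) => target.Write(buffer, 0, (bitPosition + 7) >> 3);
}

With this sketch, WriteBits(2, 3) followed by WriteBits(1, 1) packs the bits MSB-first into the first byte (1110 0000), and CopyTo then writes the used bytes into your MemoryStream.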
I've seen that it's used for performance when it comes to networking
I think there is a significant misunderstanding here. While you could manually manipulate individual bits, in most cases it would just be a waste of (development) time.
In general, a better way to get good performance is to use existing, well-optimized and well-designed libraries. There are a variety of serialization libraries that convert objects to byte streams for you. An example would be protobuf (for .NET, e.g. protobuf-net), which actually encodes numbers with a variable number of bytes.
If you still need smaller data, it is usually more efficient to use some form of compression. The old classic Deflate usually gives a good compromise between size and performance, while algorithms like LZ4 prioritize speed over compression ratio.
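For a flavor of what "a variable number of bytes" means, here is a minimal sketch of the base-128 varint idea that protobuf uses for integers; it is not protobuf's actual API, and WriteVarint is a name made up for this example:

using System.IO;

// Minimal base-128 "varint" writer: 7 payload bits per byte, low-order group first;
// the high bit marks "more bytes follow", so small numbers take a single byte.
static void WriteVarint(Stream stream, uint value)
{
    while (value >= 0x80)
    {
        stream.WriteByte((byte)((value & 0x7F) | 0x80)); // lower 7 bits + continuation flag
        value >>= 7;
    }
    stream.WriteByte((byte)value);                       // final byte, continuation flag clear
}

For example, WriteVarint(ms, 3) emits the single byte 0x03, while WriteVarint(ms, 300) emits two bytes (0xAC 0x02).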
I had exactly the same problem and wrote an entire BitStream library which can handle any reads and writes of an arbitrary number of bits to a MemoryStream (and any other stream, too). The library is open-source, MIT-licensed and fast (https://github.com/martinweihrauch/BitStream).
Writing bits to a MemoryStream.
These are the steps to write a value with a certain number of bits to a specific position in the MemoryStream:
Have a Stream available, e.g. a MemoryStream(), to which you want to write.
Connect this Stream to a new BitStream:
using SharpBitStream;
uint[] testDataUnsigned = { 5, 62, 17, 50, 33 };
var ms = new MemoryStream();
var bs = new BitStream(ms);
Now, you can start writing to the BitStream like this:
foreach(var bits in testDataUnsigned)
{
bs.WriteUnsigned(6, (ulong)bits);
}
Writing can be done as above by only providing the bit length and the value, but you of course also have full control over exactly where to write the bits, like so:
bs.WriteUnsigned(3, 2, 4, 5);
// Overloaded signature of WriteUnsigned:
// public void WriteUnsigned(long offsetByteStream, int offsetBit, int bitLength, ulong value)
// For signed numbers (e. g. -17), use
// bs.WriteSigned(3, 2, 4, -5);
This means you write to the 4th byte (3, because counting starts at 0) of the underlying byte stream, starting from the 3rd bit position (=2) of that byte, with a length of 4 bits and the value 5 (=0b0101).
Reading works similarly:
Just read the next 6 bits, wherever your byte and bit position is (e. g. for loops, etc):
ulong number = bs.ReadUnsigned(6);
// For Signed, use
// long number = bs.ReadSigned(6);
Read from a specific position; in this example, read 4 bits from the 3rd byte in the Stream (2 = 3rd position), starting with bit #0:
ulong number = bs.ReadUnsigned(2, 0, 4);
// For signed, use
// long number = bs.ReadSigned(2, 0, 4);
Note: The bit offset is always counting from 0 from the left-most position.

Read socket data as hex instead of ASCII

I am receiving data from a CNC machine every 5 seconds. The length of the data is 66 bytes, and every two bytes have a special meaning according to the guide that I have. The device sends the data over a socket to a specific IP and port. I have been told that I should read the data as hex instead of ASCII.
This line of code:
string data = Encoding.ASCII.GetString(data.buffer, 0, 66);
returns this:
"\0\u0004\0\u0001\0\0\0\0\0\0\0\0\0\0\0\0\0\r\0\r\0\0\0\0\0\0:a\u0002#\0?\0`\u001b?\u0015U\0\0\0\0\u0001\u0010\0\u0018\0\0\u000f\a\0\0\0\0\0\0\0\0\0\0\0\0\0\0u/"
and of course it is not useful to me.
I did try to convert the byte array to a hex string with this code:
StringBuilder sb = new StringBuilder();
foreach (byte b in buffer)
    sb.Append(b.ToString("X2"));
string hexString = sb.ToString();
And got the result:
00040001000000000000000000020000000000000000000000003A9D023F00A000601B841555000000000110001800000F070000000000000000000000000000752F
And when I try to convert this result to a string: no success, nothing meaningful.
GOAL
What I am trying to achieve is to read the incoming socket data as hex and use every two bytes as a word to match a value. For example, the first 2 bytes should match either 0 or 1. With what I have, it returns ? (a question mark).
Thank you.
I have been told that I should read the data as hex instead of ascii
My gut feeling is that this statement has been misquoted or misunderstood. There is no value in processing binary data as a hex string representation, just as there is no value in converting it to ASCII. The only sane way to process binary data is in binary, unless you have a meaningful way to convert it.
You mention you need word (2-byte) groupings; you could just convert this to an array of short or ushort, depending on your needs:
var bytes = new byte[66];
var shortArray = new short[bytes.Length / 2];
Buffer.BlockCopy(bytes, 0, shortArray, 0, bytes.Length);
or
for (int i = 0; i < shortArray.Length; i++)
    shortArray[i] = BitConverter.ToInt16(bytes[(i * 2)..(i * 2 + 2)]);
Disclaimer: this is just an example; be very careful about the endianness of your data. There are other ways to do this.
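If the machine sends its words in big-endian (network byte order), a hedged sketch of an endianness-explicit variant looks like this, assuming a .NET version where System.Buffers.Binary.BinaryPrimitives is available (the words variable name is made up here):

using System;
using System.Buffers.Binary;

// Interpret each 2-byte group as a big-endian (network byte order) word,
// independent of the endianness of the machine this runs on.
var words = new ushort[bytes.Length / 2];
for (int i = 0; i < words.Length; i++)
    words[i] = BinaryPrimitives.ReadUInt16BigEndian(bytes.AsSpan(i * 2, 2));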

Calculating bitwise inversion of char

I am trying to reverse engineer a serial port device that uses HDLC for its packet format. Based on the documentation, the packet should contain a bitwise inversion of the command (first 4 bytes), which in this case is "HELO". Monitoring the serial port while using the original program shows what the bitwise inversion should be:
HELO -> b7 ba b3 b0
READ -> ad ba be bb
The problem is, I am not getting values even remotely close.
public object checksum
{
    get
    {
        var cmdDec = (int)Char.GetNumericValue((char)this.cmd);
        return (cmdDec ^ 0xffffffff);
    }
}
You have to work with bytes, not with chars:
string source = "HELO";
// Encoding.ASCII: I assume that the command line has ASCII encoded commands only
byte[] result = Encoding.ASCII
    .GetBytes(source)
    .Select(b => unchecked((byte)~b)) // unchecked: ~b returns int; can exceed byte.MaxValue
    .ToArray();
Test (let's represent the result as hexadecimals)
// b7 ba b3 b0
Console.Write(string.Join(" ", result.Select(b => b.ToString("x2"))));
Char is not a byte. You should use bytes instead of chars.
So this.cmd is an array of bytes? You could use BitConverter.ToUInt32().
PSEUDO (you might need to fix some casting):
public uint checksum
{
    get
    {
        var cmdDec = BitConverter.ToUInt32(this.cmd, 0);
        return (cmdDec ^ 0xffffffff);
    }
}
If this.cmd is a string, you could get a byte array from it with Encoding.UTF8.GetBytes(string).
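Fleshed out, that pseudo code could look roughly like this sketch (assuming the command really is 4 ASCII bytes such as "HELO"; the inverted and check names are made up, and BitConverter follows the machine's byte order, little-endian on most PCs):

using System;
using System.Text;

byte[] cmd = Encoding.ASCII.GetBytes("HELO");               // 48 45 4C 4F
uint inverted = BitConverter.ToUInt32(cmd, 0) ^ 0xFFFFFFFF; // flip every bit
byte[] check = BitConverter.GetBytes(inverted);             // b7 ba b3 b0 on a little-endian machine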
Your bitwise inversion isn't doing what you think it's doing. Take the following, for example:
int i = 5;
var j = i ^ 0xFFFFFFFF;
var k = ~i;
The first example performs the inversion the way you are doing it, by XOR-ing the number with a max value. The second uses the C# bitwise-NOT ~ operator.
After running this code, j will be a long value equal to 4294967290, while k will be an int value equal to -6. Their binary representation will be the same, but j will include another 32 bits of 0's to go along with it. There's also the obvious problem of them being completely different numbers, so any math performed on the values will be completely different depending on what you are using.

Add PPPoE layer to packet - convert length into bytes

I have an application that plays Pcap files, and I am trying to add a function that wraps my packets with a PPPoE layer.
Almost everything is done, except for large packets, where I haven't yet understood how to set the new length after adding the PPPoE layer.
For example, this packet:
As you can see, this packet's length is 972 bytes (03 cc), and all I want is to convert that to decimal. Looking at this packet's byte[] in my code, I can see that this value is converted into 3 and 204 in my packet byte[], so my question is: how does this calculation work?
Those two bytes represent a short (System.Int16) in big-endian notation (most significant byte first).
You can follow two approaches to get the decimal value of those two bytes. One is with the BitConverter class, the other is by doing the calculation yourself.
BitConverter
// the bytes
var bytes = new byte[] {3, 204};
// are the bytes little endian?
var littleEndian = false; // no
// What architecture is the BitConverter running on?
if (BitConverter.IsLittleEndian != littleEndian)
{
// reverse the bytes if endianess mismatch
bytes = bytes.Reverse().ToArray();
}
// convert
var value = BitConverter.ToInt16( bytes , 0);
value.Dump(); // or Console.WriteLine(value); --> 972
Calculate it yourself
Base 256 of two bytes:
// the bytes
var bytes2 = new byte[] {3, 204};
// [0] * 256 + [1]
var value2 = bytes2[0] * 256 + bytes2[1]; // 3 * 256 + 204
value2.Dump(); // 972
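Going the other way, e.g. when you need to write a new length back into the packet after adding the PPPoE header, the same big-endian layout applies; a small sketch (the lengthBytes name is just for illustration):

// Split a length such as 972 back into two big-endian bytes: 0x03 0xCC.
int length = 972;
var lengthBytes = new byte[2];
lengthBytes[0] = (byte)(length >> 8);   // high byte: 972 / 256 = 3
lengthBytes[1] = (byte)(length & 0xFF); // low byte : 972 % 256 = 204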

C# integer masking into byte array

I'm confused as to why this isn't working; can someone please provide some insight?
I have a function that takes in an integer value, and I would like to store the upper two hex digits (the high byte) of that value into a byte array element.
Let's say Distance is 24135 in decimal, or 0x5E47 in hex.
public ConfigureReportOptionsMessageData(int Distance, int DistanceCheckTime)
{
    ...
    this._data = new byte[9];
    this._data[0] = (byte)(Distance & 0x00FF); // shows 47
    this._data[1] = (byte)(Distance & 0xFF00); // shows 00
    this._data[2] = (byte)(DistanceCheckTime & 0xFF);
    ...
}
this._data[1] = (byte)(Distance >> 8);
?
This seems like you should be using BitConverter.GetBytes - it will provide a much simpler option.
The reason you get 0 for _data[1] is that the upper 3 bytes are lost when you cast to byte.
Your intermediate result looks like this:
Distance & 0xff00 = 0x00005e00;
When this is converted to a byte, you only retain the low order byte:
(byte)0x00005e00 = 0x00;
You need to shift by 8 bits:
0x00005e00 >> 8 = 0x0000005e;
before you cast to byte and assign it to _data[1].
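Alternatively, a quick sketch of the BitConverter.GetBytes route mentioned above (byte order follows the machine, little-endian on most PCs, so index 0 is the low byte; distance and parts are just illustrative names):

int distance = 0x5E47;                          // 24135
byte[] parts = BitConverter.GetBytes(distance); // { 0x47, 0x5E, 0x00, 0x00 } on little-endian
this._data[0] = parts[0];                       // 0x47
this._data[1] = parts[1];                       // 0x5E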
