I currently have the following:
N.Sockets.UdpClient UD;
UD = new N.Sockets.UdpClient();
UD.Connect("xxx.xxx.xxx.xxx", 5255);
UD.Send( data, data.Length );
How would I send data in hex? I cannot just save it straight into a Byte array.
Hex is just an encoding. It's simply a way of representing a number. The computer works with bits and bytes only -- it has no notion of "hex".
So any number, whether represented in hex or decimal or binary, can be encoded into a series of bytes:
var data = new byte[] { 0xFF };
And any hex string can be converted into a number (using, e.g., int.Parse with NumberStyles.HexNumber, or Convert.ToInt32 with base 16).
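For instance (a small sketch; both calls are standard .NET APIs):

using System;
using System.Globalization;

// Parse the hex string "FF" into a number:
int value = int.Parse("FF", NumberStyles.HexNumber);   // 255
// Or equivalently:
int value2 = Convert.ToInt32("FF", 16);                // 255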
Things get more interesting when a number exceeds one byte: Then there has to be an agreement of how many bytes will be used to represent the number, and the order they should be in.
In C#, ints are 4 bytes. Internally, depending on the endianness of the CPU, the most significant byte (highest-valued digits) might be stored first (big-endian) or last (little-endian). Typically, big-endian is used as the standard for communication over the network (remember the sender and receiver might have CPUs with different endianness). But, since you are sending the raw bytes manually, I'll assume you are also reading the raw bytes manually on the other end; if that's the case, you are of course free to use any arbitrary format you like, providing that the client can understand that format unambiguously.
To encode an int in big-endian order, you can do:
int num = unchecked((int)0xdeadbeef); // cast needed: 0xdeadbeef is a uint literal
var unum = (uint)num; // convert to uint so >> doesn't sign-extend negative numbers
var data = new[] {
    (byte)(unum >> 24),
    (byte)(unum >> 16),
    (byte)(unum >> 8),
    (byte)unum
};
Be aware that some packets might never reach the client (this is the main practical difference between TCP and UDP), possibly leading to misinterpretation of the bytes. You should take steps to improve the robustness of your message-sending (e.g. by adding a checksum, and ignoring values whose checksums are invalid or missing).
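For illustration only, here is a minimal (hypothetical) framing sketch that appends a one-byte XOR checksum to the payload; a real protocol would more likely use a CRC:

static byte[] WithChecksum(byte[] payload)
{
    byte checksum = 0;
    foreach (byte b in payload)
        checksum ^= b;                      // XOR all payload bytes together
    var framed = new byte[payload.Length + 1];
    payload.CopyTo(framed, 0);
    framed[payload.Length] = checksum;      // checksum travels as the last byte
    return framed;
}

static bool HasValidChecksum(byte[] framed)
{
    if (framed.Length < 2) return false;
    byte checksum = 0;
    for (int i = 0; i < framed.Length - 1; i++)
        checksum ^= framed[i];
    return checksum == framed[framed.Length - 1];
}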
Related
Background
First of all, I have some hexadecimal data... 0x3AD3FFD6. I have chosen to represent this data as an array of bytes as follows:
byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };
I attempt to convert this array of bytes into its single-precision floating point value by executing the following code:
float floatNumber = 0;
floatNumber = BitConverter.ToSingle(numBytes, 0);
I have calculated this online using this IEEE 754 Converter and got the following result:
0.0016174268
I would expect the output of the C# code to produce the same thing, but instead I am getting something like...
-1.406E+14
Question
Can anybody explain what is going on here?
The bytes are in the wrong order. BitConverter uses the endianness of the underlying system (the computer architecture), so make sure you always feed it bytes in the endianness it expects.
Quick Answer: You've got the order of the bytes in your numBytes array backwards.
Since you're programming in C#, I assume you are running on an Intel processor, and Intel processors are little-endian; that is, they store (and expect) the least significant bytes first. In your numBytes array you are putting the most significant byte first.
BitConverter doesn't so much convert byte array data as interpret it as another base data type. Think of physical memory holding a byte array:
b0 | b1 | b2 | b3.
To interpret that byte array as a single precision float, one must know the endianness of the machine, i.e. whether the LSByte is stored first or last. It may seem natural that the LSByte comes last because many of us read numbers that way, but for little-endian (Intel) processors, that's incorrect.
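A small sketch of the fix, using the bytes from the question: reverse the array on a little-endian machine so the big-endian input matches what BitConverter expects.

byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };            // big-endian bytes of 0x3AD3FFD6
if (BitConverter.IsLittleEndian)
    Array.Reverse(numBytes);                             // reorder to match the machine
float floatNumber = BitConverter.ToSingle(numBytes, 0);  // 0.0016174268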
I'm trying to convert a byte array into a hexadecimal string using the BitConverter class.
long hexValue = 0X780B13436587;
byte[] byteArray = BitConverter.GetBytes ( hexValue );
string hexResult = BitConverter.ToString ( byteArray );
now if I execute the above code line by line, this is what I see
I thought the hexResult string would be the same as hexValue (i.e. 780B13436587h), but what I get is different. Am I missing something? Correct me if I'm wrong.
Thanks!
Endianness.
BitConverter uses CPU-endianness, which for most people means: little-endian. When humans write numbers, we tend to write big-endian (broadly speaking: you write the thousands, then hundreds, then tens, then the digits). For a CPU, big-endian means that the most-significant byte is first and the least-significant byte is last. However, unless you're using an Itanium, your CPU is probably little-endian, which means that the most-significant byte is last, and the least-significant byte is first. The CPU is implemented such that this doesn't matter unless you are peeking inside raw memory - it will ensure that numeric and binary arithmetic still works the way you expect. However, BitConverter works by peeking inside raw memory - hence you see the reversed data.
If you want the value in big-endian format, then you'll need to:
do it manually in big-endian order
check the BitConverter.IsLittleEndian value, and if true:
either reverse the input bytes
or reverse the output
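For example, reversing the input bytes (a sketch using the value from the question):

long hexValue = 0x780B13436587;
byte[] byteArray = BitConverter.GetBytes(hexValue);
if (BitConverter.IsLittleEndian)
    Array.Reverse(byteArray);                          // most significant byte first
string hexResult = BitConverter.ToString(byteArray);   // "00-00-78-0B-13-43-65-87"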
If you look closely, the bytes in the output from BitConverter are reversed.
To get the hex-string for a number, you use the Convert class:
Convert.ToString(hexValue, 16);
It is the same number but reversed.
BitConverter.ToString can return the string representation in reversed byte order:
http://msdn.microsoft.com/en-us/library/3a733s97(v=vs.110).aspx
"All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian."
I want to create an ASCII string which will have a number of fields. For example:
string s = f1 + "|" + f2 + "|" + f3;
f1, f2, f3 are fields and "|" (pipe) is the delimiter. I want to avoid this delimiter and keep the field lengths at the beginning, like:
string s = f1.Length.ToString("00") + f2.Length.ToString("00") + f3.Length.ToString("00") + f1 + f2 + f3;
All lengths are going to be packed in 2 chars, max length = 00-99 in this case. I was wondering if I can pack the length of each field in 2 bytes by extracting bytes out of a short. This would allow me to have a range of 0-65535 using only 2 bytes. E.g.
short length = 20005;
byte b1 = (byte)length;
byte b2 = (byte)(length >> 8);
// Save bytes b1 and b2
// Read bytes b1 and b2
short length = 0;
length = b2;
length = (short)(length << 8);
length = (short)(length | b1);
// Now length is 20005
What do you think about the above code, Is this a good way to keep the record lengths?
I cannot see what you are trying to achieve. short aka Int16 is 2 bytes - yes, so you can happily use it. But creating a string does not make sense.
ushort sh = 56100; // 2 bytes (ushort gives the full 0-65535 range)
I believe you mean being able to output the short to a stream. For this there are ways:
BinaryWriter.Write(sh) which writes 2 bytes straight to the stream
BitConverter.GetBytes(sh) which gives you bytes of a short
Reading back you can use the same classes.
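A minimal round-trip sketch (assuming a MemoryStream just for illustration):

using System;
using System.IO;

var stream = new MemoryStream();
short length = 20005;

var writer = new BinaryWriter(stream);
writer.Write(length);                     // writes exactly 2 bytes (little-endian)
writer.Flush();

stream.Position = 0;
var reader = new BinaryReader(stream);
short readBack = reader.ReadInt16();      // 20005 again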
If you want ascii, i.e. "00" as characters, then just:
byte[] bytes = Encoding.ASCII.GetBytes(length.ToString("00"));
or you could optimise it if you want.
But IMO, if you are storing 0-99, 1 byte is plenty:
byte b = (byte)length;
If you want the range 0-65535, then just:
bytes[0] = (byte)length;
bytes[1] = (byte)(length >> 8);
or swap index 0 and 1 for endianness.
But if you are using the full range (of either single or double byte), then it isn't ascii nor a string. Anything that tries to read it as a string might fail.
Whether it's a good idea depends on the details of what it's for, but it's not likely to be good.
If you do this then you're no longer creating an "ASCII string". Those were your words, but maybe you don't really care whether it's ASCII.
You will sometimes get bytes with a value of 0 in your "string". If you're handling the strings with anything written in C, this is likely to cause trouble. You'll also get all sorts of other characters -- newlines, tabs, commas, etc. -- that may confuse software that's trying to work with your data.
The original plan of separating with (say) | characters will be more compact and easier for humans and software to read. The only obvious downsides are (1) you can't allow field values with a | in (or else you need some sort of escaping) and (2) parsing will be marginally slower.
If you want to get clever, you could pack the value into 1 byte when it is <= 127, and use 2 bytes when it is >= 128. This technique loses you 1 bit per byte that you use, but if you normally have small values and only occasionally larger ones, it dynamically grows to accommodate the value.
All you need to do is mark bit 8 with a value indicating that the 2nd byte is required to be read.
If bit 8 of the active byte is not set, it means you have completed your value.
EG
If you have a value of 4 then you use this
|8|7|6|5|4|3|2|1|
|0|0|0|0|0|1|0|0|
If you have a value of 128, you read the 1st byte and check whether bit 8 is high; if it is, you keep the remaining 7 bits of the 1st byte, then do the same with the 2nd byte, shifting its 7 bits left by 7 bits before combining them.
|BYTE 0 |BYTE 1 |
|8|7|6|5|4|3|2|1|8|7|6|5|4|3|2|1|
|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|
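A sketch of this scheme in C# (it is essentially a "varint"; the method names here are just for illustration):

using System.Collections.Generic;

static byte[] EncodeLength(uint value)
{
    var bytes = new List<byte>();
    while (value >= 0x80)
    {
        bytes.Add((byte)(0x80 | (value & 0x7F)));   // 7 data bits, bit 8 set = "more follows"
        value >>= 7;
    }
    bytes.Add((byte)value);                         // final byte, bit 8 clear = "done"
    return bytes.ToArray();
}

static uint DecodeLength(byte[] bytes)
{
    uint value = 0;
    int shift = 0;
    foreach (byte b in bytes)
    {
        value |= (uint)(b & 0x7F) << shift;         // take 7 bits, move them into place
        if ((b & 0x80) == 0) break;                 // bit 8 clear: the value is complete
        shift += 7;
    }
    return value;
}

// EncodeLength(4)   -> { 0x04 }         (the first diagram)
// EncodeLength(128) -> { 0x80, 0x01 }   (the second diagram)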
I need to send an integer through a NetworkStream. The problem is that I can only send bytes.
That's why I need to split the integer into four bytes, send those, and convert them back to an int at the other end.
For now I need this only in C#. But for the final project I will need to convert the four bytes to an int in Lua.
[EDIT]
How about in Lua?
BitConverter is the easiest way, but if you want to control the order of the bytes you can do bit shifting yourself.
int foo = int.MaxValue;
byte lolo = (byte)(foo & 0xff);
byte hilo = (byte)((foo >> 8) & 0xff);
byte lohi = (byte)((foo >> 16) & 0xff);
byte hihi = (byte)(foo >> 24);
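Reassembling the int on the receiving end is the mirror image (assuming the bytes arrive in the same order they were produced above):

int bar = (hihi << 24) | (lohi << 16) | (hilo << 8) | lolo;   // bar == foo again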
Also, the implementation of BitConverter uses unsafe and pointers, but it's short and simple.
public static unsafe byte[] GetBytes(int value)
{
    byte[] buffer = new byte[4];
    fixed (byte* numRef = buffer)
    {
        *((int*) numRef) = value;
    }
    return buffer;
}
Try
BitConverter.GetBytes()
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
Just keep in mind that the order of the bytes in returned array depends on the endianness of your system.
EDIT:
As for the Lua part, I don't know how to convert back. You could always multiply by 256 to get the same effect as a bitwise left shift by 8. It's not pretty and I would imagine there is some library or something that implements it. Again, the order to add the bytes in depends on the endianness, so you might want to read up on that.
Maybe you can convert back in C#?
For Lua, check out Roberto's struct library. (Roberto is one of the authors of Lua.) It is more general than needed for the specific case in question, but it isn't unlikely that the need to interchange an int is shortly followed by the need to interchange other simple types or larger structures.
Assuming native byte order is acceptable at both ends (which is likely a bad assumption, incidentally) then you can convert a number to a 4-byte integer with:
buffer = struct.pack("l", value)
and back again with:
value = struct.unpack("l", buffer)
In both cases, buffer is a Lua string containing the bytes. If you need to access the individual byte values from Lua, string.byte is your friend.
To specify the byte order of the packed data, change the format from "l" to "<l" for little-endian or ">l" for big-endian.
The struct module is implemented in C, and must be compiled to a DLL or equivalent for your platform before it can be used by Lua. That said, it is included in the Lua for Windows batteries-included installation package that is a popular way to install Lua on Windows systems.
Here are some functions in Lua for converting a 32-bit two's complement number into bytes and converting four bytes into a 32-bit two's complement number. A lot more checking could/should be done to verify that the incoming parameters are valid.
-- convert a 32-bit two's complement integer into four bytes (network order)
function int_to_bytes(n)
    if n > 2147483647 then error(n.." is too large", 2) end
    if n < -2147483648 then error(n.." is too small", 2) end
    -- adjust for 2's complement
    n = (n < 0) and (4294967296 + n) or n
    return (math.modf(n/16777216))%256, (math.modf(n/65536))%256, (math.modf(n/256))%256, n%256
end

-- convert bytes (network order) to a 32-bit two's complement integer
function bytes_to_int(b1, b2, b3, b4)
    if not b4 then error("need four bytes to convert to int", 2) end
    local n = b1*16777216 + b2*65536 + b3*256 + b4
    n = (n > 2147483647) and (n - 4294967296) or n
    return n
end
print(int_to_bytes(256)) --> 0 0 1 0
print(int_to_bytes(-10)) --> 255 255 255 246
print(bytes_to_int(255,255,255,246)) --> -10
investigate the BinaryWriter/BinaryReader classes
Convert an int to a byte array and display : BitConverter ...
www.java2s.com/Tutorial/CSharp/0280__Development/Convertaninttoabytearrayanddisplay.htm
Integer to Byte - Visual Basic .NET answers
http://bytes.com/topic/visual-basic-net/answers/349731-integer-byte
How to: Convert a byte Array to an int (C# Programming Guide)
http://msdn.microsoft.com/en-us/library/bb384066.aspx
As Nubsis says, BitConverter is appropriate but has no guaranteed endianness.
I have an EndianBitConverter class in MiscUtil which allows you to specify the endianness. Of course, if you only want to do this for a single data type (int) you could just write the code by hand.
BinaryWriter is another option, and this does guarantee little endianness. (Again, MiscUtil has an EndianBinaryWriter if you want other options.)
To convert to a byte[]:
BitConverter.GetBytes(int)
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
To convert back to an int:
BitConverter.ToInt32(byteArray, offset)
http://msdn.microsoft.com/en-us/library/system.bitconverter.toint32.aspx
I'm not sure about Lua though.
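A quick round-trip sketch with those two calls (the byte order follows the local machine on both sides):

byte[] bytes = BitConverter.GetBytes(12345);     // 4 bytes
int roundTrip = BitConverter.ToInt32(bytes, 0);  // 12345 again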
If you are concerned about endianness use Jon Skeet's EndianBitConverter. I've used it and it works seamlessly.
C# provides its own implementation of htons and ntohs as:
System.Net.IPAddress.HostToNetworkOrder()
System.Net.IPAddress.NetworkToHostOrder()
But they only work on the signed types Int16, Int32, and Int64, which means you'll probably end up doing a lot of unnecessary casting to make them work, and if you're using the highest-order bit for anything other than the sign of the integer, you're screwed. Been there, done that. ::tsk:: ::tsk:: Microsoft for not providing better endianness conversion support in .NET.
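A small sketch of the casting this forces on you (HostToNetworkOrder/NetworkToHostOrder are the real APIs; the ushort value is just an example):

using System.Net;

ushort port = 5255;
short wire = IPAddress.HostToNetworkOrder((short)port);    // cast needed: no ushort overload
ushort back = (ushort)IPAddress.NetworkToHostOrder(wire);  // and cast back on the way in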
I am trying to parse some output data from and PBX and I have found something that I can't really figure out.
In the documentation it says the following
Information for type of call and feature. Eight character for ’status information 3’ with following ASCII values in hexadecimal notation.
1. Character
Bit7 Incoming call
Bit6 Outgoing call
Bit5 Internal call
Bit4 CN call
2. Character
Bit3 Transferred call (transferring party inside)
Bit2 CN-transferred call (transferring party outside)
Bit1
Bit0
Any ideas how to interpret this? I have no raw data at the time to match against but I still need to figure it out.
Probably you'll receive two characters (hex digits: 0-9, A-F). The first digit represents the hex value of the most significant 4 bits, the next digit the least significant 4 bits.
Example:
You will probably receive something like the string "7C" as hex representation of the bitmap: 01111100.
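A sketch of decoding such a character pair, assuming you receive the string "7C" from the example:

byte status = Convert.ToByte("7C", 16);        // 0x7C = binary 0111 1100
bool incoming     = (status & 0x80) != 0;      // Bit7
bool outgoing     = (status & 0x40) != 0;      // Bit6
bool internalCall = (status & 0x20) != 0;      // Bit5
bool cnCall       = (status & 0x10) != 0;      // Bit4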
Eight character for ’status information 3’ with following ASCII values in hexadecimal notation.
I think this means the following.
You will get 8 bytes - one byte per line, I guess.
It is just the wrong term. They mean two hex digits per byte but call them characters.
So it is just a byte with bit flags - or more precisely an array of eight such bytes.
Bit
7 incoming
6 outgoing
5 internal
4 CN
3 transfered
2 CN transfered
1 unused?
0 unused?
You could map this to an enum.
[Flags]
public enum CallInformation : byte
{
    Incoming = 128,
    Outgoing = 64,
    Internal = 32,
    CN = 16,
    Transferred = 8,
    CNTransferred = 4,
    Undefined = 0
}
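Used together with hex parsing, that could look like this (a sketch, reusing the "7C" example from above):

var info = (CallInformation)Convert.ToByte("7C", 16);
bool isInternal = info.HasFlag(CallInformation.Internal);   // true for 0x7C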
Very hard without data. I'd guess that you will get two bytes (two ASCII characters), and need to pick them apart at the bit level.
For instance, if the first character is 'A', you will need to look up its character code (65, or hex 0x41), and then look at the bits. Of course the bits are the same regardless of decimal or hex, but it's easier to do by hand in hex. 0x41 has bit 6 and bit 0 set, so that would be an "outgoing call". Bit 0 seems undocumented.
I'm not sure why it looks as if that would require two characters; it's only eight bits documented.