I'm trying to convert a byte array into a hexadecimal value using the BitConverter class.
long hexValue = 0X780B13436587;
byte[] byteArray = BitConverter.GetBytes ( hexValue );
string hexResult = BitConverter.ToString ( byteArray );
Now if I execute the above code line by line, this is what I see.
I thought the hexResult string would be the same as hexValue (i.e. 780B13436587h), but what I get is different. Am I missing something? Correct me if I'm wrong.
Thanks!
Endianness.
BitConverter uses CPU-endianness, which for most people means: little-endian. When humans write numbers, we tend to write big-endian (broadly speaking: you write the thousands, then hundreds, then tens, then the digits). For a CPU, big-endian means that the most-significant byte is first and the least-significant byte is last. However, unless you're using an Itanium, your CPU is probably little-endian, which means that the most-significant byte is last, and the least-significant byte is first. The CPU is implemented such that this doesn't matter unless you are peeking inside raw memory - it will ensure that numeric and binary arithmetic still works the way you expect. However, BitConverter works by peeking inside raw memory - hence you see the reversed data.
If you want the value in big-endian format, then you'll need to:
do it manually in big-endian order
check the BitConverter.IsLittleEndian value, and if true:
either reverse the input bytes
or reverse the output
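Putting that together, a minimal sketch using the value from the question:

long hexValue = 0x780B13436587;
byte[] byteArray = BitConverter.GetBytes(hexValue);

// BitConverter emitted the bytes in CPU order; flip them if that was little-endian
if (BitConverter.IsLittleEndian)
{
    Array.Reverse(byteArray);
}

// Prints 00-00-78-0B-13-43-65-87 (most-significant byte first)
Console.WriteLine(BitConverter.ToString(byteArray));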
If you look closely, the bytes in the output from BitConverter are reversed.
To get the hex-string for a number, you use the Convert class:
Convert.ToString(hexValue, 16);
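For example, with the value from the question, this prints the digits in the order you'd expect (lowercase, no 0x prefix):

Console.WriteLine(Convert.ToString(hexValue, 16)); // 780b13436587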
It is the same number but reversed.
BitConverter.ToString can return the string representation in reversed order:
http://msdn.microsoft.com/en-us/library/3a733s97(v=vs.110).aspx
"All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian."
Related
In C#, byte is the data type for 8-bit unsigned integers, so a byte[] should be an array of integers that are between 0 and 255, just like a char[] is an array of characters.
But most of time when I encounter byte[], I see byte[] is used as a contiguous chunk of memory for storing raw representation of data.
How do these two relate to each other?
thanks
Well, a byte as a datatype is exactly what you already said: an unsigned integer between 0 and 255. Furthermore, this type needs exactly - believe it or not - one byte of your memory, hence the name. This is why most readers that read byte by byte store that information in a structure that exactly fits the size of a byte: the byte datatype.
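To illustrate how the two views relate, here is a small sketch (the byte values are just an example): the same byte[] can be read as a list of small unsigned integers, or treated as the raw memory behind some other kind of data:

byte[] raw = { 0x48, 0x69, 0x21, 0x00 };

// View 1: an array of integers between 0 and 255
Console.WriteLine(raw[0]); // 72

// View 2: the same bytes as the raw representation of other data
Console.WriteLine(System.Text.Encoding.ASCII.GetString(raw, 0, 3)); // Hi!
Console.WriteLine(BitConverter.ToInt32(raw, 0)); // 2189640 on a little-endian machine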
Background
First of all, I have some hexadecimal data... 0x3AD3FFD6. I have chosen to represent this data as an array of bytes as follows:
byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };
I attempt to convert this array of bytes into its single-precision floating point value by executing the following code:
float floatNumber = 0;
floatNumber = BitConverter.ToSingle(numBytes, 0);
I have calculated this online using this IEEE 754 Converter and got the following result:
0.0016174268
I would expect the output of the C# code to produce the same thing, but instead I am getting something like...
-1.406E+14
Question
Can anybody explain what is going on here?
The bytes are in the wrong order. BitConverter uses the endianness of the underlying system (the computer architecture), so always make sure to use the right endianness.
Quick Answer: You've got the order of the bytes in your numBytes array backwards.
Since you're programming in C#, I assume you are running on an Intel processor, and Intel processors are little-endian; that is, they store (and expect) the least significant byte first. In your numBytes array you are putting the most significant byte first.
BitConverter doesn't so much convert byte array data as interpret it as another base data type. Think of physical memory holding a byte array:
b0 | b1 | b2 | b3.
To interpret that byte array as a single precision float, one must know the endian of the machine, i.e. if the LSByte is stored first or last. It may seem natural that the LSByte comes last because many of us read that way, but for little endian (Intel) processors, that's incorrect.
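So, assuming the bytes really are meant to be read in the big-endian order they were written, one fix is to reverse the array on little-endian machines before handing it to BitConverter:

byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 }; // big-endian, as written

if (BitConverter.IsLittleEndian)
{
    Array.Reverse(numBytes); // BitConverter wants CPU (little-endian) order here
}

float floatNumber = BitConverter.ToSingle(numBytes, 0);
Console.WriteLine(floatNumber); // ~0.0016174268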
I'm trying to write an index file that follows the format of a preexisting (and immutable) text file.
The file is fixed length, with 11 bytes of string (in ASCII) followed by 4 bytes of long for a total of 15 bytes per line.
Perhaps I'm being a bit dim, but is there a simple way to do this? I get the feeling I need to open two streams to write one line - one for the string and one for the bytes - but that feels wrong.
Any hints?
You can use BitConverter to convert between an int/long and an array of bytes. This way you would be able to write eleven bytes followed by four bytes, followed by eleven more bytes, and so on.
byte[] intBytes = BitConverter.GetBytes(intValue); // returns 4-byte array
Converting to bytes: BitConverter.GetBytes(int).
Converting back to int: BitConverter.ToInt32(byte[], int)
If you are developing a cross-platform solution, keep in mind the following note from the documentation (thanks to uriDium for the comment):
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
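As a sketch of the single-stream idea (the file name, key, and value here are placeholders, and the record uses a 4-byte int as suggested above): a single stream is enough, because both halves of a record are ultimately just bytes:

using System.IO;
using System.Text;

string key = "INDEX-00001"; // must be exactly 11 ASCII characters
int value = 12345;

using (FileStream stream = File.Open("index.dat", FileMode.Append))
{
    byte[] keyBytes = Encoding.ASCII.GetBytes(key);    // 11 bytes
    byte[] valueBytes = BitConverter.GetBytes(value);  // 4 bytes, CPU endianness

    stream.Write(keyBytes, 0, keyBytes.Length);
    stream.Write(valueBytes, 0, valueBytes.Length);    // 15 bytes per record
}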
I currently have the following:
N.Sockets.UdpClient UD;
UD = new N.Sockets.UdpClient();
UD.Connect("xxx.xxx.xxx.xxx", 5255);
UD.Send( data, data.Length );
How would I send data in hex? I cannot just save it straight into a byte array.
Hex is just an encoding. It's simply a way of representing a number. The computer works with bits and bytes only -- it has no notion of "hex".
So any number, whether represented in hex or decimal or binary, can be encoded into a series of bytes:
var data = new byte[] { 0xFF };
And any hex string can be converted into a number (using, e.g., int.Parse with NumberStyles.HexNumber, or Convert.ToInt32(hexString, 16)).
Things get more interesting when a number exceeds one byte: Then there has to be an agreement of how many bytes will be used to represent the number, and the order they should be in.
In C#, ints are 4 bytes. Internally, depending on the endianness of the CPU, the most significant byte (highest-valued digits) might be stored first (big-endian) or last (little-endian). Typically, big-endian is used as the standard for communication over the network (remember the sender and receiver might have CPUs with different endianness). But, since you are sending the raw bytes manually, I'll assume you are also reading the raw bytes manually on the other end; if that's the case, you are of course free to use any arbitrary format you like, providing that the client can understand that format unambiguously.
To encode an int in big-endian order, you can do:
int num = unchecked((int)0xdeadbeef); // the literal is a uint, so cast it down explicitly
var unum = (uint)num; // convert to uint for correct >> with negative numbers
var data = new[] {
    (byte)(unum >> 24), // most significant byte first
    (byte)(unum >> 16),
    (byte)(unum >> 8),
    (byte)(unum)
};
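On the receiving end, the matching decode (a sketch, assuming the same four-byte big-endian agreement) just reassembles the bytes in the same order:

int num = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3];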
Be aware that some packets might never reach the client (this is the main practical difference between TCP and UDP), possibly leading to misinterpretation of the bytes. You should take steps to improve the robustness of your message-sending (e.g. by adding a checksum, and ignoring values whose checksums are invalid or missing).
Motivation:
I would like to convert hashes (MD5/SHA1 etc) into decimal integers for the purpose of making barcodes in Code128C.
For simplicity, I prefer all the resulting (large) numbers to be positive.
I am able to convert byte[] to BigInteger in C#...
Sample from what I have so far:
byte[] data;
byte[] result;
BigInteger biResult;

result = shaM.ComputeHash(data); // shaM is a hash algorithm instance, e.g. SHA1
biResult = new BigInteger(result);
But (rusty CS here) am I correct that a byte array can always be interpreted in two ways:
(A): as a signed number
(B): as an unsigned number
Is it possible to make an UNSIGNED BigInteger from a byte[] in C#?
Should I simply prepend a 0x00 (zero byte) to the front of the byte[]?
EDIT:
Thank you to AakashM, Jon and Adam Robinson, appending a zero byte achieved what I needed.
EDIT2:
The main thing I should have done was to read the detailed doc of the BigInteger(byte[]) constructor, then I would have seen the sections about how to restrict to positive numbers by appending the zero byte.
The remarks for the BigInteger constructor state that you can make sure any BigInteger created from a byte[] is non-negative if you append a 00 byte to the end of the array before calling the constructor.
Note: the BigInteger constructor expects the array to be in little-endian order. Keep that in mind if you expect the resulting BigInteger to have a particular value.
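A tiny example of what that little-endian rule means in practice:

var one = new BigInteger(new byte[] { 0x01, 0x00 });          // 1   (low-order byte comes first)
var twoFiftySix = new BigInteger(new byte[] { 0x00, 0x01 });  // 256 (here 0x01 is the high-order byte)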
Since .NET Core 2.1, BigInteger has a constructor with an optional parameter isUnsigned:
public BigInteger (ReadOnlySpan<byte> value, bool isUnsigned = false, bool isBigEndian = false);
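So, with that constructor, the hash from the question can be turned into a non-negative BigInteger directly (using shaM and data as in the question's sample):

byte[] result = shaM.ComputeHash(data);
var biResult = new BigInteger(result, isUnsigned: true); // never negative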
Examining the documentation for the relevant BigInteger constructor, we see:
The individual bytes in the value array should be in little-endian order, from lowest-order byte to highest-order byte.
[...]
The constructor expects positive values in the byte array to use sign-and-magnitude representation, and negative values to use two's complement representation. In other words, if the highest-order bit of the highest-order byte in value is set, the resulting BigInteger value is negative. Depending on the source of the byte array, this may cause a positive value to be misinterpreted as a negative value.
[...]
To prevent positive values from being misinterpreted as negative values, you can add a zero-byte value to the end of the array.
As other answers have pointed out, you should append a 00 byte to the end of the array to ensure the resulting BigInteger is positive.
According to the BigInteger Structure (System.Numerics) MSDN documentation:
To prevent the BigInteger(Byte[]) constructor from confusing the two's complement representation of a negative value with the sign and magnitude representation of a positive value, positive values in which the most significant bit of the last byte in the byte array would ordinarily be set should include an additional byte whose value is 0.
Here's code to do it:
byte[] byteArray;
// ...
// Concat and ToArray require a using System.Linq; directive
var bigInteger = new BigInteger(byteArray.Concat(new byte[] { 0 }).ToArray());
But (rusty CS here) am I correct that a byte array can always be interpreted in two ways: A: as a signed number B: as an unsigned number
What's more correct is that all numbers (by virtue of being stored in the computer) are basically a series of bytes, which is what a byte array is. It's not true to say that a byte array can always be interpreted as a signed or unsigned version of a particular numeric type, as not all numeric types have signed and unsigned versions. Floating point types generally only have signed versions (there's no udouble or ufloat), and, in this particular instance, there is no unsigned version of BigInteger.
So, in other words, no, it's not possible, but since BigInteger can represent an arbitrarily large integer value, you're not losing any range by virtue of its being signed.
As to your second question, you would need to append 0x00 to the end of the array, as the BigInteger constructor parses the values in little-endian byte order.