I want to convert a byte to an sbyte, without changing any bits.
byte b = 0x84;
sbyte sb = unchecked((sbyte)b);
Console.WriteLine("0x" + Convert.ToString(b, 16));
Console.WriteLine("0x" + Convert.ToString(sb, 16));
The result of this will be:
0x84
0xff84
I understand what the result means and why it happens. However, I cannot seem to find out what I can do to avoid this behaviour. How can I copy the actual binary value of a byte and get it inside an sbyte?
The bits are not changing between b and sb at all. This behavior comes from Convert.ToString(): there just isn't an overload of Convert.ToString() that takes an sbyte and a base, so the closest match, Convert.ToString(Int16, Int32), is chosen and sb is sign-extended to 16 bits.
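If you still want to use Convert.ToString, one workaround (just a sketch) is to cast the value back to byte first, so the Convert.ToString(Byte, Int32) overload is picked and no sign extension happens:
byte b = 0x84;
sbyte sb = unchecked((sbyte)b);

// Casting back to byte selects the Byte overload, so no sign extension occurs.
Console.WriteLine("0x" + Convert.ToString(unchecked((byte)sb), 16)); // 0x84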
Using ToString generally produces less surprising results and doesn't involve unexpected conversions (like the sbyte -> short widening described in shf301's answer).
byte b = 0x84;
sbyte sb = unchecked((sbyte)b);
Console.WriteLine("0x" + b.ToString("x"));
Console.WriteLine("0x" + sb.ToString("x"));
Or just use format directly:
String.Format("0x{0:x}",((unchecked ((sbyte)0x84))))
Related
How can I convert a numerical value (e.g. float, short, int, ...) to several byte values without having to allocate memory on the heap for an array, like System.BitConverter.GetBytes does?
Something like this:
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    // demo code, just to show which results I need
    var bytes = System.BitConverter.GetBytes(input);
    byte0 = bytes[0];
    byte1 = bytes[1];
}
Note: I'm restricted to .NET Framework 4.8 and therefore (I think) C# 7.3.
Just cast and shift?
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    byte0 = (byte)input;
    byte1 = (byte)(input >> 8);
}
Note that you can simply reverse the order for different endianness.
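As a quick sanity check, you can put the two bytes back together and compare against the input; a minimal sketch with a hypothetical FromBytes helper (low byte first, matching the code above):
public static short FromBytes(byte byte0, byte byte1)
{
    // byte0 is the low byte, byte1 the high byte.
    return (short)(byte0 | (byte1 << 8));
}

// GetBytes(0x1234, out var b0, out var b1);  // b0 == 0x34, b1 == 0x12
// short roundTrip = FromBytes(b0, b1);       // roundTrip == 0x1234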
Note that if you are using a "checked" context, you would need to mask too:
public static void GetBytes(short input, out byte byte0, out byte byte1)
{
    byte0 = (byte)(input & 0xFF);
    byte1 = (byte)((input >> 8) & 0xFF);
}
(in an unchecked context, the cast to byte is sufficient by itself - additional bits are discarded)
If you are allowed to use unsafe code, then for non-integral value types (such as float, double and decimal) the fastest way is to use a pointer.
For example, for a double:
double x = 123.456;
unsafe
{
    byte* p = (byte*) &x;

    // Doubles have 8 bytes.
    byte b0 = *p++;
    byte b1 = *p++;
    byte b2 = *p++;
    byte b3 = *p++;
    byte b4 = *p++;
    byte b5 = *p++;
    byte b6 = *p++;
    byte b7 = *p;
}
You should be able to see how to modify this for other types.
For integral types such as short, int and long, you could use the approach given in the other answer from Marc Gravell, but the pointer approach above works for them as well.
IMPORTANT: When using the pointer approach, endianness is significant! The order in which the bytes are assigned is the order in which they are stored in memory, which can differ by processor architecture.
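For example, the same pointer technique applied to an int (a sketch; the order of b0..b3 below follows the machine's byte order, which you can check at runtime with BitConverter.IsLittleEndian):
int value = 0x12345678;
unsafe
{
    byte* p = (byte*) &value;

    // On a little-endian machine: b0 == 0x78, b1 == 0x56, b2 == 0x34, b3 == 0x12.
    byte b0 = *p++;
    byte b1 = *p++;
    byte b2 = *p++;
    byte b3 = *p;
}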
Is there any case when this test could fail? Would it fail on a big-endian machine? Is ByteArrayToHexString little-endian, and why (the chars seem to be written from left to right, so it must be big-endian)?
[Fact]
public void ToMd5Bytes2_ValidValue_Converted()
{
    // Arrange
    // Act
    var bytes = new byte[] { 16, 171, 205, 239 };
    var direct = ByteArrayToHexString(bytes);
    var bitConverter = BitConverter.ToString(bytes).Replace("-", string.Empty);
    var convert = Convert.ToHexString(bytes);

    // Assert
    Assert.Equal(direct, bitConverter);
    Assert.Equal(bitConverter, convert);
}
public static string ByteArrayToHexString(byte[] Bytes)
{
    StringBuilder Result = new StringBuilder(Bytes.Length * 2);
    string HexAlphabet = "0123456789ABCDEF";

    foreach (byte B in Bytes)
    {
        Result.Append(HexAlphabet[(int)(B >> 4)]);
        Result.Append(HexAlphabet[(int)(B & 0xF)]);
    }

    return Result.ToString();
}
Only multi-byte values are affected by endianness, like Int32.
Arrays of bytes are not - they're already a defined sequence of single bytes. Of course, it matters how you retrieved this byte array: if it is the result of converting a multi-byte value, you must have done that conversion with the appropriate endianness. Otherwise you'd have to reverse each slice of your array which originally represented a multi-byte value.
Also, endianness does not happen on the bit level, a misconception I see every now and then.
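To make that concrete, a small sketch: hex-encoding a byte array gives the same string on every platform, while the bytes you get out of a multi-byte value depend on the machine's byte order:
var raw = new byte[] { 0x12, 0x34 };
Console.WriteLine(Convert.ToHexString(raw));        // "1234" on every platform

int value = 0x1234;
byte[] fromInt = BitConverter.GetBytes(value);
// { 0x34, 0x12, 0x00, 0x00 } on a little-endian machine,
// { 0x00, 0x00, 0x12, 0x34 } on a big-endian one.
Console.WriteLine(Convert.ToHexString(fromInt));
Console.WriteLine(BitConverter.IsLittleEndian);     // tells you which case applies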
In the comments, you mentioned the remarks sections of the BitConverter.ToString documentation:
All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian.
Looking at the reference source, I do not see where endianness is having an effect on its operation on byte arrays. This comment is either outdated or misleading, and I've opened an issue on it.
Say you have a byte variable, and you assign it the decimal value of 102.
(byte myByteVariable = 102;)
Why is that possible? Shouldn't you have to provide an actual 8 bit value instead to avoid confusion? And how do I set a byte value by supplying bits instead?
And how do I set a byte value by supplying bits instead?
If you are setting the byte value explicitly, you can do so using the prefix 0x to indicate that you are doing so in hexadecimal, as an encoding for groups of 4 bits. E.g. the hex value of decimal 102 is 0x66, which is equivalent to 0110 0110.
But for supplying bits directly, Marc Gravell provides an option here for converting from binary to int using the Convert class, and similarly you can convert the string representation of a binary value into a byte using the Convert.ToByte method as:
byte b = Convert.ToByte("01100110", 2);
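Going the other way, Convert.ToString with base 2 gives the bit pattern back (leading zeros are dropped, so pad if you want all eight positions); a quick sketch:
byte b = Convert.ToByte("01100110", 2);
Console.WriteLine(b);                                      // 102
Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0')); // 01100110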
byte is a numeric type that represents a number, not a bit field. So 102 is a normal value for it, because it's in the range of byte values (which is [0;255]). If you want to manipulate the bits, consider using BitArray or BitVector32 (see the sketch below).
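A minimal BitArray sketch, assuming the goal is to read or flip individual bits of the value 102:
var bits = new System.Collections.BitArray(new byte[] { 102 });   // 102 == 0110 0110
Console.WriteLine(bits[0]);   // False - least significant bit
Console.WriteLine(bits[1]);   // True
bits[0] = true;               // the byte would now be 103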
A byte can store 8 bits (that is values from 0 to 255). A short stores 16 bits, an int 32, etc. Each integer type in C# simply allows the storage of a wider range of numbers. C# allows you to assign a whole number to any of these types.
In order to set each bit individually, you will need to use bitwise operators.
I actually wrote a library a while ago to handle this, allowing you to set each bit of the byte using data[x]. It is very similar to the BitArray class (which I'm not sure I knew about when I made it).
The main idea of it is:
private byte data;

public void SetBit(int index, bool value)
{
    if (value)
        data = (byte)(data | (1 << index));
    else
        data = (byte)(data & ~(1 << index));
}

public bool GetBit(int index)
{
    return ((data & (1 << index)) != 0);
}
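Usage would look something like this (a sketch; ByteBits is a hypothetical wrapper class containing the field and methods above):
var bits = new ByteBits();           // hypothetical class wrapping the code above
bits.SetBit(1, true);
bits.SetBit(2, true);
bits.SetBit(5, true);
bits.SetBit(6, true);
Console.WriteLine(bits.GetBit(1));   // True
// The underlying byte is now 0110 0110, i.e. 102.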
I have an integer value. I want to convert it to the Base 64 value. I tried the following code.
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It's giving the output as "ewAAAA==", with 8 characters.
I convert the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of the previous code is 7B.
If we do this in maths, the outcomes are the same. How does it differ? How can I get the same value in base 64 as well? (I found the above base-64 conversion on the internet.)
The question is rather unclear... "How does it differ?" - well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is doing an arithmetic representation; one is a byte serialization format - very different
one is using little-endian arithmetic (assuming a standard CPU), the other is using big-endian arithmetic (see the sketch just below)
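For example, here is a sketch of where "ewAAAA==" comes from; BitConverter serializes the int in the machine's byte order:
byte[] b = BitConverter.GetBytes(123);
// On a little-endian CPU, b is { 0x7B, 0x00, 0x00, 0x00 }: the low-order byte 123 comes
// first, followed by three zero bytes, and those four bytes are what
// Convert.ToBase64String encodes as "ewAAAA==".
Console.WriteLine(BitConverter.ToString(b));   // 7B-00-00-00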
To get a comparable base-64 result, you probably need to code it manually (since Convert only supports bases 2, 8, 10, and 16 for arithmetic conversions). Perhaps (note: not optimized):
static void Main()
{
    string b64 = ConvertToBase64Arithmetic(123);
}

// uint because I don't care to worry about sign
static string ConvertToBase64Arithmetic(uint i)
{
    const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    StringBuilder sb = new StringBuilder();
    do
    {
        sb.Insert(0, alphabet[(int)(i % 64)]);
        i = i / 64;
    } while (i != 0);
    return sb.ToString();
}
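For what it's worth, running that sketch with 123 gives "B7": 'B' is digit 1 and '7' is digit 59 in the alphabet above, and 1 * 64 + 59 == 123. That is the arithmetic base-64 counterpart of the hex "7B", quite different from the serialized "ewAAAA==".
Console.WriteLine(ConvertToBase64Arithmetic(123));   // B7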
I am trying to convert some VB6 code to C# and I am struggling a bit.
I have looked at this page below and others similar, but am still stumped.
Why use hex?
VB6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
    Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code - it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (i = 1; i <= 8; i++)
{
    Cal = Cal + Convert.Hex(Xor1 ^ Xor2);
}
The errors are on this line:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex to convert would be appreciated.
I suspect it's my lack of understanding of the .Hex on line 3 above and the "&H" on lines 1/2 above.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary, using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there's no need to prefix the value with "&H". The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: it converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex, simply does not exist.
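Putting the two points together, the loop from the question could look roughly like this (a sketch; Xor1 and Xor2 would of course come from wherever the original VB6 code reads them):
string Cal = "";
int Xor1 = 0;
int Xor2 = 0;

for (int i = 1; i <= 8; i++)
{
    // ToString("X") is the C# counterpart of VB6's Hex() function.
    Cal = Cal + (Xor1 ^ Xor2).ToString("X");
}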