I have an integer value and I want to convert it to Base 64. I tried the following code:
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It gives the output "ewAAAA==", which is 8 characters.
I converted the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of the previous code is 7B.
If we do the conversion mathematically, the outcomes should be the same. How do they differ? And how can I get the same value in Base 64 as well? (I found the above Base 64 conversion on the internet.)
The question is rather unclear... "How do they differ?" Well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is an arithmetic representation; the other is a byte-serialization format - very different
one uses little-endian byte order (assuming a standard CPU), the other writes the most significant digit first
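To see the last two points concretely, here is a small demo (my illustration, not part of the original answer) of what each conversion actually operates on:
using System;

class Demo
{
    static void Main()
    {
        // BitConverter serializes all four bytes of the Int32,
        // least-significant byte first on a little-endian CPU:
        byte[] b = BitConverter.GetBytes(123);
        Console.WriteLine(BitConverter.ToString(b));  // 7B-00-00-00

        // Base-64 then encodes those 4 bytes as 8 characters:
        Console.WriteLine(Convert.ToBase64String(b)); // ewAAAA==

        // ToString("X") is arithmetic: most significant digit
        // first, with leading zeros dropped entirely:
        Console.WriteLine(123.ToString("X"));         // 7B
    }
}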
To get a comparable base-64 result, you probably need to code it manually (since Convert only supports bases 2, 8, 10, and 16 for arithmetic conversions). Perhaps (note: not optimized):
using System;
using System.Text;

static void Main()
{
    string b64 = ConvertToBase64Arithmetic(123);
    Console.WriteLine(b64); // prints "B7": 123 = 1 * 64 + 59, i.e. 'B' then '7'
}

// uint because I don't care to worry about sign
static string ConvertToBase64Arithmetic(uint i)
{
    const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    StringBuilder sb = new StringBuilder();
    do
    {
        // take the least significant base-64 digit and prepend it
        sb.Insert(0, alphabet[(int)(i % 64)]);
        i /= 64;
    } while (i != 0);
    return sb.ToString();
}
Is there any case in which this test could fail? Would it fail on a big-endian machine? Is ByteArrayToHexString little-endian, and why? (The characters are written from left to right, so it seems it must be big-endian.)
[Fact]
public void ToMd5Bytes2_ValidValue_Converted()
{
// Arrange
// Act
var bytes = new byte[] {16, 171, 205, 239};
var direct = ByteArrayToHexString(bytes);
var bitConverter = BitConverter.ToString(bytes).Replace("-", string.Empty);
var convert = Convert.ToHexString(bytes);
// Assert
Assert.Equal(direct, bitConverter);
Assert.Equal(bitConverter, convert);
}
public static string ByteArrayToHexString(byte[] Bytes)
{
StringBuilder Result = new StringBuilder(Bytes.Length * 2);
string HexAlphabet = "0123456789ABCDEF";
foreach (byte B in Bytes)
{
Result.Append(HexAlphabet[(int)(B >> 4)]);
Result.Append(HexAlphabet[(int)(B & 0xF)]);
}
return Result.ToString();
}
Only multi-byte values, like an Int32, are affected by endianness.
Arrays of bytes are not - they're already a defined sequence of single bytes. Of course, it matters how you obtained the byte array: if it is the result of converting a multi-byte value, that conversion must have been done with the appropriate endianness. Otherwise you'd have to reverse each slice of your array that originally represented a multi-byte value.
Also, endianness does not happen on the bit level, a misconception I see every now and then.
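A quick sketch (my example, not part of the original answer) showing where endianness does and does not enter:
using System;

class EndiannessDemo
{
    static void Main()
    {
        // Endianness applies when a multi-byte value is serialized:
        byte[] fromInt = BitConverter.GetBytes(0x12345678);
        Console.WriteLine(BitConverter.IsLittleEndian);    // True on x86/x64 and most ARM
        Console.WriteLine(BitConverter.ToString(fromInt)); // 78-56-34-12 on little-endian

        // A byte array is already an ordered sequence of single bytes;
        // formatting it as hex involves no byte-order decision at all:
        byte[] raw = { 0x12, 0x34, 0x56, 0x78 };
        Console.WriteLine(BitConverter.ToString(raw));     // 12-34-56-78 on any machine
    }
}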
In the comments, you mentioned the remarks sections of the BitConverter.ToString documentation:
All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian.
Looking at the reference source, I do not see where endianness is having an effect on its operation on byte arrays. This comment is either outdated or misleading, and I've opened an issue on it.
A MAC address (Wikipedia article) is typically formatted as six two-digit hexadecimal numbers separated by colons, like 14:10:9F:D4:04:1A.
In C#, it can be passed around as a string, while some libraries manipulate it as a UInt64 (ulong).
Question
What is the relationship between the string, the hex representation, and the ulong, and how can I go from one to the other?
MAC Address is HEX
As correctly described here:
The MAC address is very nearly a hex string. In fact, if you remove the ':' characters, you have a hex string.
14:10:9F:D4:04:1A literally means 0x14109FD4041A, only easier to read.
string to UInt64 and back
A MAC address is made up of 6 bytes (48 bits), fitting into a UInt64 with 2 bytes to spare. Leaving aside the MSB-vs-LSB ordering complication, you can use the two methods below:
Format into a string
using System;
using System.Linq;
public static string MAC802DOT3(ulong macAddress)
{
    // Serialize the ulong big-endian as hex pairs; the first two pairs
    // ("00:00:") come from the two unused high bytes, so Substring(6)
    // drops them.
    return string.Join(":",
        BitConverter.GetBytes(macAddress).Reverse()
        .Select(b => b.ToString("X2"))).Substring(6);
}
// usage: var s = MAC802DOT3(0x14109fd4041a);
// var s = MAC802DOT3(22061633504282);
// s becomes "14:10:9F:D4:04:1A"
Convert to an integer
public static ulong MAC802DOT3(string macAddress)
{
string hex = macAddress.Replace(":", "");
return Convert.ToUInt64(hex, 16);
}
// usage: var m = MAC802DOT3("14:10:9F:D4:04:1A");
// m becomes 22061633504282 (0x14109fd4041a)
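As a quick sanity check (my addition, assuming both overloads are in scope), the two methods round-trip:
ulong m = MAC802DOT3("14:10:9F:D4:04:1A");   // 22061633504282 (0x14109FD4041A)
string s = MAC802DOT3(m);                    // "14:10:9F:D4:04:1A"
Console.WriteLine(s == "14:10:9F:D4:04:1A"); // True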
How can I get the numeric representation of a string in C#? To be clear, I do not want the address of a pointer, and I do not want to parse an int from a string; I want the numeric representation of the value of the string.
The reason I want this is because I am trying to generate a hash code based on a file path (path) and a number (line). I essentially want to do this:
String path;
int line;
public override int GetHashCode() {
return line ^ (int)path;
}
I'm open to suggestions for a better method, but because I'm overriding the Equals() method for the type I'm creating (to check that both objects' path and line are the same), I need to reflect that in the override of GetHashCode.
Edit: Obviously this method is bad; that has been pointed out to me and I get it. The answer below is perfect. However, it does not entirely answer my question. I am still curious whether there is a simple way to get an integer representation of the value of a string. I know that I could iterate through the string, append the binary representation of each char to a StringBuilder, and convert that string to an int, but is there a cleaner way?
Edit 2: I'm aware that this is a strange and very limited question. Converting this way limits the size of the string to 2 chars (two 16-bit chars = one 32-bit int), but it was the concept I was getting at, not the practicality. Essentially, the method works, regardless of how obscure and useless it may be.
If all you want is a hash code, why not get the hash code of the string too? Every object in .NET has a GetHashCode() method:
public override int GetHashCode() {
return line ^ path.GetHashCode();
}
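If you want fewer collisions than a plain XOR gives (for example, XOR sends any object where line == path.GetHashCode() to zero), a common idiom (my suggestion, not part of the original answer) is to combine the fields with prime multipliers:
public override int GetHashCode()
{
    unchecked // let the arithmetic wrap around instead of throwing
    {
        int hash = 17;
        hash = hash * 31 + line;
        hash = hash * 31 + (path?.GetHashCode() ?? 0);
        return hash;
    }
}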
For the purposes of GetHashCode, you should absolutely call GetHashCode. However, to answer the question as asked (after clarification in comments) here are two options, returning BigInteger (as otherwise you'd only get two characters in before probably overflowing):
static BigInteger ConvertToBigInteger(string input)
{
    // requires using System.Numerics and using System.Text
    byte[] bytes = Encoding.BigEndianUnicode.GetBytes(input);
    // The BigInteger byte[] constructor expects a little-endian byte array
    Array.Reverse(bytes);
    // Append a zero byte so the sign bit is never set and the result
    // is always non-negative (the constructor is signed)
    Array.Resize(ref bytes, bytes.Length + 1);
    return new BigInteger(bytes);
}
static BigInteger ConvertToBigInteger(string input)
{
BigInteger sum = 0;
foreach (char c in input)
{
sum = (sum << 16) + (int) c;
}
return sum;
}
(These two approaches give the same result; the first is more efficient, but the second is probably easier to understand.)
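For example (my illustration), "AB" consists of 'A' (0x0041) followed by 'B' (0x0042), so:
Console.WriteLine(ConvertToBigInteger("AB")); // 4259906, i.e. 0x00410042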
I am trying to convert some VB6 code to C# and I am struggling a bit.
I have looked at this page below and others similar, but am still stumped.
Why use hex?
vb6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code - it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (i = 1; i <= 8; i++)
{
Cal = Cal + Convert.Hex(Xor1 ^ Xor2);
}
The errors are:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex to convert would be appreciated.
I suspect it's my lack of understanding of the .Hex on line 3 above and the "&H" on lines 1/2 above.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary, using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there's no need to prefix the value with "&H". The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: it converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex, simply does not exist.
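Putting those pieces together, the loop from the question might translate to something like this (a sketch; Xor1 and Xor2 stand in for whatever values the original VB6 code computes):
using System.Text;

int Xor1 = 0, Xor2 = 0; // placeholders for the real values
var cal = new StringBuilder();
for (int i = 1; i <= 8; i++)
{
    cal.Append((Xor1 ^ Xor2).ToString("X")); // replaces VB6's Hex(...)
}
string Cal = cal.ToString();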
I am really stumped on this one. In C# there is a representation format for hexadecimal constants, as below:
uint a = 0xAF2323F5; // uint rather than int, since this value does not fit in an Int32
Is there a binary constants representation format?
Nope, no binary literals in C#. You can of course parse a string in binary format using Convert.ToInt32, but I don't think that would be a great solution.
int bin = Convert.ToInt32( "1010", 2 );
As of C# 7 you can represent a binary literal value in code:
private static void BinaryLiteralsFeature()
{
var employeeNumber = 0b00100010; //binary equivalent of whole number 34. Underlying data type defaults to System.Int32
Console.WriteLine(employeeNumber); //prints 34 on console.
long empNumberWithLongBackingType = 0b00100010; //here backing data type is long (System.Int64)
Console.WriteLine(empNumberWithLongBackingType); //prints 34 on console.
int employeeNumber_WithCapitalPrefix = 0B00100010; //0b and 0B prefixes are equivalent.
Console.WriteLine(employeeNumber_WithCapitalPrefix); //prints 34 on console.
}
Further information can be found here.
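Binary literals combine well with digit separators, which also arrived in C# 7.0:
int mask = 0b0010_0010;  // underscores just group the bits; still 34
Console.WriteLine(mask); // prints 34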
You could use an extension method:
public static int ToBinary(this string binary)
{
return Convert.ToInt32( binary, 2 );
}
However, whether this is wise I'll leave up to you (given the fact it will operate on any string).
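Usage would look like this (note that it throws a FormatException if the string contains anything other than binary digits):
int value = "1010".ToBinary(); // 10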
Since Visual Studio 2017, binary literals like 0b00001 are supported.